Peer review, stem cells and science
Scientific papers are peer reviewed, but despite this, fraud gets past. There have been a number of high-profile frauds discovered lately: the famous polywater case, the Indian paleontologist who salted his finds, the faked data on element 118 at Berkeley, and so on. Most of these frauds, though, involve manipulating data to support some prior conclusion. It seems that around 3-10% of scientific papers, at least in medical research, are fraudulent in one way or another.
Peer review is supposed to catch these errors, and it can do. But if the fraud is subtle, or the reviewer is sloppy, it can get past this process, as the recent stem cell debacle in South Korea demonstrates. After all, the journal involved was Science, a leading journal, not some tinpot low-impact one.
Why do researchers fabricate data? According to David Goodstein, there are three reasons: career pressure; knowing the answer ahead of time, and so saving time by making it up (arguably, Mendel did this); and problems with reproducibility. The last occurs when something like the Harvard Law of Animal Behavior is in play - "under carefully controlled experimental circumstances, an animal will behave as it damned well pleases." Multivariate systems are hard to hold in stasis except for the variables being tested. It's easier to just write down what you already know is true. Of course, this only works when it is true - Nature often surprises us, hence the need for experimentation.
I want to focus on the career aspects of this. According to a view of science I hold to, proposed by David Hull in his Science as a Process (1988), scientists are motivated by the "desire for conceptual fitness", a cultural and professional analogue of genetic fitness in the evolution of genes. Conceptual fitness results from researchers trying to maximise their professional careers, and it means that no matter whether they are nice or nasty, they will tend to "bet" on a strategy for career advancement.
There is a complex field of such strategies. One is to be a conservative late adopter. It is safer, and enables you to share a decreasing percentage of an increasing amount of conceptual credit of a successful research program. It's something many newly minted PhDs go for. You can always be radical later...
Another is to be a radical early adopter. Here the payoff is high, but the chances that you will get it are low, and so to be one of these you need to be really self-confident (a reason I never became a scientific researcher, apart from my lack of actual scientific education). REAs are often arrogant bastards. Hull calls them sons-of-bitches. And their opponents are likewise motivated to "get that son of a bitch". As Kissinger wrongly noted, academic politics are so vicious because the stakes are so low. The stakes actually aren't low (although to a man who bombed a noncombatant nation they may seem so) - scientists are fighting, metaphorically, for their survival (as scientists).
This explains why researchers might choose to fabricate data. It's a way of getting an edge over your competitors, no matter what the strategy. It reduces the resources and time required, both economic factors in science, and maximises the chances of getting credit. The risk of discovery is relatively low. We're just asking for trouble.
In the early days of science, scientists were "gentlemen" who valued their "standing" amongst the scientific community. Things haven't changed much (despite the changes in class and inclusions of gender and culture) - scientists still need to jealously guard their standing in their community, and fight to attain it (mutatis mutandis this also applies to any profession, particularly the academic ones). This is what keeps science honest, and enables progress to be made.
But aspects of science have become corrupted by other influences. The baleful influence of pharmaceutical companies and manufacturers of medical equipment, of tobacco funding and lobby groups in Congress and Parliament, of national institutions like educational bodies, of commercialisation and intellectual property rights of host institutions, of defence involvement in research, and so on, has made simple peer review less useful. Often the reviewers are people chosen for an editorial outcome, like rejection or publication. Moreover, the sheer number of journals, print and electronic, and the size of the scientific community itself, make thorough review hard.
But in my view the single greatest corrupting influence in science is the scarcity of funding from governments and the increase in paperwork and red tape. About fifty years ago, anecdotally, scientists did almost no paperwork and were given funds when they asked for it. A time of postwar reverence for science, scarcity of researchers, and increasing economic wealth in the West, meant that the only measure of success was how one was regarded in the field by one's peers. Now you must be assessed by citation indices, journal impact, granting body reviews, and so on.
This is an exercise in game theory. If you want to discourage fraud, you have to make it so that the payoffs can only be achieved honestly. A large part of this is to make the payoffs more directly offered by the professions concerned. But some of it has to be also making publishable research more difficult to achieve, and making careers depend not so much on the quantity of papers as on their quality. Therein lies the rub.
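The game-theoretic point can be sketched in a toy expected-payoff model. All the numbers here are illustrative assumptions, not empirical estimates; the point is only that when fabrication is cheap and detection rare, fraud "wins", and raising the detection rate or the penalty flips the incentive.

```python
def expected_payoff(benefit, cost_honest, cost_fraud, p_detect, penalty):
    """Compare the expected payoff of honest work against fabrication.

    All parameters are hypothetical, in arbitrary career-credit units:
    benefit     - credit from a published result
    cost_honest - cost of doing the experiment properly
    cost_fraud  - cost of faking it (usually much lower)
    p_detect    - probability the fabrication is discovered
    penalty     - cost of being caught (lost standing, career end)
    """
    honest = benefit - cost_honest
    fraud = (1 - p_detect) * benefit - cost_fraud - p_detect * penalty
    return honest, fraud

# With cheap fraud and a low detection rate, fabrication pays more:
honest, fraud = expected_payoff(benefit=10, cost_honest=8,
                                cost_fraud=1, p_detect=0.1, penalty=20)
# honest = 2.0; fraud = 0.9*10 - 1 - 0.1*20 = 6.0

# Raising the detection rate (better review, replication) flips it:
honest2, fraud2 = expected_payoff(benefit=10, cost_honest=8,
                                  cost_fraud=1, p_detect=0.5, penalty=20)
# fraud2 = 0.5*10 - 1 - 0.5*20 = -6.0, so honesty now pays
```

On this sketch, the two levers are exactly the ones argued for above: either make the honest payoff larger relative to sheer publication count, or make detection more likely.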
Scientists are, surprisingly, human, as Hull points out repeatedly. Personalities play a part, but one thing we have learned from biology over the past century is that traits, even psychological dispositions, are spread over distribution curves in any population (including scientists) you care to measure. There will be a small percentage of sociopaths amongst scientists as well as politicians. To continue to be successful, science has to rely not on personalities but on output, and that has to be measured in terms of addition to knowledge, not institutional or corporate, or even national, prestige.
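The point about distribution tails can be made concrete. Assuming, purely for illustration, that some disposition is normally distributed, the fraction of a population beyond a given number of standard deviations follows from the error function:

```python
import math

def tail_fraction(z):
    """Fraction of a normal population more than z standard
    deviations above the mean (the upper tail)."""
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# Roughly 2.3% of a normally distributed trait lies beyond +2 SD,
# and about 0.13% beyond +3 SD. Even a mostly honest population
# therefore contains a small but nonzero extreme tail.
two_sd = tail_fraction(2)    # ~0.0228
three_sd = tail_fraction(3)  # ~0.0013
```

Which distribution real dispositions follow is an empirical matter; the sketch only shows why a small extreme tail is to be expected in any large population.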
Late note: It seems Hwang faked the cloning of a human embryo from stem cells.
Late late note: Polywater was a mistake, not a fraud. Damn those eagle-eyed critics!