What To Do With People Who Commit Scientific Fraud?
Another story of apparent scientific fraud, this time the LaCour case, has been in the news.
Second, most fraudsters don't confess, nor are they subjected to any formal due process (Diederik Stapel is a notable exception, having done both). As outside observers of any given article, we are fundamentally unable to distinguish between reviewers who insist on more rigour because our work needs more rigour, and those who have missed the point completely; anyone who has had an article rejected by a journal that has also recently published some piece of "obvious" garbage will know this feeling (especially if our article was critical of that same garbage, and seems to be held to a totally different set of standards [PDF]).
Third, we --- society, the media, the general public, but also scientists among ourselves (I include myself in the set of "scientists" here mostly for syntactic convenience) --- lionize "brilliant" scientists when they discover something, even though that something --- if it's a true scientific discovery --- was surely just sitting there waiting to be discovered. (Maybe this confusion between scientists and inventors will get sorted out one day; I think it's a very deep problem. Perhaps we would be better off if Einstein hadn't been so photogenic.) And that's assuming that what the scientist has discovered is even, as the saying goes, "a thing", a truth; let's face it, in the social sciences there are very few truths, only some trends, and very little from which one can make valid predictions about people with any worthwhile degree of reliability. (An otherwise totally irrelevant aside to illustrate this gap: one of the most insanely cool things I know of from "hard" science is that GPS uses both special and general relativity to make corrections to its timing, and those corrections go in opposite directions; a back-of-the-envelope check of this appears after this paragraph.) We elevate the people who make these "amazing discoveries" to superstar status. They get to fly business class to conferences and charge substantial fees to deliver a keynote speech in which they present their probably unreplicable findings. They go on national TV and tell us how their massive effect sizes mean that we can change the world for $29.99.
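(Since that GPS aside is a rare piece of checkable arithmetic in this post, here is a minimal sketch of the two corrections, using standard weak-field textbook approximations. The constants are the usual published values for Earth and the GPS orbit; none of the figures come from the original post.)

```python
# Back-of-the-envelope check: relativistic clock corrections for GPS.
# Assumptions: circular orbit and weak-field approximations; constants are
# standard published values, not anything taken from the post above.

C = 299_792_458.0   # speed of light, m/s
GM = 3.986004e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m
R_ORBIT = 2.6561e7  # GPS orbital radius, m
DAY = 86_400.0      # seconds per day

# Special relativity: the satellite moves fast, so its clock runs SLOW.
v = (GM / R_ORBIT) ** 0.5                # orbital speed, roughly 3.9 km/s
sr_per_day = -(v**2 / (2 * C**2)) * DAY  # about -7 microseconds/day

# General relativity: the satellite sits higher in Earth's gravity well,
# so its clock runs FAST relative to a clock on the ground.
gr_per_day = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY  # about +46 us/day

print(f"special relativity: {sr_per_day * 1e6:+.1f} microseconds/day")
print(f"general relativity: {gr_per_day * 1e6:+.1f} microseconds/day")
print(f"net correction:     {(sr_per_day + gr_per_day) * 1e6:+.1f} microseconds/day")
```

Running this gives roughly -7 microseconds/day from special relativity and +46 microseconds/day from general relativity: the two corrections really do go in opposite directions, leaving a net drift of about +38 microseconds/day that GPS has to compensate for.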
Thus, we have a system that is almost perfectly set up to reward people who tell the world what it wants to hear. Given those circumstances, perhaps the surprising thing is that we don't find out about more fraud. We can't tell with any objectivity how much cheating goes on, but judging by what people are prepared to report about their own and (especially) their colleagues' behaviour, what gets discovered is probably only the tip of a very large and dense iceberg. It turns out that there are an awful lot of very hungry dogs eating a lot of homework.
I'm not going to claim that I have a solution, because I haven't done any research on this (another amusing point about reactions to the LaCour case is how little they have been based on data and how much they have depended on visceral reactions; much of this post also falls into that category, of course). But I have two ideas. First, we should work towards 100% publication of datasets, along with the article, first time, every time. No excuses, and no need to ask the original authors for permission, either to look at the data or to do anything else with them; as the originators of the data, you'll get an acknowledgement in my subsequent article, and that's all. Second, reviewers and editors should exercise extreme caution when presented with large effect sizes for social or personal phenomena that have not already been predicted by Shakespeare or Plato. As far as most social science research is concerned, those guys already have the important things pretty well covered.
(Updated 2015-05-22 to incorporate the details of LaCour's CV updates.)