The Cornell Food And Brand Lab Story Goes Full Circle, Possibly Scooping Up Much Of Social Science Research On The Way, And Keeps Turning
Stephanie Lee of BuzzFeed has just published an article describing the p-hacking, HARKing, and other "questionable research practices" (QRPs) that seem to have been standard in this lab for many years, as revealed in a bunch of e-mails that she obtained via Freedom of Information (FoI) requests. In a way, this brings the story back to the beginning.
It was a bit more than a year ago when Dr. Brian Wansink wrote a blog post (since deleted, hence the archived copy) that attracted some negative attention, partly because of what some people saw as poor treatment of graduate students, but more (in terms of the weight of comments, anyway) because it described what appeared to be some fairly terrible ways of doing research (sample: 'Every day she came back with puzzling new results, and every day we would scratch our heads, ask "Why," and come up with another way to reanalyze the data with yet another set of plausible hypotheses'). It seemed pretty clear that researcher degrees of freedom were a big part of the business model of this lab. Dr. Wansink claimed not to have heard of p-hacking before the comments started appearing on his blog post; I have no trouble believing this, because news travels slowly outside the bubble of Open Science Twitter. (Some advocates of better scientific practices in psychology have recently claimed that major improvements are now underway. All I can say is, they can't be reviewing the same manuscripts that I'm reviewing.)
However, things rapidly became a lot stranger. When Tim, Jordan, and I re-analyzed some of the articles that were mentioned in the blog post, we discovered that many of the reported numbers were simply impossible, which is not a result you'd expect from the kind of "ordinary" QRPs that are common in psychology. If you decide to exclude some outliers, or create subgroups based on what you find in your data, your ANOVA still ought to give you a valid test statistic and your means ought to be compatible with the sample sizes.
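To see why means need to be "compatible with the sample sizes": when responses are integers (as on a Likert scale), the total of the raw scores must be a whole number, so for a given n only certain means are arithmetically possible. Here is a minimal sketch of that kind of granularity check, in the spirit of the GRIM test; the function name and the example numbers are illustrative, not taken from any particular paper.

```python
def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style check: can a mean reported to `decimals` places arise
    from n integer-valued responses?"""
    # The implied total of the raw scores must be a whole number, so
    # reconstruct it, then see whether the corresponding mean rounds
    # back to the reported value. (A fuller implementation would also
    # try adjacent totals to allow for different rounding conventions.)
    implied_total = round(reported_mean * n)
    return round(implied_total / n, decimals) == round(reported_mean, decimals)

# Hypothetical example: with n = 21, no integer total divided by 21
# rounds to 3.47, so a reported mean of 3.47 is impossible.
print(mean_is_possible(3.47, 21))  # False
print(mean_is_possible(3.48, 21))  # True (73 / 21 = 3.476...)
```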
Then we found more problems in many other articles from the lab: recycled text and tables of results, results that correlated .97 across studies with different populations, large numbers of female WW2 combat veterans, references that went round in circles, and unlikely patterns of responses. It seemed that nobody in the lab could even remember how old their participants were. Clearly, this lab's output, going back 20 or more years to a time before Dr. Wansink joined Cornell, was a huge mess.
Amidst all of this, it was still possible to believe that the underlying cause was carelessness rather than anything deliberate: the lab so busy with its other activities (media appearances, corporate consulting*), the management of the place so overwhelmed on a day-to-day basis, that nobody quite knew what was being submitted to journals, which table to include in which manuscript, or which folder on the shared drive contained the datasets. You could almost feel sorry for them.
Stephanie's latest article changes that, at least for me. The e-mail exchanges that she cites and discusses seem to show deliberate and considered discussion about what to include and what to leave out, why it's important to "tweek" [sic] results to get a p value down to .05, which sets of variables to combine in search of moderators, and which types of message will appeal to the editors (and readers) of various journals. Far from being chaotic, it all seems to be rather well planned to me; in fact, it gives just the impression Dr. Wansink presumably wanted to give in his blog post that led us down this rabbit hole in the first place. When Brian Nosek, one of the most diplomatic people in science, is quoted in the article saying that this "is not science, it is storytelling", it becomes very hard to keep giving the lab the benefit of the doubt.
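For anyone who hasn't seen the arithmetic behind this: if you test many candidate outcomes or moderators on pure noise and keep whichever comparison "works", the chance of finding at least one p < .05 is far above the nominal 5%. The simulation below is a toy illustration of that multiplicity effect; the numbers of studies, participants, and outcomes are arbitrary choices of mine, not figures from the lab's e-mails.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

N_STUDIES = 2000   # simulated "studies" (arbitrary)
N_PER_GROUP = 30   # participants per condition (arbitrary)
N_OUTCOMES = 10    # outcome variables tried per study (arbitrary)

hits = 0
for _ in range(N_STUDIES):
    # Two conditions with no true difference: everything is pure noise.
    treatment = rng.normal(size=(N_OUTCOMES, N_PER_GROUP))
    control = rng.normal(size=(N_OUTCOMES, N_PER_GROUP))
    # The questionable practice: test every outcome, keep the best p.
    best_p = min(ttest_ind(t, c).pvalue for t, c in zip(treatment, control))
    if best_p < 0.05:
        hits += 1

# With one pre-specified outcome this would be about 5%; with ten
# shots at significance it lands near 1 - 0.95**10, i.e. roughly 40%.
print(f"Studies with at least one 'significant' result: {hits / N_STUDIES:.0%}")
```

Run enough exploratory comparisons and a publishable p value is close to guaranteed, no real effect required.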
*) The web page for McDonald's Global Advisory Council gives a 404 error as I'm writing this. I have no idea whether that has anything to do with current developments, or if it's just a coincidence.