A Different Set Of Problems In An Article From The Cornell Food And Brand Lab

[[ Update 2018-09-23 20:30 UTC: Fixed some links that were broken because some documents had gone missing from sites controlled by the Cornell Food and Brand Lab. ]]
[[ Update 2017-10-19 17:00 UTC: This post now features in a BuzzFeed article here. ]]

(This is the first time I've blogged on the subject of the ongoing kerfuffle around the Cornell Food and Brand Lab, which was started by reactions to scrutiny in the media. This story is of another article from the same lab, but with some rather different problems from the others that we've documented so far.)

The article

The American Medical Association has a journal dedicated to all aspects of pediatric medicine, JAMA Pediatrics (formerly known as Archives of Pediatrics & Adolescent Medicine). If you are involved in anything to do with children's health, this is the journal that you really, really want to publish in. It has an impact factor of 9.5 (which is kind of impressive even if you [...]).

You can't get out of jail with the one-tailed card when the p value from your chi-square statistic turns out to be a dud. And you certainly can't just cut your p value in half while shouting "One-tailed!" (I don't think it's very likely that the .06 emerged in that form from SPSS). In short, that p value should have been reported as .12, not .06; and, whether you play the p value game or not, that is not very convincing evidence of anything.
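To see why the halving move doesn't work, here is a minimal sketch in Python with made-up counts (the article does not report the underlying frequencies, so every number below is purely illustrative). The point is simply that a chi-square test on a contingency table is non-directional, so the p value it returns is already the one to report.

# A minimal sketch with hypothetical counts; these are NOT the study's data.
# Rows: condition (plain apple day vs. Elmo apple day);
# columns: took an apple vs. did not take an apple.
from scipy.stats import chi2_contingency

table = [[23, 81],   # plain apples: 23 took an apple, 81 did not
         [38, 66]]   # Elmo apples:  38 took an apple, 66 did not

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# The chi-square statistic squares the deviations from the expected counts,
# so it has no "direction"; there is no legitimate way to turn its p value
# into a one-tailed p by dividing it by two.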

[[ Update 2017-02-17 14:15 UTC
Carol Nickerson suggests that the design and analysis were not correct for this study. For example, at the pretest (day 1), the students seem to have had 4 options: plain apple, plain cookie, both, neither. For the second intervention, they also seem to have had 4 options: Elmo apple, plain cookie, both, neither. This 4 x 4 design was apparently reduced to a 2 x 2 design with pretest options: no apple, plain apple, and intervention options: no apple, Elmo apple. Each of the 4 cells of this cross-tabulation should have contained the frequency of the paired observations: (1) no apple, no apple; (2) no apple, Elmo apple; (3) plain apple, no apple; (4) plain apple, Elmo apple, respectively. This cross-tabulation should have been analyzed with McNemar's test, not the chi-square test.
]]
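To make that suggestion concrete, here is a minimal sketch of the paired analysis in Python, again with made-up counts (the article does not report the paired frequencies, so the table below is purely illustrative).

# McNemar's test on the 2 x 2 cross-tabulation of paired choices.
# Rows: pretest (day 1) choice; columns: Elmo intervention day choice.
# All counts are hypothetical.
from statsmodels.stats.contingency_tables import mcnemar

table = [[60, 21],   # no apple on day 1:    60 no apple / 21 Elmo apple on day 2
         [ 8, 15]]   # plain apple on day 1:  8 no apple / 15 Elmo apple on day 2

result = mcnemar(table, exact=True)   # exact binomial test on the discordant cells
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")

# Only the discordant cells (21 and 8 here) carry information about a change
# in behaviour; an ordinary chi-square test ignores the pairing and treats
# the two days as if they were independent samples of children.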

The figure... no, I don't know either

Possibly the strangest thing about this article is the figure that is supposed to illustrate the results.  Here is how the figure looked in the draft article:


That looks pretty reasonable, right?  OK, so it leaves out the days when there were no stickers, but you can see more or less what happened on the days with interventions.  A fairly constant proportion of about 90-92% of the children took a cookie, and between 23% and 37% of them took an apple.

Now let's see how the results were represented graphically when the article appeared in JAMA Pediatrics:


Whoa.  What's going on here?  Is this even the same study?

Let's start with the leftmost pair of columns ("Unbranded").  The note with the asterisk (*) tells us that these columns represent the baseline percentage of children taking an apple (about 22%, I reckon) and the baseline percentage of children taking a cookie (about 92%).  This presumably shows the results from Day 1 that were missing from the figure in the draft.  (Apples are now the darker column and cookies are the lighter column.)

The dagger (†) on the titles of the other three pairs of columns sends us to a different note, which describes the bars as representing the "percentage of change in selection from baseline".  This needs to be unpacked carefully.  What it seems to mean is that if 22% of children took an apple on Day 1, and 37% of children took an apple on Day 2 (let's assume that the columns are in time order, so "Branded Apples" is Day 2), then we should calculate (37% - 22%) / 22%, which gives around 0.68 or 68% (or maybe a bit more; there is quite a big rounding error factor here, since we are obliged to get all of these numbers from visual inspection of the figures, in the absence of any actual numerical results).  So the meaning of the height of the bar for apples in "Branded Apples" is that the percentage of children taking an apple increased by 68%.

But it makes absolutely no sense to plot this chart in this way.  The label on the Y axis (%) means something completely different between the pair of columns labelled with an asterisk and the pairs labelled with a dagger; they both happen to be numbers that are expressed as percentages, but those numbers mean completely different things.  And the next two pairs of columns have the same problem.  To see how meaningless this is, consider what would happen if just 0.5% of children took an apple on Day 1 and 2% took an apple on Day 2.  The bar for apples in the second pair ("Branded Apples") would be at 300%, suggesting to the casual reader that the intervention had had a huge effect, even though almost no children actually took an apple on either day.  As published, this figure is almost meaningless, and arguably deceptive, but it looks like something spectacular has occurred.  Just the kind of thing that might impress a policymaker in a hurry, for example.
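If you want to reproduce the arithmetic, here is a minimal sketch (the input percentages are read off the figures by eye, so they are only approximate, and the 0.5%/2% case is hypothetical).

# "Percentage of change in selection from baseline" is a relative change,
# not a change in percentage points.
def relative_change(baseline, followup):
    return (followup - baseline) / baseline

# Approximately the values read off the published figure:
print(relative_change(0.22, 0.37))    # ~0.68, i.e. the ~68% bar for apples

# A hypothetical case showing how a tiny baseline inflates the same metric:
print(relative_change(0.005, 0.02))   # 3.0, i.e. a 300% bar, even though
                                      # only 2% of children took an apple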

Let's ignore for a moment all of the other obvious problems with this study (including the fact that consumption of cookies didn't decline at all, so that one of the results of the study was a net increase in the calorie consumption of the students who otherwise would not have eaten an apple).  For now, I just want to know how this figure came to be published.  We know that the figure looked much more reasonable in a late version of the draft of the article (the PDF that you can download from the link I gave earlier is dated 2012-05-29, and the article was published online on 2012-08-20, which suggests that the review process wasn't especially long).  I can't help wondering at what point in the submit-revise-accept-typeset process this new figure was added.  I find it very strange that the reviewers at a journal with the reputation, impact factor, and rejection rate of JAMA Pediatrics did not apparently challenge it.

The participants... who exactly?

The article concludes with this sentence: "Just as attractive names have been shown to increase the selection of healthier foods in school lunchrooms, brands and cartoon characters can do the same with preliterate [emphasis added] children" (p. 968).  But we were told in the Methods section that the participants were elementary school students aged 8-11 (let's not worry for now about whether Elmo would be the best choice of cartoon character to appeal to children in this age range).  I have tried, and failed, to imagine how a team of three researchers who are writing up the results of a study in seven elementary schools, during which they identified 208 children aged 8-11 and obtained consent from their parents, could manage to somehow type the word "preliterate" when writing up the results after the study.  The reader could be forgiven for thinking that there might be something about the literacy levels of kids in the third through sixth grades in the state of New York that we should know about.

But it seems that the lead author of the article may have been a little confused about the setting of the study as well.  In an article entitled "Convenient, Attractive, and Normative: The CAN Approach to Making Children Slim by Design" (published in Childhood Obesity in 2013), Dr. Wansink wrote (p. 278): "Even putting an Elmo sticker on apples led 70% more daycare kids [emphasis added] to take and eat an apple instead of a cookie", with a reference to the article I've been discussing here.  Not only have the 8-11 year olds now become "daycare kids", but it is also being claimed that the apple was taken instead of a cookie, a claim not supported by the JAMA Pediatrics article; furthermore, the clear implication of "take and eat" is that all of the children ate at least some of their apple, whereas the JAMA Pediatrics article claimed only that "The majority of children [emphasis added] who selected a food ate at least a portion of the food" (p. 968).

Similar claims were repeated in Dr. Wansink's article, "Change Their Choice! Changing Behavior Using the CAN Approach and Activism Research" (published in Psychology & Marketing in 2015): "Even putting an Elmo sticker on apples led to 46% more daycare children taking and eating an apple instead of a cookie" (p. 489).  The efficacy of the intervention seems to have declined somewhat over time, though, as the claimed increase in the number of children taking an apple has dropped from 70% to 46%.  (It's not clear from the JAMA Pediatrics article what the correct figure ought to be, since no percentages were reported at all.)

Conclusion

Dr. Wansink wrote a rather brave blog post recently in which he apologized for the "pizza papers" incident and promised to reform the research practices in his lab.  However, it seems that the problems with the research output from the Cornell Food and Brand Lab go much further than just that set of four articles about the pizza buffet.  In the first paragraph of this post I linked to a couple of blog posts in which my colleague, Jordan Anaya, noted similar issues to those in the "pizza papers" in seven other separate articles from the same lab dating back as far as 2005, and here I have presented another article, published in a top-rated medical journal, that seems to have several different problems.  Dr. Wansink would seem to have a long road ahead of him to rebuild the credibility of his lab.

Acknowledgement

Although I had previously glanced briefly at the draft version of the "Elmo" article while looking at the public policy-related output of the Cornell Food and Brand Lab, I want to thank Eric Robinson for bringing the published version to my attention, along with the problems with the figure, the inconsistent p value, and the "preliterate" question.  All errors of analysis in this post are mine alone.
