A little while ago, I wrote a post about bias in science. I started with the story of a recent retraction of a paper that attributed certain negatively-framed characteristics to conservatives when they actually applied to liberals. How did this mistake happen? I said it was likely related to the fact that the wrong data was in line with the researchers' expectations.
Jesse Singal wrote a long piece on the conflict. He starts off dismissive of the political angle, pointing out repeatedly that the wrong results actually ran contrary to most scientific work done in this area. Which is to say that the news is no news at all, scientifically. It was only the wrong news that was news, because it was new, because it was wrong. So why in the world did the researchers so matter-of-factly expect the data to run the other direction (and why did they believe the prevailing science already stated as much)? Well, here we go:
So we have two things of note. The first is that, as I previously supposed, they were satisfied with the answers because the answers were what they were expecting. But secondly, their approach to the entire enterprise seemed to carry really heavy assumptions. They (incorrectly) looked for "negative" attributes, assumed that conservatives would be assigned to them, and proceeded without really understanding what they were doing but with an implicit understanding of which side of the right/wrong line conservatives would fall on.
Ludeke [the young researcher challenging the wrong findings] was right: This is exactly what Hatemi and Verhulst got wrong — highlighted by DeYoung, writing on his grad student’s behalf, in his very first email to Hatemi.
The first of many emails, it would turn out. Hatemi responded in a friendly enough manner the following morning, but sounded surprised by what DeYoung and Ludeke were claiming. “[Y]ou have a [data] set where P tracks with being more liberal? Weird. The scale is pro authoritarian and militarism – that doesn’t make a lot of sense to me.” (This is a clear misreading of the P scale.) A few emails later, after Hatemi noted he was on vacation but assured DeYoung that “the directions of the relationships… [were] right” when he looked at the raw data, DeYoung responded, “Thanks Pete. Didn’t mean to bug you on your vacation. Maybe we can talk about this further when you’re back at work. We’d love to take a look at your data to see if we can understand why your results are opposite to ours.”
Good to know.
Anyway, the whole article is worth reading even apart from this particular aspect. It deals with hierarchies, among other things: essentially, how a little fish caught a big fish in an error, had a lot of difficulty correcting the record, and, by the time all was said and done, was left regretting that he had ever tried.
That is not a good recipe.