If there’s one thing we can take for granted, it’s that authoritarians veer Republican. Everybody knows this. Science says so. You don’t have science, do you? If you do, you’re probably a Republican.
So when a study in the American Journal of Political Science found more data points to confirm what we already know, it got a lot of coverage. As researcher Steven Ludeke says:
The erroneous results represented some of the larger correlations between personality and politics ever reported; they were reported and interpreted, repeatedly, in the wrong direction; and then cited at rates that are (for this field) extremely high. And the relationship between personality and politics is, as we note in the paper, quite a “hot” topic, with a large number of new papers appearing every year.
The problem, as you may have heard, is that it's exactly backwards. It is not conservatives who have the propensity for psychoticism, the trait associated with authoritarianism, but liberals. It is not liberals who are so wonderfully into social desirability, but conservatives. They read the data the wrong way. How could this have happened? Fortunately, they explain it for us:
In line with our expectations, P [for “Psychoticism”] (positively related to tough-mindedness and authoritarianism) is associated with social conservatism and conservative military attitudes. Intriguingly, the strength of the relationship between P and political ideology differs across sexes. P’s link with social conservatism is stronger for females while its link with military attitudes is stronger for males. We also find individuals higher in Neuroticism are more likely to be economically liberal. Furthermore, Neuroticism is completely unrelated to social ideology, which has been the focus of many in the field. Finally, those higher in Social Desirability are also more likely to express socially liberal attitudes.
One almost wonders what might have happened if they’d read the results correctly: results that did not conform to their expectations. Let us look, for a moment, at Richard Feynman talking about the hard sciences:
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
A good thing, that! Then there’s no problem, right? Let us consider, for a moment, the dangers of cell phones on the roads. Some scientists looked at this recently and were quite surprised to find that cell phone bans resulted in no reduction in accidents:
“We were expecting to find maybe a five to ten percent reduction in accidents. We had read the studies that talking on your phone is as dangerous as drinking and driving.” But even after controlling for gas prices, miles traveled, rainfall, and holidays—all factors that impact traffic patterns, road volume, and crashes—they found no impact on the rate of accidents.
“Only after spending a ton of time looking at the data, slicing it in different ways, we eventually came to the conclusion that there was no evidence of a decline in accidents. It took a while for us to convince ourselves that there wasn’t something there.”
Interesting. So they got a result that didn’t seem right to them, and they kept investigating. They tried to control for every single factor in search of a particular answer, and they didn’t find it. This is science working. Except, of course, consider what would have happened if some factor had given them the result they were expecting. If some fluke or uncontrolled-for factor had given them the reduction they were looking for, the claim that cell phone bans result in fewer accidents wouldn’t be wrong or unproven; it would be scientific fact.
Now, imagine that we’re not talking about the unforgiving hard sciences. Imagine we’re not talking about something as technocratic as cell phone bans, but about matters of sociopolitical theory. The difference, in most of our minds, between right and wrong. While we’re at it, imagine that you’re investigating the foot soldiers of right and the foot soldiers of wrong. I don’t know about you, but I have to wonder: if you are getting results that are “in line with your expectations”, your normative expectations, how likely are you to double-check the findings? How likely are you to wonder whether you are using the wrong test to determine what is and is not authoritarian? In this case, maybe they wouldn’t have, because they say that which traits attach to which side of the gradient wasn’t the point of the study. Maybe they would have just looked at the results and said, “Huh. Maybe we’re the authoritarians.”
I mean, it’s not actually impossible. Science! Heck, it’s possible that the researchers themselves were conservative and were not the least bit surprised to find out about their own authoritarian streak. Maybe they wore it as a badge of honor. Maybe, maybe, maybe. Or maybe they are, in fact, human. Maybe they came into it with their own suspicions about how people divide themselves into liberal and conservative and maybe, just maybe, that influenced their thinking. And maybe sometimes “reality has a liberal bias” because those whose job it is to (scientifically!) determine reality approach it from the point of view of human beings with their own expectations and impressions.
You can move beyond the scientists themselves to another step. If you’re an academic journal, are you more likely to scrutinize the methodology of a paper that brings an unexpected or unpleasant result? Or are you going to approach both with absolute neutrality, because science? Well?