If there’s one thing we can take for granted, it’s that authoritarians veer Republican. Everybody knows this. Science says so. You don’t doubt science, do you? If you do, you’re probably a Republican.
So when a study in the American Journal of Political Science found more data points to confirm what we already know, it got a lot of coverage. As researcher Steven Ludeke says:
The erroneous results represented some of the larger correlations between personality and politics ever reported; they were reported and interpreted, repeatedly, in the wrong direction; and then cited at rates that are (for this field) extremely high. And the relationship between personality and politics is, as we note in the paper, quite a “hot” topic, with a large number of new papers appearing every year.
The problem, as you may have heard, is that it’s exactly backwards. It is not conservatives who have the propensity for psychoticism, with its links to authoritarianism, but liberals. It is not liberals who are so wonderfully high in social desirability, but conservatives. The researchers read the data the wrong way. How could this have happened? Fortunately, they explain it for us:
In line with our expectations, P [for “Psychoticism”] (positively related to tough-mindedness and authoritarianism) is associated with social conservatism and conservative military attitudes. Intriguingly, the strength of the relationship between P and political ideology differs across sexes. P’s link with social conservatism is stronger for females while its link with military attitudes is stronger for males. We also find individuals higher in Neuroticism are more likely to be economically liberal. Furthermore, Neuroticism is completely unrelated to social ideology, which has been the focus of many in the field. Finally, those higher in Social Desirability are also more likely to express socially liberal attitudes.
One almost wonders what might have happened if they’d read the results correctly and found themselves facing results that did not conform to their expectations. Let us look, for a moment, at Richard Feynman talking about the hard sciences:
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
A good thing, that! Then there’s no problem, right? Let us consider, for a moment, the dangers of cell phones on the roads. Some scientists looked at the question recently and were quite surprised to find that cell phone bans resulted in no reduction in accidents:
“We were expecting to find maybe a five to ten percent reduction in accidents. We had read the studies that talking on your phone is as dangerous as drinking and driving.”

But even after controlling for gas prices, miles traveled, rainfall, and holidays—all factors that impact traffic patterns, road volume, and crashes—they found no impact on the rate of accidents:

“Only after spending a ton of time looking at the data, slicing it in different ways, we eventually came to the conclusion that there was no evidence of a decline in accidents. It took a while for us to convince ourselves that there wasn’t something there.”

Interesting. So they got a result that didn’t seem right to them, so they kept investigating. They tried to control for every single factor in search of a particular answer, and they didn’t find it. This is science working. Except, consider what would have happened if some factor had given them the result they were expecting. If some fluke or uncontrolled-for factor had produced the reduction they were looking for, the claim that cell phone bans result in fewer accidents wouldn’t be wrong or unproven; it would be scientific fact.
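To make “controlling for” those factors concrete, here is a minimal hypothetical sketch of that style of analysis (not the study’s actual model; all variable names and data here are invented): regress the accident rate on a ban indicator plus the confounders mentioned, and check the ban coefficient.

```python
# Hypothetical sketch of controlling for confounders with a regression.
# Invented data; not the researchers' actual model or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gas_price": rng.uniform(2.0, 4.0, n),
    "miles":     rng.uniform(50, 150, n),   # vehicle-miles traveled
    "rainfall":  rng.uniform(0, 10, n),
    "holiday":   rng.integers(0, 2, n),
    "ban":       rng.integers(0, 2, n),     # cell phone ban in effect?
})
# By construction, accidents depend on the covariates but not on the ban.
df["accidents"] = 5 + 0.05 * df["miles"] + 0.3 * df["rainfall"] + rng.normal(0, 1, n)

fit = smf.ols("accidents ~ ban + gas_price + miles + rainfall + holiday", df).fit()
print(fit.params["ban"], fit.pvalues["ban"])  # ban effect near zero, not significant
```

If the ban coefficient stays near zero once the controls go in, that is the “no evidence of a decline” result the researchers describe.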
Now, imagine that we’re not talking about the unforgiving hard sciences. Imagine we’re not talking about something as technocratic as cell phone bans, but about matters of sociopolitical theory, the difference in most of our minds between right and wrong. While we’re at it, imagine that you’re investigating the foot soldiers of right and the foot soldiers of wrong. I don’t know about you, but I have to wonder: if you are getting results that are “in line with your expectations”, your normative expectations, how likely are you to double-check the findings? How likely are you to wonder if perhaps you are using the wrong test to determine what is and is not authoritarian? In this case, maybe they wouldn’t have, since they say that which attributes fall on which side of the gradient wasn’t the point of the study. Or maybe they would have looked at the results and said, “Huh. Maybe we’re the authoritarians.”
I mean, it’s not actually impossible. Science! Heck, it’s possible that the researchers themselves were conservative and were not the least bit surprised to find out about their own authoritarian streak. Maybe they wore it as a badge of honor. Maybe, maybe, maybe. Or maybe they are, in fact, human. Maybe they came into it with their own suspicions about how people divide themselves into liberal and conservative and maybe, just maybe, that influenced their thinking. And maybe sometimes “reality has a liberal bias” because those whose job it is to (scientifically!) determine reality approach it from the point of view of human beings with their own expectations and impressions.
You can move beyond the scientists themselves to another step. If you’re an academic journal, are you more likely to scrutinize the methodology of something that brings an unexpected or unpleasant result? Or are you going to approach both with absolute neutrality because science? Well?
24 Responses to It Ain’t Heavy. It’s Science.
Back in my day, we had these repeatability studies, where researchers from other places would try to replicate studies to make sure the results weren’t a fluke, and to filter bias.
But no one gets any money for those anymore, so they are pretty rare.
You mad, bro?
Not really.
😉
So is this one study that goes against the grain, or was there in fact never any basis? I’m seeing a lot of “the balance of the research suggests that authoritarianism correlates with right-wing views.” Was that always just not the case – a lot of hype?
Or are you just pointing out that when faced with results that go against the balance of previous findings, scientists tend to be extra-skeptical of their results before publishing them? Far be it from me to contradict Dr. Feynman, but that doesn’t seem that wrong to me. It seems like almost whenever I hear a scientist being interviewed about a surprising finding, they quite proudly emphasize how much they scrutinized their results before publishing, because they wouldn’t want to be seen as rushing to gain notice from a surprising result that turned out to be researcher error.
It actually sounds to me, from the way it’s described, like the problem may have been that they were too credulous about the result because it was unsurprising (whether due to bias or the balance of the existing research). Which is the flip side. I can’t quite tell, but it sounds like maybe the problem was a coding error or something that produced the result they were expecting (on a point that wasn’t the main point of the study), so they accepted it, even though they perhaps should have been more skeptical because the size of the effect was quite large (which sort of suggests design issues beyond flipping the correlation at some point). I guess it kind of has to be one or the other: either they got the real result and said, “No, we must have this backward, let’s just say that’s the case and publish that,” or a negative sign was inadvertently missed in the equations or whatnot. I’m not sure how else the correlation just gets flipped like that.
Haidt has been beating this drum for ages: A lot of social psych is heavily tainted with ideological bias.
“The balance of research” conducted by whom? With what assumptions? Verified through procedures by whom, with what assumptions, and with what checks?
Much of what I’ve seen relies on the Authoritarian Index, which is at once ubiquitous and questionable for the purposes for which it is used. The index was not created toward nefarious ends, but I suspect the attraction to it is not unrelated to what it says about each side. (Which is fine, and interesting, though it would be better if a less loaded term were used.)
I harbor the suspicion that if it yielded different results, the index would be more popular at the National Review than in academia as it pertains to contemporary politics.
As for how the problem occurred, I assume it was a pretty straightforward transposition error. They saw the values and read them in the direction that seemed obvious to them, which turned out to be wrong. I’ve made that error myself. It’s an error rooted in human nature.
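For what it’s worth, here is a minimal sketch (with invented numbers; this is not the study’s data or code) of how that kind of transposition plays out: assume the wrong direction for the ideology scale and the very same data yields the same correlation with the opposite sign.

```python
# Hypothetical illustration of a scale-direction transposition error.
# All numbers are invented.

def pearson(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

p_scores = [2, 4, 5, 7, 8, 9]   # made-up Psychoticism scores
ideology = [1, 2, 3, 5, 6, 7]   # codebook: 1 = most liberal ... 7 = most conservative

print(pearson(p_scores, ideology))   # positive: P tracks conservatism

# A researcher who assumes 1 = most conservative ... 7 = most liberal reads
# the identical data as the opposite relationship:
flipped = [8 - v for v in ideology]
print(pearson(p_scores, flipped))    # same magnitude, opposite sign
```

Nothing in the data changes; only the assumption about which end of the scale is which, and the reported direction flips with it.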
Mistakes happen, and I’m not going to do battle over this particular error. My point is merely that I suspect it was indicative, and that it was caught mostly because it is the sort of (likely unintentional) thumbing of the scale that’s easiest to catch.
Those are questions we always ask about all science (and, as your Millikan example shows, Science as a broad profession is hardly infallible in asking them the right way), though, are they not?
And it’s a process. Any given body of research can only be so far along in the process of being checked & re-checked; this body is not that old (it’s not brand new either). I’m not quite sure what you’re saying is the problem or what should be done with it going forward. Just toss the whole subject out as invalidly conceived? Keep doing studies & try to improve their design? Or what?
I’m unclear what the National Review not liking certain of these findings should tell us about the validity of the research itself or the instruments used in it. There are other findings that The Nation wouldn’t like about liberals or leftists. They would not like such findings whether they were valid or not. To review the quality of the research, you actually have to do that.
Whether results are investigated or not depends on the degree to which they are expected. Unexpected results are scrutinized, or ignored. Researchers do as they did in the cell phone study: look very closely for what they might be missing, for the thing that is getting them the result they are not expecting.
I agree that it’s a process. But it’s a process made of people, ones that are going to look at results differently depending on how happy they are with the results. And how happy they are with the results depends on all sorts of value judgments.
What happens when almost all of the people in the room are expecting the same results? When they are, consciously or unconsciously, wanting to see the same results?
There is nothing unique about this with regard to politics. The examples provided include hard science and technocratic science as well as politics. I could also talk your ear off about e-cigarette research.
It’s certainly the case that we have to go with the science that we have, albeit hopefully with some skepticism, mindful that the current result may not be the final result. Nor is it Word of God objective, as it is subject to the frailties of humanity.
But I don’t think that the thing cited above was some wild and crazy thing that just so happened to happen. It was a result of something pretty ingrained in the process. Something it can take a very long time to discover. Which is something that we need to keep in mind when we hear reporting of science’s latest findings.
Certainly it’s a process of humans, with the associated foibles and biases. If your point is just to remind us of that, and of the evergreen implication that the argument “Science Says It, So It Must Be True” is the naive, nearly incoherent babble it always was, then we begin and end on a point of what I believe is widespread agreement among thinking people.
But it’s selective to be inclined to *completely* toss out one body of research while applying only the softest caveats to one’s embrace of another – without good understanding of why you’re doing that. You say there is nothing unique about research on political views – yet that’s what you’re singling out. If it’s the general point about the fallibility of (all) science you’re making, I wonder what the interest in singling out this research is. If it’s a specific point about this body of research, I’m not clear what it is.
I single it out because (a) it is an area of interest, and (b) for a variety of reasons I think it is especially susceptible to the wider phenomenon compared to a lot of other things. So I think it warrants extra caution.
I could have written it about e-cigarettes, but three people didn’t send me an article about e-cigarette research. They sent me this.
So you don’t think it’s unique in being *at all* susceptible, but you think it is susceptible to a unique degree – or at least “especially” susceptible. Are there other fields/topics that are equally especially susceptible – or is it …uniquely so?
And whether it’s especially susceptible along with others or uniquely so, I’m still sort of missing the upshot of that. You leave the upshot at “when we hear reporting of science’s latest findings,” we consider them “hopefully with some skepticism, mindful that the current result may not be the final result. Nor is it Word of God objective, as it is subject to the frailties of humanity.” But aren’t we doing that in general? Are we doing it more here? How would we describe how much more? Minor caveat (“This could always be contradicted by different results in the future, but I don’t really expect that”)? Wholesale discounting (“I basically think this body of research is worthless”)?
I mean, is the point here more that social psych or research into political views is different, or that Science Is Human And Therefore Fallible – this is just one case of that? And if it’s the former, how should we regard this type of research compared to just the normal skepticism we apply to new (or even not that new) scientific findings?
Skepticism seems to rise and fall with the happiness of the result. To the extent that there is a takeaway, it’s to keep it in mind when you get a comforting result*.
By way of example, when Vox (and others) ran a piece on authoritarianism and Trump (linear!) I kind of nodded my head in agreement and moved on. Sounded right.
Then, of course, someone pointed out that they basically used the same methodology to differentiate between Democrats and Republicans. And I find that methodology somewhat suspect (the authoritarian index in particular). I hadn’t made the connection because Vox didn’t really go into the methodology. I just kind of assumed that it was probably sound(ish) because, well, Trumpers.
Bad Trumwill.
* – I mean, also keep it in mind when it’s telling you something you don’t want to hear. Most people don’t need to be told that. They instinctively start thinking of possible flaws.
It isn’t an issue of a single result, and Will does a good job of framing the meta issue. And, as Murali notes, Haidt has been writing on this for some time. Specifically, the “this” is that there’s possibly a systematic flaw in the social sciences (at least) that crosses two lines:
1. Publication Bias against Null results
2. Haidt’s thesis that the absence of “conservatives” in the field is hurting the science qua science.
Haidt’s thesis implies that #1 is partly a result of #2 in that absent people for whom the obvious conclusion isn’t obvious, there’s simply insufficient scrutiny within the disciplines… and this is bad for science.
It isn’t simply that science is done by fallible people, it’s that there’s possibly a bigger issue with Science! that’s not being properly recognized among scientists. We can call it structural scientism.
…So given that we’re not in a position to reform an entire branch of the sciences, what do we do with this information? Will, as far as I can tell, is sticking with the view that we should really just be as skeptical of social science findings about liberals and conservatives (whichever we might be failing to be skeptical enough about) as we always ought to have been about any and all new scientific findings, just being more attentive to the greater likelihood that we’re not being skeptical enough in this area. Do you go beyond that? Because the baseline for skepticism of new scientific findings is not to just regard the entire endeavor as structurally biased, and each new finding that isn’t backed up by verification as rock-solid as that of the gravitational constant (an impossibility for a new finding) as presumptively a worthless result of biased practitioners.
So where does this leave us? In practice, how do you guys really regard these types of findings about the psychological and reasoning tendencies of people with different political viewpoints? Is it all effectively just a bunch of liberal propaganda, even if the individuals involved think they are genuinely applying their best substantive judgement (not trying to create liberal propaganda)?
Does it bother you/do you find it at all convenient that this line of critique (basically Haidt’s) comes pretty darn close to giving conservatives carte blanche to just ignore whatever comes out of academic research that they don’t like the sound of? After all, the issue we really have with all of this is the effect these findings have on the political battle and relations between actual liberals and conservatives in the world, isn’t it? Do we really claim to be concerned with the state of the science for its own sake as our primary motivation here? Otherwise we would be much more interested in bias and distortion of all kinds across all (or a much broader swathe) of the sciences. Even with Will’s interest in e-cigarette research, he wouldn’t (I don’t think) claim that his interest there is just an entrée into the secular question of research and publication bias (and popular interpretation). It serves that purpose (as does this discussion) well, but in reality his primary interest is a non-analytic interest in the real-world effects of the course and reception of that research.
So, too, here, I think. I do find myself wondering if the focus on this type of bias in academic research reflects our own concerns as much as an objective take on all the kind of bias that besets research of all kinds.
Short answer? Yes, Conservatives get to treat such social science as suspect.
Haidt is not the only, or even the first, to bang this particular drum. Social scientists have largely shrugged such criticism off by claiming that the process will filter bias.
Which, if there was a robust effort to reproduce studies, would be OK. But there isn’t. Few funding sources issue grants for verification studies.
To expand a bit, social science has always had a problem being regarded as an actual science precisely because its variables are squishy and its data is slippery. When I measure the aerodynamic pressure on a wing, that measurement is the same no matter my political leaning. Same goes for my interpretation of the collated data. But social science can be affected by researcher bias at every step of the process, & it has to be controlled for very carefully.
If someone shows up with evidence that bias is affecting the work, and the collective response is, “eh”, then yeah, you get to discount their credibility.
I wouldn’t say “carte blanche.” More a sense of justified skepticism. Not a blanket rejection, but a need for explanation and an opposition to matter-of-fact “science says” pronouncements.
The Vox Trump/authoritarian connection has been treated, if not as holy writ, with far more deference than I believe the circumstances warrant. The “five parenting questions” aren’t even shared, except in conservative circles. Instead it’s about the meaning of the results.
Even the title suggests that “authoritarianism is a bigger factor than age, sex, and education.” What’s the difference between authoritarianism and the other metrics? Three of the four are objective. The fourth isn’t. But for the sake of the reception, we are supposed to act like it is, because Science.
I don’t doubt that McWilliams is genuine. I don’t think he’s doing propaganda. I think he had a theory (one I am sympathetic to) and ran with it. That’s how a lot of research works. And it becomes a problem when the sympathies and instincts of the researchers run so overwhelmingly in a particular direction.
I don’t claim to have a “view from nowhere” on this. I have my own biases. But I don’t think people look to me (or anybody here) for the scientific objectivity with which we’re supposed to view scientists, social or otherwise.
(I may write on this at a later date, but the interesting thing about e-cigarette research is not that it’s predominantly skewed in one direction or the other. Rather, it’s that the research that comes out in each place tends to support the view of the health advocates there (FDA/CDC vs PHE/RSM). E-cigarettes aren’t safer in the UK than in the US, so much as one side is preoccupied with determining dangers, and the other side has taken a different and more optimistic view. The mentality seems to be determining the science, rather than the other way around.)
Hospitals have this thing called “never events”.
https://en.wikipedia.org/wiki/Never_events
Stuff like “operating on the wrong patient” or “operating on the wrong part on the right patient” or “leaving a clamp, two sponges, and an iPod in the patient”.
You’d think that getting this stuff exactly reversed is the academic equivalent of a never event.
Wanna hear my conspiracy theory? They knew that they’d never get the real study published.
So they got the “never event” published.
Then they issued a correction.
This is an interesting line of inquiry but it almost all suffers from critical design flaws stemming from flawed assumptions.
The first major flaw that jumps out at me is the reliance on self-identification as liberal or conservative. From other studies we know this to be flawed, with a significant degree of mismatch between stated orientation and inferred orientation based on specific policy positions. And that’s not random; far more people misidentify as conservative than the opposite.
But the bigger design flaw is adherence to the uni-dimensional, left-right paradigm. There was another study which looked at a body of responses to the RWA (right-wing authoritarianism) survey (a standard instrument for this kind of thing) and broke up the questions into social and economic categories. They then looked for clusters of response patterns and identified six distinct orientations with roughly equal occurrence, similar to plotting the results on a Nolan Chart. The identified orientations were: liberal on both, conservative on both, socially liberal and economically conservative (libertarian), socially conservative and economically liberal (communitarian, e.g., Pope Francis), socially conservative and economically moderate, and socially liberal and economically moderate.
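For illustration only, here is a toy sketch of that kind of cluster analysis (not the cited study’s actual method; the synthetic scores and the choice of six clusters are assumptions here): reduce each respondent to a social score and an economic score, then look for clusters, Nolan Chart style.

```python
# Toy sketch: cluster respondents by separate social and economic scores.
# Synthetic data; real work would score actual RWA-style survey responses.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical: one row per respondent, columns are (social, economic),
# where higher means more conservative on that dimension.
scores = rng.uniform(-1, 1, size=(500, 2))

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
for k in range(6):
    social, economic = scores[labels == k].mean(axis=0)
    print(f"cluster {k}: social={social:+.2f}, economic={economic:+.2f}")
```

A centroid with both coordinates positive would read as conservative on both dimensions, one with a negative social score and positive economic score as libertarian, and so on.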
That multi-dimensional approach was a huge improvement as far as fitting the data, but even here I see problems. For instance, is legalized prostitution a social or an economic issue? It’s not clear to me that “social” and “economic” are really great dimensions of analysis, nor that “conservative” and “liberal” are useful directions along those dimensions, given that, in practice, they reduce to “policy positions typically held by people who self-identify as liberal/conservative”.
Related to this is how much people’s responses can be shaped by how the subject is approached. One of my favorite “political” examples is Social Security. Ask a series of questions about the federal debt, worker-to-retiree ratios, etc., etc., and thirty-somethings describe their positions as “I’ll never see a dime from the SS system.” Ask questions about how their parents have retained their independence in old age, about savings and retirement planning, and the same people respond strongly with “Well, first I’ll have my Social Security check….”
This should have been a reply to Road Scholar’s comment.
Have you read Passing on the Right yet? I just did a few weeks ago and it makes some very similar points in support of your view in the OP. (It makes other points, too.)