A very dishonest “professor” over at ISU has released a so-called “metastudy” that claims to “conclusively prove” that so-called “violent video games” desensitize people to violence.

Your first clue that he’s dishonest is that he used the words “utmost confidence” and “regardless of research method” in the same sentence. A quick look at his past research reveals him to be an agenda-driven “researcher”, precisely the sort of person you don’t want to let anywhere near “meta-analysis” methodology.

In the medical world, the term “metastudy” is quickly acquiring a bad name, for precisely the reasons that make the ISU study completely dishonest: more and more “metastudies” are turning out to have been paid for by drug and treatment companies for the specific purpose of promoting their products. Meta-analysis is also vulnerable to publication bias – wherein studies that fail to find a drug or treatment useful tend never to be published – a tendency those same companies are happy to encourage.

Put bluntly, the metastudy is one of the most easily abused “study” methods out there – a textbook case of the how-to-lie-with-statistics phenomenon. Though I hate posting links to it, the Wikipedia page on meta-analysis (specifically its sections on weaknesses, dangers, and the “file drawer problem”) is, as of this writing, a reasonable starting point on the problems of “metastudies.”
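
To see how nasty the file-drawer problem gets, consider a minimal simulation – an illustrative sketch in Python, where the study counts, sizes, and thresholds are all invented rather than taken from any real literature. The true effect is zero, yet the “published” subset tells a very different story:

```python
# Minimal sketch of the file-drawer problem. Simulate many small studies
# of an effect whose TRUE size is zero, "publish" only the ones that come
# out positive and significant (p < 0.05), then average each pool.
# All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 200, 30

all_effects, published_effects = [], []
for _ in range(n_studies):
    treated = rng.normal(0.0, 1.0, n_per_group)  # true effect: none
    control = rng.normal(0.0, 1.0, n_per_group)
    effect = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(effect)
    if p < 0.05 and effect > 0:  # only "positive findings" see print
        published_effects.append(effect)

print(f"mean effect across all {n_studies} studies: {np.mean(all_effects):+.3f}")
print(f"mean effect across 'published' studies: {np.mean(published_effects):+.3f}")
```

A meta-analyst who sees only the published pile “discovers” a healthy effect that was never there.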

I’ll be referencing the ISU study throughout, but I shall do my best to keep this post generic to most metastudies, covering the risks of this rather lousy “scientific” tool:

#1 – Metastudies are incredibly vulnerable to cherry-picking and confirmation bias. For instance, the ISU study references 130 “previous studies” for its work. Unfortunately, 130 studies is a very small fraction of the work done on “violent media” and “violent play” in the past 50 (or even 20) years, and the ISU study references only the studies that agree with its pre-picked conclusion.

#2 – Metastudies are very vulnerable to the GIGO (garbage in, garbage out) problem. Many of the aforementioned studies the ISU “metastudy” references have previously been found to be unreliable, failing to account for confounding variables. With bad input, the eventual “conclusions” cannot be considered valid.

#3 – Following up on the GIGO phenomenon, “metastudies” quite often prove a correlational, rather than causal, relationship. The problem here is simple: by failing to account for confounding variables, and by hiding the effects of poor study methodology behind an arbitrarily large sample size (ignoring the quality of the data, much like a “market security” that turns out to be backed by garbage loans), they mistake a correlated phenomenon for a causal factor.

To put it in layman’s terms, they reach the conclusion that exposure to beach towels and swimsuits causes skin cancer, because people at high risk for skin cancer happen to have high long-term exposure to beach towels and swimwear.
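
That beach-towel fallacy is easy to reproduce in numbers. Here’s a minimal sketch (Python, with every coefficient invented for illustration): “sun exposure” drives both towel use and cancer risk, towels do nothing on their own, and yet the raw correlation looks impressive until the confounder is controlled for:

```python
# Minimal sketch of a confounded correlation, per the beach-towel example.
# "sun" causes both towel use and cancer risk; towels cause nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sun = rng.normal(0, 1, n)                  # the confounder
towels = 0.8 * sun + rng.normal(0, 1, n)   # driven by sun
risk = 0.8 * sun + rng.normal(0, 1, n)     # also driven by sun

print("raw corr(towels, risk):", round(np.corrcoef(towels, risk)[0, 1], 3))

# Regress the confounder out of both variables and correlate the leftovers.
towels_resid = towels - np.polyval(np.polyfit(sun, towels, 1), sun)
risk_resid = risk - np.polyval(np.polyfit(sun, risk, 1), sun)
print("corr controlling for sun:", round(np.corrcoef(towels_resid, risk_resid)[0, 1], 3))
```

The first number comes out around 0.4; the second, essentially zero. A study that never measured sun exposure would happily report the first.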

When the APA’s 2005 report on video games prompted a number of inaccurate media stories, one of its authors made it very clear that all of the studies analyzed were merely correlational, not causal. A major problem with the dishonest ISU “metastudy” is that much of its “study data” consists of the very same studies referenced by that 2005 APA report – and yet the ISU report tries to claim a “causal” link.

#4 – The final problem is that in a “metastudy”, the individual studies are not controlled with regard to each other – in other words, they are not studying the same phenomenon with the same methods and the same definitions. Got studies in the mix that were not properly blinded? Oops. Got studies that used a wacky definition of the thing you’re hunting for? Your dataset is, again, skewed.
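
For what it’s worth, honest meta-analysts do have a standard smell test for this apples-and-oranges problem: Cochran’s Q and the I² statistic, which measure how much the individual studies disagree beyond what sampling error allows. A minimal sketch with invented effect sizes (not the ISU study’s actual data):

```python
# Minimal heterogeneity check for a meta-analysis: fixed-effect pooling,
# Cochran's Q, and I^2. Effect sizes and standard errors are invented.
import numpy as np

effects = np.array([0.05, 0.10, 0.80, -0.30, 0.55])  # hypothetical studies
ses = np.array([0.08, 0.10, 0.09, 0.12, 0.10])       # hypothetical std. errors

w = 1.0 / ses**2                          # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
Q = np.sum(w * (effects - pooled) ** 2)   # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100         # % of variation beyond chance

print(f"pooled = {pooled:+.3f}, Q = {Q:.1f} on {df} df, I^2 = {I2:.0f}%")
```

An I² up in the 90s, as here, is the statistical fingerprint of pooling studies that were never measuring the same thing – exactly the situation a dishonest metastudy hopes you won’t check.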

“Metastudies” are, in the hands of a dishonest person, a tool for “money laundering” discredited, poorly conceived, or outright dishonestly contrived “study data” – hiding defects and bad methodology behind the phrase “large sample size” in order to lend the result a credibility it ought not have.



About the Author

Guy Webster (web) is an IT specialist at Southern Tech University, where he and Will Truman attended college.

10 Responses to “Metastudy” = Cherry Picking

  1. Abel says:

    Your first clue that he’s dishonest is that he used the words “utmost confidence” and “regardless of research method” in the same sentence.

    He just took a page from the Global Warming scientists. Last time I checked, they seem to be doing okay. 🙂

  2. trumwill says:

    Though I don’t disagree with Web on the whole, that one quote does not necessarily represent dishonesty. If you run tests with three different methods and all three produce the same results, then it can be said that the results were the same “regardless of research method.”

    For instance, if we were interested in demonstrating a link between lung cancer and smoking, there are a number of ways that we could investigate it. It seems likely that no matter how you investigate it (“regardless of research method”), the results are the same. We have every reason to be pretty confident of it.

    Regarding this study in particular, it strikes me as too unwieldy to be of much use. You run into the garbage-in-garbage-out problem, where certain factors play a role that it’s nearly impossible for any study to control for. And you have to translate between different research methods to get a singular result. Better, I think, to allow for different results and simply note the patterns that they represent.

    The individual studies can be useful. They pretty consistently show a correlation between violent video games and aggressive behavior. So it could be that violent video games are causing the behavior, or it could be that an interest in violent video games is indicative of an aggressive temperament. That latter point is useful because it could tell parents, “Hey, if your kid has a penchant for violent video games, you might want to keep an eye out for aggressive behavior elsewhere” – and, if it’s noticed, to talk about how there is a time and a place for aggressiveness of different sorts (in video games, on a sports field, etc.) so the kid learns the appropriate behavior in appropriate places.

    Without clearly demonstrated causality, though (and that’s a pretty high bar), we should be really careful before passing broad laws on the basis of assumed causality. It’s useful to study these things even when it doesn’t prove something beyond doubt, but useful information should not automatically translate into inflexible policy.

    Even if we could demonstrate cause-and-effect, it clearly doesn’t cause aggressiveness in all kids. We may find other common factors distinguishing the kids who become aggressive from those who don’t. It would be a bad idea for the government, or even parents, to assume that just because violent video games can lead to problems in some kids, they should be outright banned. Among other things, we could discover that for some kids, playing the video games can serve as an effective outlet for aggressive energy.

  3. web says:

    Will,

    The individual studies can be useful. They pretty consistently show a correlation between violent video games and aggressive behavior.

    Actually, the body of literature is wide open. There are studies that show a correlation, other studies that show no correlation after correcting for intervening/confounding variables, but no reputable studies as of yet (apart from “metastudies”) that show a causal factor.

    Keep in mind, a great many studies manage to paint the words “violent video game” with such a broad brush that it becomes meaningless. When your “study” defines (among other titles) Burgertime, Super Mario Bros, and Nerf Arena Blast as “violent” games, something is rather amiss.

    If you run tests with three different methods and all three produce the same results, then it can be said that the results were the same “regardless of research method.”

    This is one way of saying it. However, it is also a trigger to indicate that much greater scrutiny of the claims is needed. Likewise when the author of the “metastudy”, who has never had a “study” of his fail to reach his pre-decided conclusion, announces that the study has reached “definitive findings” and “conclusively proves” his agenda point, it seems we have the work of a charlatan on our hands.

  4. trumwill says:

    Actually, the body of literature is wide open. There are studies that show a correlation, other studies that show no correlation after correcting for intervening/confounding variables, but no reputable studies as of yet (apart from “metastudies”) that showed a causal factor.

    More of the ones I’ve seen than not suggest a correlation. That makes a good deal of sense, really, even if we take for granted that there is absolutely no causation. Aggressive people are more likely than not to be attracted to violent video games. I would add that, in addition to questioning what constitutes a “violent video game” for these studies, you’re also left with the question of what constitutes “aggressive behavior,” a category within which some would include pretend violence that causes no more harm than monkey bars.

    My view on the subject is pretty well captured by this Ars Technica article. Correlation has been demonstrated, but somewhat weakly. Causation is unproven, but in limited circumstances possible. I find the results of this study to be pretty plausible.

    While I do not like how some people look at a study and say “See! This study proves what I’ve been saying all along!”, I find myself lately agitated with those who can find any flaw in any study and declare it a fatal one. In some discussions, it has really gotten to the point that you can’t cite any statistics because none match up to the “common sense” of the person you’re debating with. To be sure, I’m not including you or this post in that category as this study really has problems.

  5. web says:

    Will,

    More of the ones I’ve seen than not suggest a correlation. That makes a good deal of sense, really, even if we take for granted that there is absolutely no causation. Aggressive people are more likely than not to be attracted to violent video games. I would add that, in addition to questioning what constitutes a “violent video game” for these studies, you’re also left with the question of what constitutes “aggressive behavior,” a category within which some would include pretend violence that causes no more harm than monkey bars.

    Also remember: the problem with “correlation links” is that we’re back to publication bias. Studies which show “no link” or “zero correlation” tend to remain unpublished (or else get published in lower-impact journals) more often than studies “showing something significant.” It’s been getting the most exposure in medical circles, but it’s a problem in pretty much every field.

    In some discussions, it has really gotten to the point that you can’t cite any statistics because none match up to the “common sense” of the person you’re debating with. To be sure, I’m not including you or this post in that category as this study really has problems.

    This is true, but the underlying problem of publication bias really doesn’t help those situations much. Anyone familiar with publication bias and past misbehavior on the part of scientists is almost destined to develop a jaded eye towards statistics.

    Of course, it doesn’t help that activists/agenda-ists and snake oil salesmen tend to use the same words. A real, proper scientific report states its findings mathematically – “an effect of size X, with a 95% confidence interval of [Y, Z]”, for instance. Snake oil salesmen and activists/agenda-ists, on the other hand, go straight for calling something “definitively proven” or “conclusively proven”.

    When I see a keyword like that, I immediately become skeptical. Because let’s face it, chances are it’s bogus.
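
    To make that concrete, here’s roughly what honest reporting looks like, as a toy Python sketch (invented data, nothing from any real study): estimate a correlation and attach a 95% confidence interval via the Fisher z-transform, rather than declaring anything “proven”:

    ```python
    # Toy sketch: report a correlation as an estimate plus a 95% confidence
    # interval (Fisher z-transform), not as "conclusive proof". Data invented.
    import math
    import numpy as np

    rng = np.random.default_rng(2)
    n = 80
    x = rng.normal(0, 1, n)
    y = 0.3 * x + rng.normal(0, 1, n)  # a weak true relationship

    r = np.corrcoef(x, y)[0, 1]
    z = math.atanh(r)                  # Fisher z-transform
    se = 1.0 / math.sqrt(n - 3)        # approximate standard error of z
    lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
    print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    A wide interval tells the reader exactly how much (or how little) the data pin down – which is precisely why the charlatans avoid reporting one.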

  6. trumwill says:

    Anyone familiar with publication bias and past misbehavior on the part of scientists is almost destined to develop a jaded eye towards statistics.

    Here’s the problem: With this in mind, anybody and everybody is absolutely free to disregard any and every study that puts out statistics inconvenient to the speaker. I do think that statistics should be scrutinized and I do believe that we should be wary of statistics definitively proving anything.

    But one can easily go too far in the other direction. Literally no statistical analysis can stand up to an infinite amount of scrutiny. So we should keep in mind that all studies are flawed to some degree and that results are almost always going to be skewed somehow. But sometimes you have to go forward with the numbers that you have.

    That’s kind of difficult in this case, though, because even if what the study says is entirely accurate, it’s simply not enough to justify legislation. It’s not even enough to justify banning it in the Truman household. There isn’t a place to “go forward” to.

    The fact that a lot of studies suggest there may be a link probably does mean it’s worth keeping an eye out – if I have a kid who tends to play violent video games – for whether the adrenal juices produce excessively aggressive play afterwards.

    Given the flaws in the study, and the fact that even if the results are true it only affects borderline cases, it’s probably not something that we should worry about too much. But it’s something to file away for future reference (both the results of the studies and the potential methodological problems) rather than completely dismiss.

  7. web says:

    Will,

    The problem is that nobody has ever managed to come up with a causal link except by misrepresenting their data (and being caught doing so).

    It’s sort of like the “statistically significant correlational link” between the amount of ice cream a person consumes and their risk of drowning. The problem is that the “link” is merely seasonal (ice cream consumption and swimming both happen more frequently in the summer).

    When a “metastudy” hodgepodges together a series of discredited, questionable, and misrepresented studies and then announces a “causal” link as its conclusion, flashing lights and “Danger, Will Robinson” alarms should be going off. And so I can’t agree: filing this away “for future reference” is a bad idea. Go back to the original studies and work out which (if any) are valid and what they actually say, but we may as well take this garbage and toss it where other garbage goes.

    But sometimes you have to go forward with the numbers that you have.

    If you’re doing an honest study, you can release the numbers you have, come to your conclusion, and include a notation in your conclusions that “followup to this study should attempt to verify the relevance and impact of possible confounding variable X.”

    The “numbers that you have” in regard to this “metastudy”, however, are already suspect – they picked 130 studies, probably constituting less than 1 or 2 percent of the available studies concerning violent media, and it is quite obvious that they picked only studies matching the conclusion they wanted to reach.

  8. trumwill says:

    The problem is that nobody has ever managed to come up with a causal link except by misrepresenting their data (and being caught doing so).

    I would say that coming up with proof of an actual causal link would be a tremendously difficult thing to do. That none has been established does not mean much. I would also say that correlation alone can be useful knowledge even if causation cannot be established.

    Regarding this particular study and the claims attached to it, I don’t disagree. I’m not defending this study. As I’ve said before, it’s very suspect.

  9. Kirk says:

    I’ve never heard the term “metastudy” before this. I take it from your descriptions that it is just a collection of other studies.

    Isn’t that a bit like taking a pile of parts from different cars (of different makes and models) and trying to put them together to make another car?

  10. web says:

    Kirk,

    A “metastudy” is supposed to be a statistical analysis of a “group of similarly conducted studies on the same topic.” Ideally, it’s a statistical analysis of a group of studies that all had the same (or “close enough”) methodology and term definitions.

    In medicine, this has the possibility of actually working: if you have 10 studies in 10 different locations studying the same drug, each with the same criteria for candidacy, placebo, blinding, time period, and so on, you can aggregate the data for a larger sample size. Unfortunately, the reality (see: publication bias, and the selection bias of “leaving out” unfavorable studies) has been less than pleasant.
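
    As a toy sketch of that aggregation (Python, with ten invented same-protocol trials, not real drug data): pooling by inverse-variance weights shrinks the standard error by roughly the square root of the number of trials, which is the entire legitimate appeal of the method:

    ```python
    # Toy sketch: fixed-effect pooling of ten hypothetical identical-protocol
    # trials. Same design -> same standard error; pooling shrinks the SE by
    # about sqrt(10). All numbers invented.
    import numpy as np

    rng = np.random.default_rng(3)
    effects = 0.20 + rng.normal(0, 0.02, 10)  # ten trials of one drug
    ses = np.full(10, 0.10)                   # identical designs, identical SEs

    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))

    print(f"single trial:  SE = {ses[0]:.3f}")
    print(f"pooled (n=10): effect = {pooled:.3f}, SE = {pooled_se:.3f}")
    ```

    But that arithmetic is only honest when the trials really are interchangeable, and when none of them have been quietly left in the file drawer.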

    When it comes to other disciplines (psychology, climate science, behavioral learning, etc), yeah, your analogy of the Frankencar is about accurate. The best you can hope for is something that “Kind of works but is really messy”, and the norm is something fatally flawed from the very first premise.
