Scientific journal articles can be incredibly intimidating to read, even for other scientists. Heck, I have a Ph.D. in a research science and have authored scientific papers, but sometimes I look at a research report outside my field of study and just go, “Nope, can’t decipher this.”
Learning to read them is an important skill, however, in today’s environment of what I call “research sensationalism.” This is where the popular media gets hold of a scientific research report and blows the findings WAY out of proportion, usually while misrepresenting what the researchers actually did and/or found. You know what I’m talking about.
Unfortunately, you can’t trust popular media reports about scientific research studies. Too often, it’s shockingly evident that the people writing these reports (a) aren’t trained to evaluate scientific research, and (b) are just parroting whatever newswire release they got that morning with no apparent fact-checking.
Thus, if staying informed is important to you—or you just want to be able to shut down all the fearmongers in your life—you need to learn how to read the original journal articles and form your own judgments. You don’t have to become an expert in every scientific field, nor a statistician, to do so. With a little know-how, you can at least decide if the popular media reports seem accurate and if any given study is worth your time and energy.
Where to Begin
First things first, locate the paper. If it’s behind a paywall, try searching Google Scholar to see if you can find it somewhere else. Sometimes authors upload pdfs to their personal webpages, for example.
Ten years ago, I would have told you to check the journal’s reputation next. Now there are so many different journals with different publishing standards popping up all the time, it’s hard to keep up. More and more researchers are choosing to publish in newer open access journals for various reasons.
Ideally, though, you want to see that the paper was peer reviewed. This means that it at least passed the hurdle of other academics agreeing that it was worth publishing. This is not a guarantee of quality, however, as any academic can tell you. If a paper isn’t peer reviewed, that’s not an automatic dismissal, but it’s worth noting.
Next, decide what type of paper you’re dealing with:
Reviews & commentary
- Authors synthesize what is “known” and offer their own interpretations and suggestions for future directions.
- Rarely the ones getting popular press.
- Great if you want to know the new frontiers and topics of debates in a given field.
Original research, aka empirical research
- Report one or more studies in which the researchers gather data, analyze it, and present their findings.
- Encompasses a wide variety of methods, including ethnographic and historical data, observational research, and laboratory-based studies.
Meta-analyses & systematic reviews
- Attempt to pool or summarize the findings of a group of studies on the same topic to understand the big picture.
- Combining smaller studies increases the number of people studied and the statistical power. It can also “wash out” minor problems in individual studies.
- Only as good as the studies going into them. If there are too few studies, or existing studies are of poor quality, pooling them does little. Usually these types of reports include a section describing the quality of the data.
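To see why pooling helps, here's a toy simulation (mine, not from any real meta-analysis) comparing how often small versus pooled studies detect a real but modest effect. It uses a crude two-standard-error cutoff as a stand-in for a proper statistical test, just to illustrate the idea of statistical power:

```python
import random
import statistics

random.seed(42)

def significant(sample_a, sample_b):
    """Crude test: is the difference in group means more than
    ~2 standard errors? (A stand-in for a proper t-test.)"""
    n = len(sample_a)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = ((statistics.variance(sample_a) + statistics.variance(sample_b)) / n) ** 0.5
    return abs(diff) > 2 * se

def draw(n, mean):
    """Draw n measurements from a group whose true average is `mean`."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

TRUE_EFFECT = 0.3   # a real but small difference between the groups
RUNS = 500

# Small studies: 20 people per group
small_hits = sum(
    significant(draw(20, TRUE_EFFECT), draw(20, 0.0)) for _ in range(RUNS)
)

# "Pooled" data: 200 people per group (ten small studies combined)
pooled_hits = sum(
    significant(draw(200, TRUE_EFFECT), draw(200, 0.0)) for _ in range(RUNS)
)

print(f"Small studies detecting the effect:  {small_hits / RUNS:.0%}")
print(f"Pooled samples detecting the effect: {pooled_hits / RUNS:.0%}")
```

The effect is real in every run, but the small studies miss it most of the time, while the pooled samples catch it reliably. That's why a pile of small "null" studies can still add up to a clear finding.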
Since popular media articles usually focus on empirical research papers, that’s what I’ll focus on today. Meta-analyses and reviews tend to be structured in the same way, so this applies to them as well.
Evaluating Empirical Research
Scientists understand that even the best-designed studies have issues. It's easy to pick apart and criticize any study, but having flaws doesn't automatically make a study unreliable. As a smart reader, part of your job is to learn to recognize a study's weaknesses, not necessarily to tear it down, but to put the findings in context.
For example, there is always a trade-off between real-world validity and experimental control. When a study is conducted in a laboratory—whether on humans, mice, or individual cells—the researchers try to control (hold constant) as many variables as possible except the ones in which they are interested. The more they control the environment, the more confident they can be in their findings… and the more artificial the conditions.
That’s not a bad thing. Well-controlled studies, called randomized controlled trials, are the best method we have of establishing causality. Ideally, though, they’d be interpreted alongside other studies, such as observational studies that detect the same phenomenon out in the world and other experiments that replicate the findings.
NO STUDY IS EVER MEANT TO STAND ON ITS OWN. If you take nothing else from this post, remember that. There is no perfect study. No matter how compelling the results, a single study can never be “conclusive,” nor should it be used to guide policy or even your behavioral choices. Studies are meant to build on one another and to contribute to a larger body of knowledge that as a whole leads us to better understand a phenomenon.
Reading a Scientific Journal Article
Most journal articles follow the same format: Abstract, Introduction, Methods, Results, Discussion/Conclusions. Let’s go through what you should get out of each section, even if you’re not a trained research scientist.
The Abstract succinctly describes the purpose, methods, and main findings of the paper. Sometimes you’ll see advice to skip the abstract. I disagree. The abstract can give you a basic idea of whether the paper is interesting to you and if it is likely to be (in)comprehensible.
DO NOT take the abstract at face value though. Too often the abstract oversimplifies or even blatantly misrepresents the findings. The biggest mistake you can make is reading only the abstract. It is better to skip it altogether than to read it alone.
The Introduction describes the current research question, i.e., the purpose of the study. The authors review past literature and set up why their study is interesting and needed. It’s okay to skim the intro.
While reading the introduction:
- Make a note of important terms and definitions.
- Try to summarize in your own words what general question the authors are trying to address. If you can, also identify the specific hypothesis they are testing. For example, the question might be how embarrassment affects people’s behavior in social interactions, and the specific hypothesis might be that people are more likely to insult people online when they feel embarrassed.
- You might choose to look up other studies cited in the introduction.
The Methods should describe exactly what the researchers did in enough detail that another researcher could replicate it. Methods can be dense, but I think this is the most important section in terms of figuring out how much stock you should be putting in the findings.
While reading the methods, figure out:
- Who/what were the subjects in this study? Animals, humans, cells?
- If this is a human study, how were people selected to participate? What are their demographics? How well does the sample represent the general population or the population of interest?
- What type of study is this?
- Observational: observing their subjects, usually in the natural environment
- Questionnaire/survey: asking the subjects questions, such as opinion surveys, behavioral recall (e.g., how well they slept, what they ate), and standardized questionnaires (e.g., personality tests)
- Experimental: researchers manipulate one or more variables and measure the effects
- If this is an experiment, is there a control condition—a no-treatment condition used as a baseline for comparison?
- How were the variables operationalized and measured? For example, if the study is designed to compare low-carb and high-carb diets, how did the researchers define “low” and “high”? How did they figure out what people were eating?
Some red flags that should give you pause about the reliability of the findings are:
- Small or unrepresentative sample (although “small” can be relative).
- Lack of a control condition in experimental designs.
- Variables operationalized in a way that doesn’t make sense, for example “low-carb” diets that include 150+ grams of carbs per day.
- Variables measured questionably, as with the Food Frequency Questionnaire.
The Results present the statistical analyses. This is unsurprisingly the most intimidating section for a lot of people. You don’t need to understand statistics to get a sense of the data, however.
While reading the results:
- Start by looking at any tables and figures. Try to form your own impression of the findings.
- If you aren’t familiar with statistical tests, do your best to read what the authors say about the data, paying attention to which effects they are highlighting. Refer back to the tables and figures and see if what they’re saying jibes with what you see.
- Pay attention to the real magnitude of any differences. Just because two groups are statistically different or something changes after an intervention doesn’t make it important. See if you can figure out in concrete terms how much the groups differed, for example. If data are only reported in percentages or relative risk, be wary of drawing firm conclusions.
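Here's a quick bit of arithmetic (with made-up numbers, purely for illustration) showing why a relative-risk headline can mislead. "Risk doubled!" sounds alarming until you see the absolute numbers behind it:

```python
# Hypothetical numbers for illustration only, not from any real study.
control_events, control_n = 2, 10_000   # 2 cases per 10,000 people
treated_events, treated_n = 4, 10_000   # 4 cases per 10,000 people

control_risk = control_events / control_n
treated_risk = treated_events / treated_n

relative_risk = treated_risk / control_risk        # the headline number
absolute_increase = treated_risk - control_risk    # the concrete difference

print(f"Relative risk: {relative_risk:.1f}x ('risk doubled!')")
print(f"Absolute increase: {absolute_increase:.4%} "
      f"({treated_events - control_events} extra cases per {control_n:,} people)")
```

A doubling of relative risk here works out to two extra cases in ten thousand people. Both numbers are true; only one tells you how much the difference actually matters.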
It can take a fair amount of effort to decipher a results section. Sometimes you have to download supplementary data files to get the raw numbers you’re looking for.
The Discussion or Conclusions summarize what the study was about. The authors offer their interpretation of the data, going into detail about what they think the results actually mean. They should also discuss the limitations of the study.
While reading the discussion:
- Use your own judgment to decide if you think the authors are accurately characterizing their findings. Do you agree with their interpretation? Are they forthcoming about the limitations of their study?
- Watch out for absolute claims like “proved.” Hypotheses can be supported, not proven.
- Watch out for causal language applied to correlational data! As I said above, well-controlled experimental designs are the only types of research that can possibly speak to causal effects. Questionnaire, survey, and historical data can tell you when variables are potentially related, but they say nothing about what causes what. Anytime authors use words like “caused,” “led to,” or “_[X]_ increased/decreased _[Y]_” about variables they didn’t manipulate in their study, they are either being sloppy or intentionally misleading.
What about Bias?
Bias is tricky. Even the best intentioned scientists can fall victim to bias at all stages of the research process. You certainly want to know who funded the study and if the researchers have any conflicts of interest. That doesn’t mean you should flatly dismiss every study that could potentially be biased, but it’s important to note and keep in mind. Journal papers should list conflicts of interest.
Solicit Other Opinions
Once you feel like you have your own opinion about the research, see what other knowledgeable people you trust have to say. I have a handful of people I trust for opinions—Mark, of course, Chris Kresser, and Robb Wolf being a few. Besides fact-checking yourself, this is a good way to learn more about what to look for when reading original research.
To be clear, I don’t think it’s important that you read every single study the popular media grabs hold of. It’s often okay just to go to your trusted experts and see what they say. However, if a report has you really concerned, or your interest is particularly piqued, this is a good skill to have.
Remember my admonition: No study is meant to stand alone. That means don’t put too much stock in any one research paper. It also means don’t dismiss a study just because it’s imperfect, narrow in scope, or otherwise flawed. This is how science moves forward—slowly, one (imperfect) study at a time.
That’s it for today. Share your questions and observations below, and thanks for reading.