In the past few weeks, you have probably seen or heard claims that a new study has found that GMOs contain dangerous levels of formaldehyde; however, the reality is that the study itself is questionable at best, and the claims being made by anti-GMO activists are dangerously misleading. This situation illustrates the crucially important fact that not all scientific studies are equal. As I have previously explained, the peer-review system is good, but it isn’t perfect, and sometimes bad papers do get through. So you should always read scientific papers carefully and critically evaluate whether or not the paper can be trusted (regardless of whether or not the paper agrees with your preconceived views). Therefore, in this post, I am going to provide you with a rough guide for understanding scientific results.
I want to be clear from the start that, with a few exceptions (like #7 and possibly #8), most of these are not in and of themselves justification for ardently rejecting the results of a paper. In other words, they do not give you carte blanche to disregard scientific results with reckless abandon. Rather, they are guidelines to help you judge how confident you should be in the results and how much time you need to invest in critically analyzing the paper. So, for example, suppose that I am reading a paper and the statistics aren’t explained particularly well, which leaves me less than completely confident that they were done correctly. If the paper was written by respected experts in their field, published in a high impact factor journal, and makes sense in light of what other researchers have found, then I would probably give the authors the benefit of the doubt and assume that they knew what they were doing. In contrast, if that same paper was written by non-experts, published in a low-impact or predatory journal, and directly conflicts with numerous other studies without adequately explaining the discord, then I would probably assume that something was wrong with it unless its results are confirmed by future research. Remember that extraordinary claims require extraordinary evidence. So if you are going to claim that a widely accepted scientific principle is wrong, that you have found a simple cure for cancer, that vaccines are dangerous, etc., then your data and paper are going to need to be impeccable.
1. Read the original study yourself
This may seem obvious, but lots of people fail to do it. Rather than reading the study themselves, they simply trust the summary provided by a blog or news article. This is extremely problematic because those sources are, by their very nature, secondhand. This means that you are relying on someone else to evaluate the study, and you are implicitly trusting them to accurately report what it found; but, as we all know, people are flawed and biased. The media tends to sensationalize things, and most blogs are agenda-driven and written by non-experts. So you should never rely on someone else to examine the study for you. Rather, you should always read and critique the original paper for yourself (that goes for any papers that I talk about on this blog as well). Irritatingly, many papers are behind paywalls, so it can be difficult to get hold of the article, but you can often get access through a library, or, if nothing else, you can email the lead author and ask for a copy. It is very common for scientists to get emails asking for copies of their papers, and they are generally more than happy to send you their work.
2. Don’t rely on titles and abstracts
Another common mistake that people make is to read only the abstract (or sometimes just the title). The problem is that abstracts and titles are often misleading. Whether intentionally or unintentionally, abstracts often misrepresent what the scientists found (I think that this is usually just an accidental result of trying to condense a study into a few attention-grabbing sentences). Further, the abstract is necessarily terse, so you don’t get the details of how the study was conducted or how the data were analyzed. Thus, you simply cannot evaluate a study if you are only reading the abstract.
Many people argue that you shouldn’t even read the abstract until you have read the rest of the paper, that way it doesn’t bias you. That is a perfectly valid argument, but I personally prefer to start with the abstract, because I view it as a thesis statement of what the authors think they found. Then, as I read the paper, I compare that thesis statement to what the authors actually did and what their data actually say in order to see if their claims check out. So, you can use either approach as long as you keep in mind that the interpretation presented in the abstract may not actually be correct, and it is your job to assess the validity of the authors’ claims.
3. Acquire the necessary background knowledge
Scientific papers are written in the language of science. They are full of jargon and complex terms. If you really want to understand what the authors did and what they found, you are going to have to take the time to learn the background information and learn what those technical terms mean. This is going to be a big time investment, but it is absolutely vital if you want to be able to properly critique a paper.
4. Make sure that the paper is published in a legitimate journal
Recent years have seen a massive proliferation of what are known as “predatory journals.” These are publications which masquerade as legitimate peer-reviewed journals but do not actually follow the standards of peer review, and they are usually either agenda-driven or simply for-hire (i.e., they will publish almost anything if you pay them), though I usually see the term “predatory” used for the latter case. The outstanding blog Scholarly Open Access does a great job of exposing predatory journals and publishers, and it maintains a large list of them. So it is a useful reference, which I personally have bookmarked.
To give one extreme example of a fake journal, consider DeNovo. It claims to be a proper peer-reviewed journal, but when we look more closely, we find a fascinating history. A few years ago, “scientist” and Bigfoot believer Dr. Melba Ketchum claimed to have found DNA evidence that Bigfoot existed, and even claimed to have sequenced part of the genome. There was only one problem: all of the actual journals that Dr. Ketchum submitted her work to concluded that her results were nonsense and promptly rejected her paper. So, rather than simply admitting that her study was no good, she bought a journal (now known as DeNovo), and (big surprise) her journal was willing to publish her study (I’m sure it was given a rigorous and unbiased peer-review [note the immense sarcasm]). To date, there are only two publications in DeNovo and both of them were written by Ketchum.
DeNovo is an extreme and laughable example of this type of corruption, but most of these journals are more serious threats to the dissemination of scientific knowledge, because most of them are not as obviously absurd. For example, the Journal of American Physicians and Surgeons looks like a legitimate journal, and indeed it claims to be one, but it is driven by extreme ideology and cranks out one absurd and fundamentally flawed study after another. Further, when you look at its members and editorial board, you find that it is populated by prominent anti-vaccers and purveyors of woo such as Blaylock and Mercola. Similarly, the now defunct “journal” Medical Veritas was notorious for publishing outlandishly terrible anti-vaccine papers, and its editorial board contained people such as Andrew Wakefield (yes, that’s the same Wakefield that falsified data, started the “vaccines cause autism” myth, and had his medical license revoked for unethical behavior). Fortunately, that pseudo-journal is now out of business, but its papers are still available, and I frequently have anti-vaccers send me links to those papers. The problem with a journal like that should be obvious: we cannot be at all confident in the stringency of its review process when people like Wakefield are serving as editors, and when its papers are consistently of an extremely low quality.
I want to be very clear about what I am saying here. The fact that a paper was published in a questionable journal does not automatically mean that the study itself was bad, but it does mean that you cannot be confident that it received a proper peer-review. Indeed, more often than not, when someone sends me a paper that supposedly shows that GMOs are dangerous, vaccines don’t work, etc. it is from one of these pseudo-journals. The whole point of the peer-review process is to have objective third parties who are experts in the relevant fields critically examine the work, and these fake journals distort or completely evade that process. So, although you cannot automatically assume that something is wrong just because it appeared in one of these journals, you should be very, very skeptical of it, and you should examine it extremely closely before accepting its results.
5. Check the authors for relevant expertise and conflicts of interest
It is very common for pseudo-scientific papers to be written by non-experts, or people who are experts in a completely unrelated field. For example, in the GMO paper that I mentioned at the beginning, the author has no training or experience in genetic engineering, toxicology, or even biology. Science is very complicated and, despite what movies and TV often show, the majority of scientists don’t know that much about fields outside of their own areas of study. If we want to branch out, we almost always collaborate with experts in the relevant fields and include them on the publications. So, when someone with no training or expertise concludes that the vast majority of actual experts are horribly wrong, that should make you very suspicious.
The second problem is having a conflict of interest. Both money and ideology can easily cloud a researcher’s judgment. Fortunately, journals require researchers to declare conflicts of interest, so it is easy for you to see whether or not a paper has any. Also, I want to stress the fact that a conflict of interest does not automatically invalidate a study, it simply gives you a good reason to be extra critical and be very careful when evaluating the research.
On a side note, it is a common myth that all of the pro-vaccine studies are funded by pharmaceutical companies; in reality, many are not, and in many cases grants simply provide research funding, not salaries, so the researchers themselves have little to gain financially.
6. See if the journal’s impact factor matches the paper’s claims
The impact factor is simply a measure of how widely cited a journal is. High impact journals are very widely read and cited, whereas low impact journals are not read by many researchers. So, if you have an extraordinary result that is going to be of wide interest to many people, then you typically publish it in a high impact journal, but if your results are of fairly narrow interest, then they will be published in a low impact journal. Generally speaking, high impact journals also have more stringent peer review and only publish really high quality papers. To be clear, having a low impact factor does not automatically mean that a journal is of poor quality; it may simply be a specialist journal, such as a journal focused on a particular region or taxon, but in other cases it is a sign of low quality. So, rather than simply looking at the raw impact factor, I suggest looking at the impact factor in conjunction with the paper’s claims. For example, if a properly designed study found that vaccines did actually cause autism, that would be huge news and could be published in a very high impact factor journal. So, the fact that nearly all of the papers which claim that vaccines cause autism are published in extremely low impact journals should be very troubling to you. It should make you ask, “why wasn’t this published in a better journal?” The answer is generally that the study was flawed and thus could not get into a high quality journal.
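For what it’s worth, the standard two-year impact factor is a simple ratio, and it’s easy to see how it is computed. The sketch below uses made-up numbers purely for illustration (the official figures are calculated by Clarivate for its Journal Citation Reports; the dictionaries here are hypothetical):

```python
def impact_factor(citations_in_year, published_counts, year):
    """Two-year impact factor for `year`: citations received in `year`
    to items published in the previous two years, divided by the number
    of citable items published in those two years."""
    cited = citations_in_year[year - 1] + citations_in_year[year - 2]
    published = published_counts[year - 1] + published_counts[year - 2]
    return cited / published

# Hypothetical journal: citations received in 2015, broken down by the
# year the cited items were published, and the number of items published.
citations = {2013: 120, 2014: 90}
papers = {2013: 60, 2014: 40}
print(impact_factor(citations, papers, 2015))  # 2.1
```

An impact factor of 2.1 would mean that, on average, each recent paper in the journal was cited about twice in the past year, which is respectable for a specialist journal but far below top-tier journals.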
7. Make sure that the study was designed, conducted, and analyzed correctly
This is the single most important point. Nothing else really matters if the experimental design is no good. Even if it was written by a leading expert and was published in one of the world’s best journals, if the experimental design was flawed, then the paper should be disregarded. So, you need to make sure that the study had a large enough sample size, used the proper controls, randomized correctly, used the appropriate statistical analyses, etc. This is again going to require you to learn experimental design and statistics. If you don’t understand those topics, then you simply aren’t qualified to critically examine scientific papers. That’s not me being an elitist; that is me simply stating what should be an obvious fact: if you don’t understand statistics, then you can’t be sure that the authors used the correct analyses. If you want to really be able to understand science, that is great! I applaud you for that and wish that everyone would endeavor to learn science, but don’t fool yourself into thinking that you can get by with reading blogs and glossing over the analysis sections of papers. Truly understanding science is going to take a lot of work and effort. You need to get yourself several good statistics books and take some statistics courses online or through a local college before you are qualified to critique scientific papers.
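To give a concrete sense of what “a large enough sample size” means, here is a rough back-of-the-envelope calculation (my own illustration, not drawn from any particular paper) using the standard normal-approximation formula for comparing the means of two groups:

```python
import math
from statistics import NormalDist

def samples_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Approximate sample size needed per group to detect a true
    difference in means of `delta` (population standard deviation
    `sigma`) at significance level `alpha` with the given statistical
    power, using the normal approximation n = 2(z_a + z_b)^2 (s/d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Detecting a "medium" effect (a difference of half a standard
# deviation) at the usual alpha = 0.05 with 80% power:
print(samples_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

The point of the sketch is simply that modest effects require surprisingly large samples: a study with ten subjects per group has essentially no power to detect a medium-sized effect, so its “negative” (or, worse, its barely significant positive) result means very little.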
Additionally, when reading a paper, you need to check that the authors interpreted their results correctly. In other words, make sure that the authors’ claims are justified by the results. I have seen numerous papers where the designs and analyses were fine, but the authors proceeded to draw conclusions that simply couldn’t be drawn from those results. So, make sure that the authors aren’t making unjustified assumptions or logical leaps. In other words, trust the data, not the authors’ interpretation of the data.
8. See if the paper is consistent with other studies
This is another really important point. When there are lots of papers on a given topic, you should be wary of the one or two that disagree with the others. For those papers, in addition to the normal checks for proper experimental design, statistics, etc. you should look for an explanation of why those papers got different results. Remember that if you do the same experiment enough times, you will eventually get erroneous results just by chance. So even if a paper was published in a high impact factor journal and was conducted properly, it may be wrong simply by chance (this is a statistical fluke that I previously described in detail, so I won’t belabor the point). Therefore, you should see if the authors offered a compelling explanation for their results. If they simply state that the other papers were wrong without explaining why, or (worse yet) simply ignore the other papers, you should be very suspicious. If, however, they can give a compelling explanation of what they did differently, and why their results are reliable, then you can have more confidence in that paper (note: this hinges on the reliability of their explanation). Ultimately though, it sometimes comes down to simply waiting for further testing. There is absolutely nothing wrong with saying, “this paper may be correct, but it makes extraordinary claims so I am going to wait for additional testing before reaching a conclusion.” That is a vital part of healthy skepticism. The problem arises when either the results have been repeatedly confirmed and you still ignore or deny them, or, conversely, when the results haven’t been independently verified, but you cling to them as absolute truth.
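The “wrong simply by chance” point is easy to quantify. At the conventional significance threshold of α = 0.05, each independent study of a nonexistent effect still has a 5% chance of producing a statistically significant “result,” so across many studies the odds of at least one spurious hit grow quickly. A quick sketch:

```python
def prob_false_positive(n_studies, alpha=0.05):
    """Probability that at least one of `n_studies` independent studies
    of a nonexistent effect reports a statistically significant result,
    i.e. 1 minus the probability that all n studies correctly find nothing."""
    return 1 - (1 - alpha) ** n_studies

for n in (1, 5, 20):
    print(n, round(prob_false_positive(n), 2))
# 1  -> 0.05
# 5  -> 0.23
# 20 -> 0.64
```

In other words, with twenty independent studies of an effect that does not exist, the odds are better than even that at least one of them “finds” it. That is exactly why a lone outlier paper should carry far less weight than the body of studies it contradicts.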
The other thing that I encourage you to look for is adequate citations to back up the authors’ claims (even if the paper agrees with the consensus and is just elaborating on some mechanism or detail). No paper should be an island. They should always build on existing work and back up their claims using what other researchers have found. The introduction of the paper is supposed to describe what is currently known and lay the groundwork for the study at hand, whereas the discussion section is designed to discuss the results of the current study in light of previous studies and explain how it expands our understanding. Both of these sections should be citation heavy, and, importantly, they should cite high quality studies. I often find that pseudo-scientific papers make extraordinary claims in the introduction/discussion but utterly fail to back them up with appropriate citations, or their citations lead to questionable papers (often from predatory journals). Both of those situations are big clues that the paper isn’t grounded in reality and the authors may not actually know what they are talking about.
9. Make sure the paper follows the standard conventions of scientific writing
Scientific writing is generally objective and dispassionate. It’s designed to focus on facts, not emotions. So if the paper that you are reading sounds more like an impassioned argument than a dispassionate analysis, that should concern you. Consider, for example, this excerpt from the abstract of an anti-vaccine “paper.”
The propaganda dispensed by Public health care and vaccine apologists is, at best, a weak attempt to rationalize the healthcare establishment’s positions using all the tools of doublespeak or, as George Orwell’s called it in his book 1984, “newspeak”, to: (a) mislead, (b) distort reality, (c) pretend to communicate, (d) make the bad seem good, (e) avoid and/or shift responsibility, (f) make the negative appear positive,(g) create a false verbal map of the world, and (h) create dissonance between reality and what their narrative said or did not say.
My favorite part of this quote is how aptly it describes anti-vaccers, but I digress. This just isn’t how scientific papers are written. Whenever you see mainstream science described as “propaganda,” you can be fairly certain that you aren’t dealing with a proper paper. In this particular case, the article was published in Medical Veritas which, as I previously explained, was an extremely biased and agenda-driven pseudo-journal.
10. If all else fails, check Google for a refutation
I am extremely hesitant to include this, and I encourage you never to rely on it (after all, it directly conflicts with my first point), but if nothing else, before you blindly believe an abstract or blog about a paper, get on Google and see if people have objectively critiqued it. I don’t like this option because it means that you are relying on other people. So, again, I think that you should read and examine the paper for yourself, but if you don’t have the necessary background knowledge and don’t plan on acquiring it, then at the very least, do a basic fact check on the article before you accept a paper. If lots of people are pointing out flaws in it (particularly sites that are usually credible like Skeptical Science and Science Based Medicine), then you are probably safe in rejecting it. Conversely, if the only sites that are singing its praises are places like Collective Evolution and Natural News, you should be skeptical. Importantly, look at the actual arguments that they are using rather than simply thinking, “X says it’s good, therefore it is.”
The only other time that I would encourage using this step is to be sure that you didn’t miss something. In other words, it’s fine to see what other people have to say about the paper as long as you go back to the original paper to see if they are right, rather than blindly taking their word for it. Also, I encourage you to read the paper yourself first, that way you aren’t biased by their views, and you should always make sure that you are being objective. In other words, your question should simply be “is this paper right?” rather than “who agrees with my view of this paper?”
CHECKLIST FOR EVALUATING A RESEARCH REPORT
Provided by Dr. Blevins
1. The Title
a. Is it clear and concise?
b. Does it promise no more than the study can provide?
2. The Problem
a. Is it clearly stated?
b. Is it properly defined?
c. Is its significance recognized?
d. Are specific questions raised; hypotheses clearly stated?
e. Are assumptions and limitations stated?
f. Are important terms defined?
3. Review of Related Literature
a. Is it adequately covered?
b. Are important findings noted?
c. Is it well organized?
d. Is an effective summary provided?
4. Procedures Used
a. Is the research design described in detail?
b. Is it adequate?
c. Are the samples described?
d. Are relevant variables recognized?
e. Are appropriate controls provided?
f. Are data-gathering instruments appropriate?
g. Are validity and reliability established?
h. Is the statistical treatment appropriate?
5. Data Analysis
a. Is appropriate use made of tables and figures?
b. Is the textual discussion clear and concise?
c. Is the analysis of data relationships logical and perceptive?
d. Is the statistical analysis accurately interpreted?
6. Summary and Conclusions
a. Is the problem restated?
b. Are the procedures and findings concisely presented?
c. Is the analysis objective?
d. Are the findings and conclusions justified by the data presented and analyzed?
STEPS IN ANALYZING A RESEARCH ARTICLE
Provided by Dr. Blevins
INTRODUCTION
· Does it properly introduce the subject?
· Does it clearly state the purpose of what is to follow?
· Does it briefly state why this report is different from previous publications?
METHODS AND MATERIALS
· Is the test population clearly stated? Is it appropriate for the experiment? Should it be larger? more
· Is the control population clearly stated? Are all variables controlled? Should it be larger? more
· Are methods clearly described or referenced so the experiment could be repeated?
· Are materials clearly described and when appropriate, manufacturers footnoted?
· Are all statements and descriptions concerning design of test and control populations and materials and methods included in this section?
RESULTS
· Are results for all parts of the experimental design provided?
· Are they clearly presented with supporting statistical analyses and/or charts and graphs when appropriate?
· Are results straightforwardly presented without a discussion of why they occurred?
· Are all statistical analyses appropriate for the situation and accurately performed?
DISCUSSION
· Are all results discussed?
· Are all conclusions based on sufficient data?
· Are appropriate previous studies integrated into the discussion section?
ABSTRACT
· Does the first sentence contain a clear statement of the purpose of the article (without starting “The purpose of this article is to…”)?
· Is the test population briefly described?
· Does it conclude with a statement of the experiment’s conclusions?