As we communicate science to the public, it is also important to explain how science works and how to distinguish science from pseudoscience. From years of being in the field, scientists generally have their ‘bullshit’ detectors finely calibrated, which leads to a kind of illusion of transparency: scientists often underestimate how difficult it is for people outside their field to distinguish established science from pseudoscience. The problem is not just that pseudoscience is ubiquitous on the internet. The most persistent and damaging pseudoscience (homoeopathy, climate change denial, anti-vaccination, etc.) is justified using studies published in peer-reviewed journals by PhDs. Peddlers of pseudoscience use these studies to support their claims while ignoring all the other studies that refute them. The vast majority of the public will have no knowledge of the follow-up literature (or of the details of experimental design and the intricacies of statistical significance). Good science reporting is careful not to make wide-reaching claims on the basis of a single study, and mentions the statistical significance of results, but such reporting is the exception, not the rule. As a result, articles on pseudoscience websites can be effectively indistinguishable from much run-of-the-mill science news reporting.

The first lesson we must teach, therefore, is that science is not perfect; in fact, flawed, incorrect science is published all the time. The current incentive structure in academic science is perverse in several ways, as laid bare in a 2013 article in The Economist. Positive results are strongly favored over negative ones. Novel results are favored over independent verification of previously published ones. Perhaps most importantly, there is little punishment for publishing flawed results, but often explicit penalties for not publishing (inability to get grants, loss of promotion). Thus researchers would rather publish ‘whatever they’ve got’, even if it’s flawed, than not publish at all. Finally, a panoply of lower-tier peer-reviewed journals allows virtually anything to be published.
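To see how these incentives distort the published record, consider a toy simulation; every number in it is an illustrative assumption, not an estimate from any real field. If only a small fraction of tested hypotheses are true and only positive results get published, a large share of published ‘discoveries’ will be false positives:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model; every parameter here is an illustrative assumption.
    n_hypotheses = 1000   # hypotheses tested across a field
    frac_true = 0.10      # fraction of hypotheses that are actually true
    alpha = 0.05          # false-positive rate of each study
    power = 0.50          # chance a study detects a real effect

    is_true = rng.random(n_hypotheses) < frac_true
    detect_prob = np.where(is_true, power, alpha)
    positive = rng.random(n_hypotheses) < detect_prob

    # Publication bias: only positive results appear in journals.
    published_false = np.mean(~is_true[positive])
    print(f"Share of published findings that are false: {published_false:.0%}")

With these assumptions, nearly half of the published positive results are wrong, before any questionable research practices even enter the picture.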

Recently, there have been several ‘meta’ science studies – that is, scientific studies about science. To quote from The Economist article:

“A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in “Nature”, a leading scientific journal, they were able to reproduce the original results in just six.”

Also:

“John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication.”

Based on this, it is clear that a single publication in a peer-reviewed journal lends little support to the claim that a phenomenon is real. Independent verification is key, as it always has been. The need for independent verification is often emphasized in the reporting of medical studies, but two recent examples from physics starkly illustrate this need as well: the ‘faster-than-light neutrinos’ claim and the BICEP2 evidence for cosmic inflation. Both results were published in reputable journals with much media fanfare, and both turned out to be completely wrong within a few months due to rather mundane mistakes. In the case of the faster-than-light neutrinos, the problem was a loose fiber-optic cable; in the case of BICEP2, it was improper accounting for the polarized emission of galactic dust.
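The arithmetic behind the value of independent verification is simple. Assuming, as an idealization, fully independent studies that each have a 5% false-positive rate, the chance that several of them all report a positive result for a non-existent effect shrinks geometrically:

    # Back-of-the-envelope sketch; assumes fully independent studies and a
    # uniform 5% false-positive rate, both idealizations.
    alpha = 0.05
    for k in range(1, 4):
        print(f"{k} independent false positive(s) in a row: p = {alpha**k:g}")

The caveat is the independence assumption: real studies can share systematic errors (the same miscalibrated instrument, the same flawed protocol), which is why genuine independence matters as much as sheer repetition.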

Credentials are often pointed to as lending support to a claim, based on the idea that a credentialed person is statistically less prone to making errors. Credentials are important, but they mean much less than many people think. A PhD signifies that someone has devoted a significant amount of time and energy to studying a very specific area. Thus, the fact that someone has a PhD isn’t very relevant when the holder is making claims outside that area. Even a Nobel Prize can’t immunize someone against believing in, publishing, and preaching nonsense. [Currently, there are at least two Nobel Laureates (Brian Josephson & Luc Montagnier) who are publishing and endorsing deeply flawed pseudoscience, and one Nobelist who is in denial about global warming. See my post “Crackpot Nobelists”.]

In light of all this, it is useful to have a rigorous set of criteria for deciding when we can safely say a claimed phenomenon is real and that other possible explanations can be ruled out. Steven Novella presents one such set of criteria:

1- Methodologically rigorous, properly blinded, and sufficiently powered studies that adequately define and control for all relevant variables (confirmed by surviving peer-review and post-publication analysis).
2- Positive results that are statistically significant.
3- A reasonable signal to noise ratio (clinically significant for medical studies, or generally well within our ability to confidently detect).
4- Independently reproducible. No matter who repeats the experiment, the effect is reliably detected.

These four criteria provide a baseline for scientific acceptance of a phenomenon. In some cases, additional considerations of plausibility may still cast doubt on phenomena that have passed all four tests. Note that he mentions the need for a claim to have survived ‘post-publication analysis’; this type of work is often overlooked by news reporters, who tend to focus on new results rather than on the follow-up research that independently verifies claims or, often, fails to reproduce them.
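To make the ‘sufficiently powered’ requirement in criterion 1 concrete, here is a minimal simulation of a hypothetical two-group comparison (the effect size and sample sizes are made-up numbers chosen only for illustration), estimating how often studies of various sizes detect a modest real effect:

    import numpy as np

    rng = np.random.default_rng(1)

    def estimated_power(effect=0.3, n=30, trials=10_000):
        """Estimate power of a one-sided two-sample z-test at alpha = 0.05.
        effect is the true difference in means, in units of the noise std dev."""
        a = rng.normal(0.0, 1.0, size=(trials, n))
        b = rng.normal(effect, 1.0, size=(trials, n))
        z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
        return np.mean(z > 1.645)  # 1.645 = one-sided 5% critical value

    for n in (20, 50, 200):
        print(f"n = {n:3d} per group: power ≈ {estimated_power(n=n):.2f}")

The small studies detect this effect only a quarter to half of the time. Underpowered studies usually miss real effects, which means the positive results they do report are disproportionately likely to be flukes – one reason criteria 1 and 4 reinforce each other.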

This four-pronged ‘evidence threshold’ constitutes a restatement of the scientific method and provides a useful heuristic for distinguishing scientific claims from pseudoscientific ones.