In their book, Merchants of Doubt, Naomi Oreskes and Erik Conway argue that scientists “know bad science when they see it”:
“It’s science that is obviously fraudulent — when data have been invented, fudged, or manipulated. Bad science is where data have been cherry-picked — when some data have been deliberately left out — or it’s impossible for the reader to understand the steps that were taken to produce or analyze the data. It is a set of claims that can’t be tested, claims that are based on samples that are too small, and claims that don’t follow from the evidence provided. And science is bad — or at least weak — when proponents of a position jump to conclusions on insufficient or inconsistent data.”
Few would disagree with Oreskes and Conway’s criteria for “bad science,” but how do we use those criteria to distinguish bad science from good science? Oreskes and Conway have an answer (emphasis in original):
“But while these scientific criteria may be clear in principle, knowing when they apply in practice is a judgment call. For this scientists rely on peer review. Peer review is a topic that is impossible to make sexy, but it’s crucial to understand, because it is what makes science science—and not just a form of opinion.”
Oreskes and Conway characterize “Potemkin village science” as the effort of “merchants of doubt” to make their bad-science arguments look science-like, using data and graphs to fool the uninformed and to contest the good science in the peer-reviewed literature. In this framing, the good guys publish in peer-reviewed publications, while the bad guys do not.
The idealization of peer review as the arbiter of good science is problematic for many reasons, but one is that it downplays the possibility that bad science can appear in the peer-reviewed literature and that good science can appear outside of those outlets.