
What does it mean when scientists disagree?

In 2015, Senator James Inhofe brought a snowball onto the floor of the US Senate to demonstrate his skepticism of the widely held belief that the Earth’s climate is warming. While the stunt was mocked as ridiculous by the scientifically literate community and generated a number of lasting cartoons and memes calling out the Senator’s obvious lack of scientific understanding (some of my favourites are included below), Inhofe’s snowball may actually have been an effective aid in the politician’s anti-science narrative.

Science is increasingly seen as just a set of stories and anecdotes, comparable to any other source of information, that we use to inform our worldviews. In a world where everybody’s opinion is viewed as valid and equal, scientific consensus is often painted as the result of arrogance, of being bought out by big business, or of a conspiracy by the political elite. In the meantime, minority conspiracy theories lacking scientific evidence are brought to the forefront. As a result, we’re seeing a growing mistrust of scientific expertise in the developed world, where nearly eradicated deadly viruses are making a comeback, where food scarcity that could be remedied by genetic technology is getting worse, and yes, where a growing number of educated middle-class citizens actually believe that the Earth is flat.

So, if most scientists agree that vaccines are safe, that the Earth is round, and that climate change is real, why the skepticism?

Well, basically, because we like to hear what we already believe. In a recent study, Dieckmann and Johnson show how demographics and previously held beliefs affect a person’s understanding of scientific disagreement. There are lots of reasons why two scientists may disagree on a topic, including differing methods, conflicting interests, and even sometimes intentional misrepresentation (where a supposed expert is not actually an expert). It’s also common for scientists to disagree about a small part of a topic, like how fast the ocean is warming, but agree on the larger conclusions, like the fact that the planet is warming.

Non-scientists, however, view these disagreements between scientists very differently. According to the research, Americans felt that scientists disagreed with one another for three major reasons:

  1. the complexity or uncertainty of the topic (i.e., there is a lot to know and a lot of ways to look at the issue, so it’s just taking scientists some time to come to a consensus)
  2. the individual values or interests (conscious or not) of the scientists (i.e., unconscious biases, corruption, a conflict of interest in where their funding comes from, etc.)
  3. incompetence on the part of one of the scientists (i.e., they are using flawed methods or are in fact not real scientists)

Dieckmann and Johnson found that people who knew more about science and the scientific method assumed scientific disagreement to be the result of topic complexity or personal bias. On the other hand, those who had less understanding of the scientific process were more likely to reason that the scientists simply did a bad job. They also found that study participants who identified as politically conservative were more likely to endorse all three of these reasons for disagreement, indicating a greater mistrust of science in general.

While concerning, none of this should be surprising to today’s scientists. As a discipline, science is supposed to be dispassionate, unbiased, and shared in a standardized way that maintains its objective integrity. So scientists don’t often speak directly to the non-scientific public and, when they do, they tend to fill their narrative with cold facts and statements about scientific uncertainty that can be easy to misunderstand. Meanwhile, those with an interest or stake in promoting misinformation are successfully targeting the values and underlying beliefs we already hold, using communication techniques that evoke an emotional response, and they’re telling more compelling and relatable stories as a result.

The spread of non-scientifically supported claims skews public perspective, creating confusion and misunderstanding about these topics and what scientific consensus really is. Inhofe’s snowball essentially says, “there’s another side to this climate change story.”

But what “other side”? There are a lot of climate scientists out there collecting a lot of data that suggest our climate is warming as a result of human activities, with very few scientists, and even less data, suggesting otherwise. Yet, when climate skeptics talk about the climate “debate”, they usually cite non-scientific sources as if those held equal weight to the scientific evidence. As Dieckmann and Johnson point out, they can do this because the general public isn’t trained to pick out the poor statistical methods or the conflicts of interest that run rampant throughout the “other side” of climate science and, more importantly, because the narrative of the “other side” already fits within their worldviews and existing beliefs.

The problem with the idea that science is just another set of beliefs, equal to any other, is, well… that it’s just not true. Science isn’t just a collection of facts – it’s a process. It’s a way of looking at the world, asking questions, making observations, and interpreting those observations to come to some conclusion, and it is inherently logical. Furthermore, a scientific fact isn’t really the same thing as the everyday anecdotal observations that inform most people’s beliefs. Scientists don’t really use the word “fact”, because we’re so skeptical by nature, opting for “theory” instead. In the scientific sense, a “theory” is a rule that has become commonly accepted as true because hundreds, if not thousands or millions, of repeated observations continue to support the probability of its truth. Should the observations change (and only if A LOT of them change), then the theory might be updated to account for the new information. In this way, scientific theories are accepted as “fact” (at least until they’re proven otherwise).

Just because Senator Inhofe had a snowball, or even if the temperature in Washington that day had been unusually cold (it wasn’t), does not mean that the literally millions of other climate observations that suggest a warming trend are now thrown out the window. That doesn’t make sense. That’s not how the scientific method – or logical thinking – works.

Interestingly, not all countries show the same distrust of science as was seen in America. Dieckmann and Johnson found that in Germany, the public cited the same three reasons behind scientific disagreement (complexity, individual scientist values/interests, and incompetence) but interpreted the reasons differently. They separated differences in methodology out of the incompetence category. The German public seems to understand that multiple study methods can be equally valid and that good science doesn’t always have to be conducted in the same way. They distinguish this kind of process difference from the unconscious and intentional biases scientists might have.

So, where does this leave us? Dieckmann and Johnson show that this lack of understanding of the scientific process obligates the general public to simply accept the word of the scientist they trust most (or perhaps distrust the least). Would there be a benefit to teaching American youth more about the scientific method? Would a more thorough understanding of the scientific process result in a tougher stance on climate change, as seems to be the case in Germany? And how can scientists better communicate with non-scientists?

These are all big questions slowly being illuminated by the burgeoning field of science communication. In a world so divided on seemingly every big science issue, understanding how and why we form our values and belief systems will be essential to spreading messages that will meaningfully inform the public.
