December 18, 2024
The Public Distrusts Scientists’ Morals, Not Their Science
Reaction to a recent Pew survey on the public’s trust in science shows that the scientific community is not ready to address the real problem
Our overlapping Trump and COVID eras have seen a fairly sharp downturn in public trust in scientists. Around one in 10 Americans report less support for science now than they did before COVID.
That was a November survey finding by the Pew Research Center. In addition to this decline in support from pre-pandemic times, the survey found that the share of people who trust scientists either “a great deal” or “a fair amount” has remained more or less the same since 2021. In response, the president of the U.S. National Academy of Sciences said that the survey “gives us an opportunity to reexamine what we need to do to restore trust in science.”
But the diagnoses offered by scientific leaders responding to the survey are variations on the same old one: that the public does not understand science. That is a comfortable diagnosis for scientists, and therefore unlikely to help with trust. The scientific community needs instead to consider that the lack of trust does not stem from the public’s view of scientists as fact-finders, but rather from the public not trusting scientists’ moral values.
Reactions to the report suggest that the scientific community is trying hard not to see this. A recent Washington Post news report stated that the public lost trust because they did not understand scientific claims about facts—about cures for COVID, about the utility of masks, about the origin of the virus, about the effect of social distancing, about whether vaccines would prevent infection. In a similar New York Times article, the chief executive of the American Association for the Advancement of Science said that scientists have learned “hard lessons” from COVID and are “now better equipped to communicate how data changes and evolves.” Yet another report proclaims that scientists need to be more humble about their ability to generate accurate scientific claims.
All of these responses reflect the long-held general belief among scientists that a lack of public support is a consequence of the public not understanding science well enough. This is known as the “knowledge deficit” model of science communication, and it has been widely discredited as an explanation of public support for science.
It has long been plain to see that claims about scientific facts are not the problem. Consider the conflict in the U.S. between religion and science epitomized by 1925’s “Scopes Monkey Trial” and the 2005 “intelligent design” courtroom case. Scientists largely assume that such conflict results from religious people using sacred texts to make claims about the natural world, whereas science instead uses reason and observation. While that was plausibly true before the 20th century, today it holds only for a minority of religious people in the U.S., such as those who follow conservative Protestant traditions, and only in disagreement about very specific areas, such as human origins. This was the situation in the Scopes trial.
In reality, sociological studies show that contemporary conflict between science and religion is actually over morals, not facts. For example, when it comes to debates about research on human embryos, no religious opponent says that scientists do not understand how embryos develop. Rather, they give a different moral status to embryos than do scientists.
Moreover, even stated opposition to scientific claims is often motivated by concern about morality. For example, fundamentalist William Jennings Bryan, the defender of the creationist position in the Scopes Trial, opposed scientific claims about human evolution because he wanted to “defend the Bible.” But, he also opposed evolution because he thought that Darwinian theory had corrupted the morals of German youth and was partly responsible for the outbreak of World War I. Moral conflict between the public and science did not begin with the first Trump administration.
We can also look at the parts of the Pew study that were left unexamined in news stories. In the survey, 36 percent of the public agree that scientists do not pay attention to the moral values of society. When given the choice between the idea that “scientists should focus on establishing sound scientific facts and stay out of policy debates” versus “take an active role in public policy debates about scientific issues,” the country is essentially split 50-50. That is, half of the public does not want scientists to move beyond establishing facts because, I would argue, they perceive scientists will insert their moral values in policy debate, and the public does not think they share those values.
But why would the public think scientists do not share their moral values? The idea that scientists are morally deficient goes back centuries, and is reinforced to this day by fictional accounts of scientists in which the “mad scientist” remains a trope. Dr. Frankenstein is probably the best-known fictional scientist. The villagers were not upset with him because he had his facts wrong about how to create a monster, but because he ignored their moral values in creating the monster.
So I think scientists took the wrong lesson from COVID. A decline in trust was not primarily a result of the public misunderstanding science, but because scientists became associated with a set of politicized moral choices about prioritizing public health over commerce, education and individual freedom. Perhaps the association with these choices was inevitable or necessary, but we should not think that a loss of trust was generated by the public not understanding how vaccines work.
One solution for building trust is for scientists to be trained to talk about their moral values, because silence makes it easier to project bad values onto scientists. Scientists’ moral values will not perfectly align with the public’s, but I think the shared values will outnumber the differences. To take the obvious example, scientists working on COVID were motivated by the moral value of reducing human suffering, and this is about as close to a universal value in the U.S. as we can get.
I understand why the scientific community is reluctant to talk about its moral values. One of the norms of science is to be “value-free,” and part of what creates legitimate results is examining the data dispassionately. Scientists generally have no training in academic debates about morals, values and ethics. But pretending that scientists are just about the facts—and above any moral questions—is not working.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.