The Effect of Public Perception on Science and Its Effectiveness

We hear a lot every day about how science and technology are transforming the world. Fast and cheap genome sequencing promises to revolutionize personalized medicine. The proliferation of mobile phones will finally level the playing field for the next billion users (or NBU, a common buzzword nowadays). Machine learning will eliminate all menial jobs and eventually usher in a post-scarcity society. While headlines like these can engage the general public and convey important scientific discoveries, they've often been simplified for general consumption. They also tend to follow a monotonic narrative about science: that science and technology are constantly improving and serve as agents of goodwill. This raises an important question that I've been pondering recently: how does public perception of science influence the work of scientists and its effectiveness?

The lone genius is a staple of modern science media. Against all odds, working fiercely through the night, the genius suddenly discovers something that stuns even the most esteemed of their peers. The Herculean might of one scientist has advanced the field further in a day than a thousand scientists could have in the next 30 years. Sound familiar? While great for storytelling and for generating public interest[1], this trope largely ignores the culture of collaboration and creative exchange of ideas that prevails in most scientific disciplines. It creates the expectation that one has to be "that genius" in order to make great contributions, which can be especially discouraging for many people (especially minorities) contemplating careers in science. The myth also shapes scientific institutions themselves: including a few research superstars on a grant, which is much easier to do at an established university, can drastically increase its chance of acceptance. Researchers and science popularizers often do a lot of good, but it becomes dangerous when a cult of personality develops around them. A further consequence is that one dissenter's word can be used to "discredit" the consensus of the broader community[2,3].

Science provides a way of systematically refining our understanding of the world. The danger, however, lies in believing that the objectivity inherent to the methodology extends to the forces and motivations behind it (e.g. proposed solutions, sources of research funding). Scientists generally work within an existing framework of assumptions and incrementally test hypotheses that they believe will yield interesting results. Believing that science and technology are somehow separate from human society can be disastrous. Social media companies can absolve themselves of wrongdoing by claiming they're "just building a platform for ideas"[4,5], one that (through its algorithms[6]) is supposedly morally neutral. The values of a liberal arts education can be argued away by contrasting it with STEM, which "objectively explains how the real world works". We can say that women are unsuited for engineering because statistics supposedly doesn't discriminate. And we risk creating a cult of objectivity that claims to have the answers to the world's problems but comes across as just another religion (and which dissenters ironically dismiss by claiming that science is objective and apolitical[7]).

There are many ways in which this broader portrayal of science can shape research work itself, and vice versa. For instance, tapping into existing narratives about AI (whether somewhat truthful or deeply misleading) can be very effective at generating attention and funding for one's work. This is a situation where the benefits of exploiting current perceptions (e.g. "we are on the cusp of attaining human-level AI") largely outweigh those of being upfront and realistic about the limitations of the work. As a consequence, the public has a confused and distorted understanding of the actual state of the art, leading to singularity articles featuring scenes from Terminator and interesting public disagreements[8] that miss the subtleties surrounding AI risk[9]. Likewise, the pressure for constant progress seems to have led deep learning researchers working on language modeling to be somewhat lax in how they compared their models to common baselines[10]. A recent study[11] showed that standard LSTMs, when properly tuned, outperformed many newer models on several widely-used datasets. The lack of incentives for publishing negative results leads to a lot of wasted effort (multiple groups trying the same experiment that doesn't work) and to publications being retracted when their results turn out to be non-reproducible (in effect, accepted false positives). It's encouraging to see that some outlets[12] are investigating how to publish rigorous negative results; perhaps this could be one way of making them still count as "science".
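To make the baseline-tuning point concrete, here is a minimal sketch (in PyTorch, which is my own choice; the study's actual setup differs) of what a "properly tuned" LSTM baseline involves: a completely standard architecture whose hyperparameters are searched per dataset, rather than copied wholesale from prior work. All names, dimensions, and values below are illustrative.

```python
# Illustrative only: a standard LSTM language model plus a per-dataset
# hyperparameter search -- the ingredient that made "plain" LSTMs
# competitive with newer architectures. All values are made up.
import random
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_layers, dropout):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers,
                            dropout=dropout, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        hidden, _ = self.lstm(self.embed(tokens))
        return self.proj(hidden)                  # logits over the vocabulary

# "Properly tuned" means searching a space like this separately for every
# dataset, instead of reusing one default configuration everywhere.
SEARCH_SPACE = {
    "emb_dim":    [200, 400, 650],
    "hidden_dim": [650, 1000, 1500],
    "num_layers": [1, 2, 3],
    "dropout":    [0.2, 0.4, 0.5, 0.65],
}

def sample_model(vocab_size):
    config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    return config, LSTMLanguageModel(vocab_size, **config)

# A fair comparison would train and evaluate many such samples (and grant
# the proposed model the same tuning budget) before declaring a winner.
```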

There are a few things to work on. There should be wider recognition that science is a tool whose power lies in the hands of the wielder and whose consequences are tied to society at large. Researchers need to be wary of unintentionally incorporating biases into their work and of deluding themselves into thinking they're making progress. We have to carefully walk the line between doubt and confidence in science[13] and make sure the public understands the difference between the two. Lastly, moving towards more realistic depictions of science in media could help establish a more honest and fruitful dialogue between the scientific community and the general public.