This Year’s Nobel Prizes Are a Warning about AI


The awards ceremony for this year's Nobel Prizes took place in December, celebrating both work relating to artificial intelligence and the efforts of the group Nihon Hidankyo to rid the world of nuclear weapons.

It was a striking juxtaposition, one not lost on me, a mathematician studying how deep learning works. In the first half of the 20th century, the Nobel Committees awarded prizes in physics and chemistry for discoveries that uncovered the structure of the atom. That work also enabled the development and subsequent deployment of nuclear weapons. Decades later, the Nobel Committees awarded this year's Peace Prize for work trying to counteract one way nuclear science ended up being used.

There are parallels between the development of nuclear weapons from basic physics research and the risks posed by applications of AI that grew out of fundamental research in computer science. Those risks range from the incoming Trump administration's push for "Manhattan Projects" for AI to a wider spectrum of societal harms, including misinformation, job displacement, and surveillance.


I am concerned that my colleagues and I are insufficiently connected to the effects our work could have. Will the Nobel Committees be awarding a Peace Prize in the next century to the people cleaning up the mess AI scientists leave behind? I am determined that we not repeat the nuclear weapons story.

About 80 years ago hundreds of the world's top scientists joined the Manhattan Project in a race to build an atomic weapon before the Nazis did. Yet after the German bomb effort stopped in 1944, and even after Germany surrendered the next year, the work at Los Alamos continued without pause.

Even when the Nazi threat had ended, only one Manhattan Project scientist—Joseph Rotblat—left the project. Looking back, Rotblat explained: “You get yourself involved in a certain way and forget that you are a human being. It becomes an addiction and you just go on for the sake of producing a gadget, without thinking about the consequences. And then, having done this, you find some justification for having produced it. Not the other way around.”

The U.S. military carried out the first nuclear test, at the Trinity site in New Mexico, in July 1945. Weeks later U.S. leaders authorized the bombings of Hiroshima and Nagasaki on August 6 and 9. The bombs killed hundreds of thousands of Japanese civilians, many of them immediately; others died years and even decades later from the effects of radiation exposure.

Though Rotblat's words date from decades ago, they are an eerily accurate description of the prevailing ethos in AI research today.

I first began to see parallels between nuclear weapons and artificial intelligence while working at the Institute for Advanced Study in Princeton, N.J., where the haunting closing scene of Christopher Nolan's film Oppenheimer is set. Having made some progress in understanding the mathematical innards of artificial neural networks, I was also beginning to have concerns about the eventual social implications of my work. At a colleague's suggestion I went to talk to the institute's director at the time, the physicist Robbert Dijkgraaf.

He suggested I look to J. Robert Oppenheimer’s life story for guidance. I read one biography, then another. I tried to guess what Dijkgraaf had in mind, but I didn’t see anything appealing in Oppenheimer’s path, and by the time I finished the third biography the only thing that was clear to me was that I did not want my own life to mirror his. I did not want to reach the end of my life with a burden like Oppenheimer’s weighing on me.

Oppenheimer is often quoted as saying that when scientists “see something that is technically sweet, [they] go ahead and do it.” Geoffrey Hinton, one of the winners of the 2024 Nobel Prize in Physics, has himself invoked this line. But the attitude it describes is not universal. Lise Meitner, the preeminent woman physicist of the era, was asked to join the Manhattan Project. Despite being Jewish and having narrowly escaped Nazi Germany, she flatly refused, saying, “I will have nothing to do with a bomb!”

Rotblat provides another model for how scientists can exercise their talents without losing sight of their values. After the war he returned to physics, focusing on medical uses of radiation. He also became a leader in the nuclear disarmament movement through the Pugwash Conferences on Science and World Affairs, a group he co-founded in 1957. In 1995 he and the Pugwash Conferences shared the Nobel Peace Prize for this work.

Now, as then, there are thoughtful, grounded individuals who stand out in the development of AI. Taking a stance evocative of Rotblat's, Ed Newton-Rex resigned last year from his position leading the music-generation team at Stability AI over the company's insistence on building generative AI models trained on copyrighted work without paying for that use. This year Suchir Balaji resigned from his position as a researcher at OpenAI over similar concerns.

In an echo of Meitner's refusal to work on military applications of her discoveries, Meredith Whittaker, then at Google, voiced worker concerns at a 2018 internal company town hall about Project Maven, a Department of Defense contract to develop AI for military drone targeting and surveillance. Eventually, workers succeeded in pressuring Google, where 2024 Nobel Prize in Chemistry laureate Demis Hassabis works, to drop the project.

There are many ways in which society influences how scientists work. A direct one is financial: collectively, we choose which research to fund, and individually, we choose which of the products that come out of that research we will pay for.

An indirect but very effective one is prestige. Most scientists care about their legacy. When we look back on the nuclear era—when we choose, for instance, to make a movie about Oppenheimer, of all the scientists of that age—we send a signal to scientists today about what we value. When the Nobel Committees choose which of the people working on AI to reward with prizes, they set a powerful incentive for the AI researchers of today and tomorrow.

It is too late to change the events of the 20th century, but we can hope for better outcomes for AI. We can start by looking past those in machine learning who are focused on the rapid development of capabilities and instead following the lead of people like Newton-Rex and Whittaker, who insist on engaging with the context of their work and who have the capacity not only to evaluate changing circumstances but also to respond to them. Paying attention to what scientists like them are saying offers the best hope for positive scientific development, now and into the future.

As a society, we have the choice of whom to elevate, emulate and hold up as role models for the next generation. As the nuclear era teaches us, now is the time to evaluate carefully which applications of scientific discovery, and who among today's scientists, reflect the values not of the world in which we currently live but of the one we hope to inhabit.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.


