J. Robert Oppenheimer’s Defense of Humanity
After helping invent the atomic bomb, the physicist spent decades thinking about how to preserve civilization from technological dangers, offering crucial lessons for the age of AI
From the moment the atomic bomb was dropped on Hiroshima in August 1945 until his death in 1967, J. Robert Oppenheimer was perhaps the most recognizable physicist on the planet. During World War II, Oppenheimer directed Los Alamos Laboratory, “Site Y” of the Manhattan Project, the successful American effort to build an atomic bomb. He went on to serve for almost 20 years as director of the Institute for Advanced Study in Princeton, N.J., home to some of the world’s leading scientists, including Albert Einstein.
In the popular imagination, Einstein came to represent unalloyed optimism about the capacity of human genius to uncover the secrets of the cosmos. Oppenheimer played a grimmer role, standing for the dangers of advancing science. After the successful test of the “Gadget,” as the first atomic bomb was called, he is said to have quoted the Bhagavad Gita: “Now I am become death, the destroyer of worlds.” Much of his subsequent career would be spent advising humanity how not to be annihilated by the powers of the atom he had conquered. The advice was not always well received: The Atomic Energy Commission stripped him of his security clearance in 1954, in part because of his advocacy for arms control. (The Department of Energy posthumously reversed that decision last year.)
In July, director Christopher Nolan’s biopic “Oppenheimer” will bring his story to theaters at a timely moment, when the world is once again worried that a new technology threatens the future of humanity. Advances in machine learning and artificial intelligence, including the explosive success of ChatGPT, have drawn attention to questions that were once the province of science fiction. Might artificial intelligence programs go rogue and enslave or eliminate humanity? Less apocalyptically, will AI take over our jobs, our decision making, our economies, our governments? How can we ensure that the new technologies work for rather than against the values and interests of humanity?
To answer these questions, the most important part of Oppenheimer’s life isn’t his work on the atomic bomb but his less dramatic tenure running the Institute for Advanced Study. When Oppenheimer arrived as director in 1947, Life magazine published “The Thinkers,” a story about the Institute that called it “the most important building on earth.” That was hyperbole, but it is true that Oppenheimer joined a community of giants, many of whom shared the sense that humanity was at a technological turning point that might bring about its destruction.
Einstein, a professor at the Institute from 1933 until his death in 1955, dedicated much of his final decade to the political and ethical questions raised by the new physics of fission and fusion. Another faculty member who merits a biopic is the Hungarian immigrant John von Neumann, who worked on both the atomic bomb and its more powerful successor, the hydrogen bomb. After the war, he built the world’s first stored-program computer—work that started in the basement under Oppenheimer’s office.
Von Neumann, too, was deeply concerned about humanity’s inability to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines, he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
...