Regarding our human circumstances, are we paying attention to their most relevant insights?
When we think of Albert Einstein or Stephen Hawking, what comes to mind? Probably Einstein’s glowing white hair and Hawking’s famous computer-generated voice, and the fact that both are world-renowned physicists, geniuses in their field. More particularly, in Einstein’s case we may think of his general theory of relativity, and in Hawking’s of his work on the radiation emitted by black holes.
But with regard to our human circumstances and destiny, are we paying attention to the most relevant aspects of their insights? How many of us know that Einstein held deeply skeptical views of our current technology culture, or that Hawking and fellow scientists issued a staunch warning against the hasty development of artificial intelligence?
Our current culture of technology generates and depends on fast cycles of innovation and application. The economic and political interests that back these scientific and engineering leaps both initiate and require rapid technological progress, because it is a key means of furthering wealth and power. As consumers, we have grown used to a quick succession of technologically enabled benefits and comforts. But what about the negative consequences, both actual and potential, of this narrow focus on technology’s advantages, on technological efficiency and functionality?
Below is a selection of Albert Einstein quotes that show his critical view of the unmindful use of technology and of rash technological progress, followed by Hawking’s piece warning against the hurried implementation of our nascent knowledge of artificial intelligence.
Albert Einstein quotes that show his critical view on technological progress
- "Our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal."
Letter to Heinrich Zangger (1917), as quoted in A Sense of the Mysterious: Science and the Human Spirit by Alan Lightman (2005) and in Albert Einstein: A Biography by Albrecht Fölsing (1997). Sometimes paraphrased as "Technological progress is like an axe in the hands of a pathological criminal."
- "Perfection of means and confusion of goals seem - in my opinion - to characterize our age."
"The Common Language of Science", a broadcast for Science, Conference London, September 28th, 1941. Published in Advancement of Science, London, Vol 2, No 5.
- "Why does this magnificent applied science, which saves work and makes life easier, bring us so little happiness? The simple answer runs: Because we have not yet learned to make sensible use of it. In war it serves that we may poison and mutilate each other. In peace it has made our lives hurried and uncertain. Instead of freeing us in great measure from spiritually exhausting labor, it has made men into slaves of machinery, who for the most part complete their monotonous long day's work with disgust and must continually tremble for their poor rations. ... It is not enough that you should understand about applied science in order that your work may increase man's blessings. Concern for the man himself and his fate must always form the chief interest of all technological endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations."
Speech to students at the California Institute of Technology, in "Einstein Sees Lack in Applying Science", The New York Times (February 16th, 1931)
- "Our time is distinguished by wonderful achievements in the fields of scientific understanding and the technical application of those insights. Who would not be cheered by this? But let us not forget that human knowledge and skills alone cannot lead humanity to a happy and dignified life. Humanity has every reason to place the proclaimers of high moral standards and values above the discoverers of objective truth. What humanity owes to personalities like Buddha, Moses, and Jesus ranks for me higher than all the achievements of the enquiring and constructive mind. What those blessed men have given us we must guard and try to keep alive with all our strength if humanity is not to lose its dignity, the security of its existence, and its joy of living."
Written statement, September 1937, p. 70
- "I believe, indeed, that overemphasis on the purely intellectual attitude, often directed solely to the practical and factual, in our education, has led directly to the impairment of ethical values. I am not thinking so much of the dangers with which technical progress has directly confronted man, as of the stifling of mutual human considerations by a "matter-of-fact" habit of thought which has come to lie like a killing frost upon human relations. ... The frightful dilemma of the political world situation has much to do with this sin of omission on the part of our civilization. Without "ethical culture," there is no salvation for humanity."
"The Need for Ethical Culture" celebrating the seventy-fifth anniversary of the Ethical Culture Society, January 5th, 1951
- "May they not forgot to keep pure the great heritage that puts them ahead of the West: the artistic configuration of life, the simplicity and modesty of personal needs, and the purity and the serenity of the Japanese soul."
Comment made after a six-week trip to Japan in November-December 1922, published in Kaizo 5, no. 1 (January 1923)
Stephen Hawking: "Success in creating AI would be the biggest event in human history. Unfortunately it might also be the last ..."
By Stephen Hawking, Stuart Russell, Max Tegmark, Frank Wilczek, Thursday, May 1st, 2014
(From The Independent, UK)
With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.
Artificial intelligence (AI) is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasized by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both wealth and great dislocation.
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here - we'll leave the lights on"? Probably not - but this is more or less what is happening with AI. Although we are facing potentially the best or the worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
Stephen Hawking is the director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity. Stuart Russell is a computer-science professor at the University of California, Berkeley and a co-author of 'Artificial Intelligence: A Modern Approach'. Max Tegmark is a physics professor at the Massachusetts Institute of Technology (MIT) and the author of 'Our Mathematical Universe'. Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.