Open Letter On Artificial Intelligence

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to avoid certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something that is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.


Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community, and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015. The letter was made public on January 12.


Purpose

The letter highlights both the positive and negative effects of artificial intelligence. According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believed the AI field was being "impugned" by a one-sided media focus on the alleged risks. The letter contends that:
The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist. Another signatory, Professor Francesca Rossi, stated, "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".


Concerns raised by the letter

The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").
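The verification/validity distinction can be illustrated with a minimal sketch. The sketch is not drawn from the letter or its accompanying research agenda; the sorting spec and the deliberately flawed implementation are invented for illustration.

```python
# Minimal sketch (hypothetical example): verification vs. validity.
# Verification -- "Did I build the system right?" -- the code meets its written spec.
# Validity     -- "Did I build the right system?" -- the spec captures what we actually want.

def spec_is_sorted(xs, ys):
    """Hypothetical spec: the output must be in non-decreasing order.
    Note the spec never mentions the input xs -- that omission is the flaw."""
    return all(a <= b for a, b in zip(ys, ys[1:]))

def bad_sort(xs):
    """Trivially satisfies the spec above, yet is clearly not what was wanted."""
    return []

inp = [3, 1, 2]
out = bad_sort(inp)

# Verification succeeds: the implementation provably meets the written spec...
assert spec_is_sorted(inp, out)

# ...but validity fails: the spec omitted that the output must be a
# permutation of the input, so the "verified" system is still wrong.
assert sorted(out) != sorted(inp)
print("Verified against an incomplete spec, yet invalid:", out)
```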


Short-term concerns

Some near-term concerns relate to autonomous vehicles, from civilian drones and
self-driving car A self-driving car, also known as an autonomous car (AC), driverless car, robotic car or robo-car, is a car that is capable of operating with reduced or no human input. They are sometimes called robotaxis, though this term refers specifica ...
s. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI.
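The self-driving trade-off above is, at its simplest, a comparison of expected harm. The sketch below uses invented probabilities and an arbitrary severity scale purely to make the arithmetic concrete; the numbers are assumptions, not figures from the letter.

```python
# Minimal sketch with hypothetical numbers: comparing two emergency maneuvers
# by expected harm (probability of an accident times its severity).

maneuvers = {
    # (probability of an accident, severity on an arbitrary harm scale)
    "swerve": (0.01, 100.0),  # small risk of a major accident
    "brake":  (0.60, 1.0),    # large probability of a minor accident
}

for name, (prob, severity) in maneuvers.items():
    expected_harm = prob * severity
    print(f"{name}: expected harm = {prob} * {severity} = {expected_harm}")

# Here "brake" has the lower expected harm (0.6 vs 1.0), but a different
# severity scale or risk attitude could reverse the ranking -- exactly the kind
# of value judgement the letter argues needs research.
```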


Long-term concerns

The document closes by echoing Microsoft research director Eric Horvitz's concerns that:
we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?
Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".
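One way to see why a simple, hand-specified utility function can be inadequate is that a reward-maximizing agent exploits any gap between the stated objective and the intended one. The toy gridworld below is a hypothetical illustration invented for this article; its tiles, reward, and greedy agent are assumptions, not constructs from the letter or its research agenda.

```python
# Minimal sketch (hypothetical): a reward-maximizing agent exploiting a
# misspecified utility function ("reward hacking" on a one-dimensional grid).
# Intended goal: reach the flag at position 4. Written utility: +1 for every
# step spent on a tile marked "shiny" -- a proxy that seemed reasonable.

SHINY = {2}   # a decorative tile the designer happened to mark, at position 2
FLAG = 4      # the tile we actually wanted the agent to reach
STEPS = 10

def utility(position):
    """The simple utility function actually given to the agent."""
    return 1.0 if position in SHINY else 0.0

def greedy_agent(start=0, steps=STEPS):
    """One-step lookahead maximizer of the written utility."""
    pos, total = start, 0.0
    for _ in range(steps):
        candidates = [pos + 1, pos, pos - 1]   # move one tile or stay; ties favor moving right
        pos = max(candidates, key=utility)     # chase the proxy reward
        total += utility(pos)
    return pos, total

final_pos, score = greedy_agent()
print(f"final position: {final_pos}, proxy utility: {score}")
print("reached the flag we actually cared about:", final_pos == FLAG)
# The agent parks on the shiny tile forever and never reaches the flag:
# maximizing the stated utility is not the same as doing what we want.
```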


Signatories

Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig, Professor Stuart J. Russell of the University of California, Berkeley, and other AI experts, robot makers, programmers, and ethicists. The original signatory count was over 150 people, including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.



External links


Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter