P(doom)

P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence. Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton and Yoshua Bengio began to warn of the risks of AI. In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years. The mean value from the responses was 14.4%, with a median value of 5%.
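
The gap between the mean and the median reflects a skewed distribution of estimates: a small number of respondents giving very high probabilities pulls the mean upward, while the typical (median) answer stays low. A minimal Python sketch using hypothetical responses (not the actual survey data) illustrates the effect:

    # Illustrative only: hypothetical P(doom) responses in percent, not the 2023 survey data.
    from statistics import mean, median

    responses = [1, 2, 2, 5, 5, 5, 10, 20, 50, 90]  # a few extreme estimates skew the distribution

    print(f"mean:   {mean(responses):.1f}%")    # 19.0% - pulled up by the handful of high estimates
    print(f"median: {median(responses):.1f}%")  # 5.0%  - the typical respondent's estimate stays low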


Notable P(doom) values


Criticism

There has been some debate about the usefulness of P(doom) as a term, in part due to the lack of clarity about whether or not a given prediction is conditional on the existence of artificial general intelligence, the time frame, and the precise meaning of "doom".


In popular culture

* In 2024, Australian rock band King Gizzard & the Lizard Wizard launched their new label, named p(doom) Records.


See also

* Existential risk from artificial general intelligence
* Statement on AI risk of extinction
* AI alignment
* AI takeover
* AI safety


Notes


References
