A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
University of Oxford philosopher Nick Bostrom defines ''superintelligence'' as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks.
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may allow them to — either as a single being or as a new species — become much more powerful than humans, and displace them.
Several scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement because of the potential social impact of such technologies.
Feasibility of artificial superintelligence

The feasibility of artificial superintelligence (ASI) has been a topic of increasing discussion in recent years, particularly with the rapid advancements in artificial intelligence (AI) technologies.
Progress in AI and claims of AGI
Recent developments in AI, particularly in large language models (LLMs) based on the transformer architecture, have led to significant improvements in various tasks. Models like GPT-3, GPT-4, Claude 3.5, and others have demonstrated capabilities that some researchers argue approach or even exhibit aspects of artificial general intelligence (AGI).
However, the claim that current LLMs constitute AGI is controversial. Critics argue that these models, while impressive, still lack true understanding and are primarily sophisticated pattern-matching systems.
Pathways to superintelligence
Philosopher David Chalmers argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to human intelligence, be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks.
More recent research has explored various potential pathways to superintelligence:
# Scaling current AI systems – Some researchers argue that continued scaling of existing AI architectures, particularly transformer-based models, could lead to AGI and potentially ASI.
# Novel architectures – Others suggest that new AI architectures, potentially inspired by neuroscience, may be necessary to achieve AGI and ASI.
# Hybrid systems – Combining different AI approaches, including symbolic AI and neural networks, could potentially lead to more robust and capable systems.
Computational advantages
Artificial systems have several potential advantages over biological intelligence:
# Speed – Computer components operate much faster than biological neurons. Modern microprocessors (~2 GHz) are seven orders of magnitude faster than neurons (~200 Hz), as the quick check after this list illustrates.
# Scalability – AI systems can potentially be scaled up in size and computational capacity more easily than biological brains.
# Modularity – Different components of AI systems can be improved or replaced independently.
# Memory – AI systems can have perfect recall and vast knowledge bases, and they are far less constrained than humans in working memory.
# Multitasking – AI can perform multiple tasks simultaneously in ways not possible for biological entities.
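A quick check of the arithmetic in the speed item above, using the approximate figures quoted there (both rates are rough, illustrative values):

```python
# Compare an illustrative ~2 GHz processor clock with an illustrative
# ~200 Hz peak neuron firing rate (both figures are rough assumptions).
import math

cpu_hz = 2e9      # ~2 GHz microprocessor clock
neuron_hz = 200   # ~200 Hz neuron firing rate

ratio = cpu_hz / neuron_hz
print(f"speed ratio: {ratio:.0e}")                      # -> 1e+07
print(f"orders of magnitude: {math.log10(ratio):.0f}")  # -> 7
```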
Potential path through transformer models
Recent advancements in transformer-based models have led some researchers to speculate that continued scaling and improvement of these architectures, or of similar ones, could lead directly to ASI.
Some experts even argue that current large language models like GPT-4 may already exhibit early signs of AGI or ASI capabilities. This perspective suggests that the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI.
However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains.
The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies remains ongoing, with significant implications for AI development strategies and safety considerations.
Challenges and uncertainties
Despite these potential advantages, there are significant challenges and uncertainties in achieving ASI:
# Ethical and safety concerns – The development of ASI raises numerous ethical questions and potential risks that need to be addressed.
# Computational requirements – The computational resources required for ASI might be far beyond current capabilities.
# Fundamental limitations – There may be fundamental limitations to intelligence that apply to both artificial and biological systems.
# Unpredictability – The path to ASI and its consequences are highly uncertain and difficult to predict.
As research in AI continues to advance rapidly, the question of the feasibility of ASI remains a topic of intense debate and study in the scientific community.
Feasibility of biological superintelligence
Carl Sagan suggested that the advent of Caesarean sections and ''in vitro'' fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is likely to continue. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.
Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
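The selection gains quoted above are an exercise in order statistics: the expected maximum of n draws from a normal distribution. A minimal Monte Carlo sketch, assuming (this figure is not in the text above) that the heritable IQ potential of sibling embryos varies around the parental mean with a standard deviation of roughly 7.5 points, the value consistent with the cited numbers:

```python
# Monte Carlo sketch of the embryo-selection figures quoted above.
# Assumption: heritable IQ potential among sibling embryos is roughly
# normal around the parental mean with SD ~7.5 points, the value that
# reproduces the cited gains.
import numpy as np

rng = np.random.default_rng(0)
sigma = 7.5  # assumed SD of heritable IQ variation among embryos

for n in (2, 10, 100, 1000):
    draws = rng.normal(0.0, sigma, size=(10_000, n))
    gain = draws.max(axis=1).mean()  # expected IQ of the best of n embryos
    print(f"best of {n:>4} embryos: ~{gain:4.1f} IQ points above the mean")

# Prints roughly 4.2, 11.5, 18.8, and 24.3 points; the first and last
# match the 1-in-2 and 1-in-1000 figures cited from Bostrom.
```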
Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systemic superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).
A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.
Forecasts
Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.
In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
In a 2022 survey, the median year by which respondents expected "high-level machine intelligence" with 50% confidence was 2061. The survey defined the achievement of high-level machine intelligence as the point when unaided machines can accomplish every task better and more cheaply than human workers.
In 2023, OpenAI leaders Sam Altman, Greg Brockman, and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2024, Ilya Sutskever left OpenAI to cofound the startup ''Safe Superintelligence'', which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles".
Design considerations
The design of superintelligent AI systems raises critical questions about what values and goals these systems should have. Several proposals have been put forward:
Value alignment proposals
* Coherent extrapolated volition (CEV) – The AI should have the values upon which humans would converge if they were more knowledgeable and rational.
* Moral rightness (MR) – The AI should be programmed to do what is morally right, relying on its superior cognitive abilities to determine ethical actions.
* Moral permissibility (MP) – The AI should stay within the bounds of moral permissibility while otherwise pursuing goals aligned with human values (similar to CEV).
Bostrom elaborates on these concepts:
instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)...
MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong...
One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on ''moral permissibility'': the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.
Recent developments
Since Bostrom's analysis, new approaches to AI value alignment have emerged:
* Inverse Reinforcement Learning (IRL) – This technique aims to infer human preferences from observed behavior, potentially offering a more robust approach to value alignment (a toy sketch follows this list).
* Constitutional AI – Proposed by Anthropic, this involves training AI systems with explicit ethical principles and constraints.
* Debate and amplification – These techniques, explored by OpenAI, use AI-assisted debate and iterative processes to better understand and align with human values.
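As a toy illustration of the inference underlying IRL, the following sketch recovers linear reward weights from simulated choices under an assumed Boltzmann-rational (softmax) choice model; the features, data, and model are illustrative assumptions, not a description of any deployed alignment method:

```python
# Toy reward inference from observed choices, in the spirit of IRL.
import numpy as np

rng = np.random.default_rng(0)

# Each option is described by a feature vector; the hidden human
# preference is assumed to be a linear function of those features.
options = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.5, 0.5]])
true_w = np.array([2.0, -1.0])

def softmax(u):
    z = np.exp(u - u.max())
    return z / z.sum()

# Simulate 500 noisy-rational human choices among the options.
choices = rng.choice(len(options), size=500, p=softmax(options @ true_w))
observed = np.bincount(choices, minlength=len(options)) / len(choices)

# Fit reward weights by gradient ascent on the choice log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    p = softmax(options @ w)
    w += 0.1 * options.T @ (observed - p)  # log-likelihood gradient

# The recovered weights reproduce the same choice probabilities as true_w
# (with these features, w is identifiable only up to a constant shift).
print("recovered:", w, "predicted choice probs:", softmax(options @ w))
```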
Transformer LLMs and ASI
The rapid advancement of transformer-based LLMs has led to speculation about their potential path to ASI. Some researchers argue that scaled-up versions of these models could exhibit ASI-like capabilities:
* Emergent abilities – As LLMs increase in size and complexity, they demonstrate unexpected capabilities not present in smaller models.
* In-context learning – LLMs show the ability to adapt to new tasks without fine-tuning, potentially mimicking general intelligence (see the example after this list).
* Multi-modal integration – Recent models can process and generate various types of data, including text, images, and audio.
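A minimal example of the few-shot prompting that exhibits in-context learning, in the style of the format popularized by the GPT-3 paper (the demonstrations are illustrative; no particular model or API is assumed):

```python
# A few-shot prompt: the task is specified entirely by in-prompt
# examples, with no weight updates to the model.
prompt = """Translate English to French.

sea otter -> loutre de mer
cheese -> fromage
mint ->"""

# A sufficiently capable LLM typically completes this with "menthe",
# inferring the translation task from the demonstrations alone.
print(prompt)
```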
However, critics argue that current LLMs lack true understanding and are merely sophisticated pattern matchers, raising questions about their suitability as a path to ASI.
Other perspectives on artificial superintelligence
Additional viewpoints on the development and implications of superintelligence include:
* Recursive self-improvement – I. J. Good proposed the concept of an "intelligence explosion", where an AI system could rapidly improve its own intelligence, potentially leading to superintelligence.
* Orthogonality thesis – Bostrom argues that an AI's level of intelligence is orthogonal to its final goals, meaning a superintelligent AI could have any set of motivations.
* Instrumental convergence – Certain instrumental goals (e.g., self-preservation, resource acquisition) might be pursued by a wide range of AI systems, regardless of their final goals.
Challenges and ongoing research
The pursuit of value-aligned AI faces several challenges:
* Philosophical uncertainty in defining concepts like "moral rightness"
* Technical complexity in translating ethical principles into precise algorithms
* Potential for unintended consequences even with well-intentioned approaches
Current research directions include multi-stakeholder approaches to incorporate diverse perspectives, developing methods for scalable oversight of AI systems, and improving techniques for robust value learning.
As AI research progresses rapidly towards superintelligence, addressing these design challenges remains crucial for creating ASI systems that are both powerful and aligned with human interests.
Potential threat to humanity
The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat:
Intelligence explosion and control problem
Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965: he argued that an "ultraintelligent machine" could design even better machines, setting off a feedback loop that would leave human intelligence far behind.
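A toy numerical sketch, under purely illustrative assumptions rather than Good's own formulation, of why such a feedback loop can be explosive: if capability I improves at a rate that itself grows with I, growth is at least exponential, and super-linear returns diverge in finite time.

```python
# Toy feedback-loop model (an illustrative assumption): intelligence I
# improves at a rate that depends on I itself, via dI/dt = k * I**alpha.
def simulate(alpha, k=0.1, dt=0.1, steps=300):
    I = 1.0
    for step in range(steps):
        I += k * (I ** alpha) * dt
        if I > 1e12:  # treat this as "runaway" growth
            return f"alpha={alpha}: runaway at t={step * dt:.1f}"
    return f"alpha={alpha}: I={I:.1f} at t={steps * dt:.0f}"

for alpha in (0.5, 1.0, 1.5):
    # alpha < 1: polynomial growth; alpha = 1: exponential growth;
    # alpha > 1: divergence in finite time (an "explosion").
    print(simulate(alpha))
```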
This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences. Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control.
Unintended consequences and goal misalignment
Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk: an ASI given the sole objective of maximizing paperclip production might convert all available resources, humanity included, toward that end. Stuart Russell offers a similar illustrative scenario of a system pursuing a literally specified objective at catastrophic cost to unstated human values.
These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment.
Potential mitigation strategies
Researchers have proposed various approaches to mitigate risks associated with ASI:
* Capability control – Limiting an ASI's ability to influence the world, such as through physical isolation or restricted access to resources.
* Motivational control – Designing ASIs with goals that are fundamentally aligned with human values.
* Ethical AI – Incorporating ethical principles and decision-making frameworks into ASI systems.
* Oversight and governance – Developing robust international frameworks for the development and deployment of ASI technologies.
Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development.
Debate and skepticism
Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks, argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress. Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats.
Recent developments and current perspectives
The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities.
* LLM capabilities – Recent LLMs like GPT-4 have demonstrated unexpected abilities in areas such as reasoning, problem-solving, and multi-modal understanding, leading some to speculate about their potential path to ASI.
* Emergent behaviors – Studies have shown that as AI models increase in size and complexity, they can exhibit emergent capabilities not present in smaller models, potentially indicating a trend towards more general intelligence.
* Rapid progress – The pace of AI advancement has led some to argue that we may be closer to ASI than previously thought, with potential implications for existential risk.
A minority of researchers and observers, including some in the AI development community, believe that current AI systems may already be at or near AGI levels, with ASI potentially following in the near future. This view, while not widely accepted in the scientific community, is based on observations of rapid progress in AI capabilities and unexpected emergent behaviors in large models.
However, many experts caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence.
They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.
The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance.
See also
* Artificial general intelligence
* AI safety
* AI takeover
* Artificial brain
* Artificial intelligence arms race
* Effective altruism
* Ethics of artificial intelligence
* Existential risk
* Friendly artificial intelligence
* Future of Humanity Institute
* Intelligent agent
* Machine ethics
* Machine Intelligence Research Institute
* Machine learning
* Outline of artificial intelligence
* Posthumanism
* Robotics
* Self-replication
* Self-replicating machine
* ''Superintelligence: Paths, Dangers, Strategies''
External links
* Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence"
* Will Superintelligent Machines Destroy Humanity?
* Apple Co-founder Has Sense of Foreboding About Artificial Superintelligence