In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.
Overview
An artificial general intelligence having undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not directly micromanage everything in its domain; it could allow diverse forms of organization within itself, albeit guaranteed to function within strict parameters. A singleton need not support a civilization, and in fact could obliterate it upon coming to power.
A singleton need not be an organisation, a government, or an AI. It could also form through all people coming to accept the same values or goals; according to Bostrom, people acting in concert in this way would constitute an "agency" in the broad sense of the term.
[Nick Bostrom (2006). "What is a Singleton?", ''Linguistic and Philosophical Investigations'' 5(2): 48–54.]
A singleton has both potential risks and potential benefits. Notably, a suitable singleton could solve world coordination problems that would not otherwise be solvable, opening up otherwise unavailable developmental trajectories for civilization. For example, Ben Goertzel, an AGI researcher, suggests humans may instead decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers" to protect the human race from existential risks like nanotechnology, and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved. Furthermore, Bostrom suggests that a singleton could hold Darwinian evolutionary pressures in check, preventing agents interested only in reproduction from coming to dominate.
Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk. The very stability of a singleton makes the installation of a ''bad'' singleton especially catastrophic, since the consequences can never be undone.
Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".
Similarly, Hans Morgenthau stressed that the mechanical development of weapons, transportation, and communication makes "the conquest of the world technically possible, and they make it technically possible to keep the world in that conquered state". The lack of such technology, he argued, was why the great ancient empires, though vast, fell short of conquering the whole of their world and perpetuating the conquest. Now, however, this is possible, since technology overcomes both geographic and climatic barriers: "Today no technological obstacle stands in the way of a world-wide empire", as "modern technology makes it possible to extend the control of mind and action to every corner of the globe regardless of geography and season."
[Hans Morgenthau, ''Politics Among Nations: The Struggle for Power and Peace'', 4th edition, New York: Alfred A. Knopf, 1967, pp. 358–365.]
See also
* AI takeover
* Existential risk
* Friendly artificial intelligence
* Superintelligence
* Superpower
References