History Of Artificial Intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain. The field of AI research was founded at a workshop held on the campus of Dartmouth College in 1956. Attendees of the workshop became the leaders of AI research for decades, and many of them predicted that machines as intelligent as humans would exist within a generation.

The U.S. government provided millions of dollars with the hope of making this vision come true, but it eventually became obvious that researchers had grossly underestimated the difficulty of the feat. In 1974, criticism from James Lighthill and pressure from the U.S. Congress led the U.S. and British governments to stop funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government and the success of expert systems reinvigorated investment in AI, and by the late 1980s the industry had grown into a billion-dollar enterprise. However, investors' enthusiasm waned in the 1990s, and the field was criticized in the press and avoided by industry (a period known as an "AI winter").

In the early 2000s, machine learning was applied to a wide range of problems in academia and industry. The success was due to the availability of powerful computer hardware, the collection of immense data sets, and the application of solid mathematical methods. Soon after, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, among other use cases. Investment in AI boomed in the 2020s: this recent AI boom, initiated by the transformer architecture, led to the rapid scaling and public release of large language models (LLMs) such as ChatGPT. These models exhibit human-like traits of knowledge, attention, and creativity, and have been integrated into various sectors, fueling exponential investment in AI. At the same time, concerns about the potential risks and ethical implications of advanced AI have emerged, prompting debate about the future of AI and its impact on society.


Precursors


Mythical, fictional, and speculative precursors


Myth and legend

In Greek mythology, Talos was a creature made of bronze who acted as guardian for the island of Crete. He would throw boulders at the ships of invaders and would complete three circuits around the island's perimeter daily. According to pseudo-Apollodorus' ''Bibliotheke'', Hephaestus forged Talos with the aid of a cyclops and presented the automaton as a gift to Minos. In the ''Argonautica'', Jason and the Argonauts defeated Talos by removing a plug near his foot, causing the vital ichor to flow out from his body and rendering him lifeless.

Pygmalion was a legendary king and sculptor of Greek mythology, famously represented in Ovid's ''Metamorphoses''. In the 10th book of Ovid's narrative poem, Pygmalion becomes disgusted with women when he witnesses the way in which the Propoetides prostitute themselves. Despite this, he makes offerings at the temple of Venus asking the goddess to bring to him a woman just like a statue he carved.


Medieval legends of artificial beings

In ''Of the Nature of Things'', the Swiss alchemist Paracelsus describes a procedure that he claims can fabricate an "artificial man": by placing the "sperm of a man" in horse dung and feeding it the "Arcanum of Mans blood" after 40 days, the concoction would become a living infant. The earliest written account regarding golem-making is found in the writings of Eleazar ben Judah of Worms in the early 13th century. During the Middle Ages, it was believed that the animation of a golem could be achieved by inserting a piece of paper bearing any of God's names into the mouth of the clay figure. Unlike legendary automata such as brazen heads, a golem was unable to speak. ''Takwin'', the artificial creation of life, was a frequent topic of Ismaili alchemical manuscripts, especially those attributed to Jabir ibn Hayyan. Islamic alchemists attempted to create a broad range of life through their work, ranging from plants to animals. In ''Faust: The Second Part of the Tragedy'' by Johann Wolfgang von Goethe, an alchemically fabricated homunculus, destined to live forever in the flask in which he was made, endeavors to be born into a full human body. Upon the initiation of this transformation, however, the flask shatters and the homunculus dies.


Modern fiction

By the 19th century, ideas about artificial men and thinking machines became a popular theme in fiction. Notable works like Mary Shelley's ''Frankenstein'' and Karel Čapek's ''R.U.R. (Rossum's Universal Robots)'' explored the concept of artificial life. Speculative essays, such as Samuel Butler's "Darwin among the Machines" and Edgar Allan Poe's "Maelzel's Chess Player", reflected society's growing interest in machines with artificial intelligence. AI remains a common topic in science fiction today.


Automata

Realistic humanoid automata were built by craftsmen from many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari, Haroun al-Rashid, Jacques de Vaucanson, Leonardo Torres y Quevedo, Pierre Jaquet-Droz and Wolfgang von Kempelen. The oldest known automata were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it". The English scholar Alexander Neckham asserted that the ancient Roman poet Virgil had built a palace with automaton statues.

During the early modern period, these legendary automata were said to possess the magical ability to answer questions put to them. The late medieval alchemist and proto-Protestant Roger Bacon, around whom a legend of wizardry had grown, was purported to have fabricated a brazen head. These legends were similar to the Norse myth of the head of Mímir. According to legend, Mímir was known for his intellect and wisdom, and was beheaded in the Æsir–Vanir War. Odin is said to have "embalmed" the head with herbs and spoken incantations over it, such that Mímir's head remained able to speak wisdom to Odin, who then kept it near him for counsel.


Formal reasoning

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical—or "formal"—reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction by the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose ''Elements'' was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to the word ''algorithm''), and European scholastic philosophers such as William of Ockham and Duns Scotus.

The Spanish philosopher Ramon Llull (1232–1315) developed several ''logical machines'' devoted to the production of knowledge by logical means. Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, carried out by mechanical means, in such a way as to produce all possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in ''Leviathan'': "For ''reason'' ... is nothing but ''reckoning'', that is adding and subtracting". Leibniz envisioned a universal language of reasoning, the ''characteristica universalis'', which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, down to their slates, and to say to each other (with a friend as witness, if they liked): ''Let us calculate''." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

The study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's ''The Laws of Thought'' and Frege's ''Begriffsschrift''. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the ''Principia Mathematica'', in 1913. Inspired by Russell's success, David Hilbert challenged the mathematicians of the 1920s and 30s to answer the fundamental question: "can all of mathematical reasoning be formalized?" His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus. Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI), their work suggested that, within these limits, ''any'' form of mathematical reasoning could be mechanized. The Church–Turing thesis implied that a mechanical device, shuffling symbols as simple as ''0'' and ''1'', could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine—a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.
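To make the idea concrete, here is a minimal Python sketch of a Turing machine, assuming a single tape stored as a dictionary and a hypothetical transition table that increments a binary number. None of this notation comes from Turing's paper or the text above; it only illustrates how a short table of purely mechanical rules can carry out symbol manipulation.

```python
# A minimal sketch of a Turing machine: a finite control reads and writes
# symbols on a tape according to a fixed transition table. The example
# program below is a hypothetical binary incrementer (adds 1 to the number
# written on the tape).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: (state, read) -> (write, move, next_state).
# "start" walks right to the end of the number; "carry" adds 1 from the right.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: write a new leading 1
}

print(run_turing_machine("1011", increment))  # -> "1100" (11 + 1 = 12)
```

The point is only that mechanical rules over the symbols ''0'' and ''1'' suffice to perform a step of arithmetic, which is the sense in which such a device can imitate mathematical deduction.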


Computer science

Calculating machines were designed or built in antiquity and throughout history by many people, including Gottfried Leibniz, Joseph Marie Jacquard, Charles Babbage, Percy Ludgate, Leonardo Torres Quevedo, Vannevar Bush, and others. Ada Lovelace speculated that Babbage's machine was "a thinking or ... reasoning machine", but warned that "It is desirable to guard against the possibility of exaggerated ideas that arise as to the powers" of the machine.

The first modern computers were the massive machines of the Second World War (such as Konrad Zuse's Z3, Alan Turing's Heath Robinson and Colossus, Atanasoff and Berry's ABC, and ENIAC at the University of Pennsylvania). ENIAC was based on the theoretical foundation laid by Alan Turing and developed by John von Neumann, and proved to be the most influential.


Birth of artificial intelligence (1941-56)

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an "electronic brain".

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) explored several research directions that would be vital to later AI research. Alan Turing was among the first people to seriously investigate the theoretical possibility of "machine intelligence". The field of "artificial intelligence research" was founded as an academic discipline in 1956.


Turing Test

In 1950 Turing published a landmark paper, "Computing Machinery and Intelligence", in which he speculated about the possibility of creating machines that think. In the paper, he noted that "thinking" is difficult to define and devised his famous Turing Test: if a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least ''plausible'', and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.
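As a loose illustration only (Turing described the imitation game informally, not as a statistical procedure), the test can be read as asking whether a judge can pick out the machine better than chance. The judge, transcripts, and trial count in the Python sketch below are hypothetical.

```python
# A minimal sketch of the imitation game treated as a repeated trial:
# a judge labels each transcript as "machine" or "human"; if the judge's
# accuracy is no better than chance, the machine is indistinguishable
# to that judge.
import random

def imitation_game(judge, machine_replies, human_replies, trials=1000):
    """Fraction of trials in which the judge identifies the interlocutor correctly."""
    correct = 0
    for _ in range(trials):
        is_machine = random.random() < 0.5          # secretly pick machine or human
        transcript = random.choice(machine_replies if is_machine else human_replies)
        verdict = judge(transcript)                  # judge answers "machine" or "human"
        if verdict == ("machine" if is_machine else "human"):
            correct += 1
    return correct / trials

# A judge who cannot tell the transcripts apart scores about 0.5 (chance level),
# and the machine "passes" with respect to that judge.
naive_judge = lambda transcript: random.choice(["machine", "human"])
print(imitation_game(naive_judge, ["hello there"], ["hi, how are you?"]))
```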


Neuroscience and Hebbian theory

Donald Hebb was a Canadian psychologist whose work laid the foundation for modern neuroscience, particularly in understanding learning, memory, and neural plasticity. His most influential book, ''The Organization of Behavior'' (1949), introduced the concept of Hebbian learning, often summarized as "cells that fire together wire together". Hebb began formulating the foundational ideas for the book in the early 1940s, particularly during his time at the Yerkes Laboratories of Primate Biology from 1942 to 1947. He made extensive notes between June 1944 and March 1945 and sent a complete draft to his mentor Karl Lashley in 1946, but the manuscript was not published until 1949, a delay caused by various factors including World War II and shifts in academic focus. By the time it appeared, several of his peers had already published related ideas, making Hebb's work seem less groundbreaking at first glance. However, his synthesis of psychological and neurophysiological principles became a cornerstone of neuroscience and machine learning.
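Hebb stated the principle verbally; the standard modern formulation (an interpretation, not his notation) writes it as a weight update proportional to the co-activation of two units, roughly Δw = η·x·y. The Python sketch below uses that formulation.

```python
# A minimal, illustrative sketch of the Hebbian learning rule: the weight
# between two units grows when their activities coincide ("cells that fire
# together wire together").
#   delta_w[i][j] = learning_rate * pre_activity[i] * post_activity[j]

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Return weights strengthened in proportion to pre/post co-activation."""
    return [
        [w + learning_rate * x * y for w, y in zip(row, post)]
        for row, x in zip(weights, pre)
    ]

weights = [[0.0, 0.0], [0.0, 0.0]]        # 2 presynaptic x 2 postsynaptic units
pre, post = [1.0, 0.0], [1.0, 1.0]        # only the first input unit fires
weights = hebbian_update(weights, pre, post)
print(weights)  # [[0.1, 0.1], [0.0, 0.0]] -- only co-active pairs strengthen
```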


Artificial neural networks

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions in 1943. They were the first to describe what later researchers would call a neural network. The paper was influenced by Turing's 1936 paper "On Computable Numbers", using a similar notion of two-state Boolean "neurons", but was the first to apply such ideas to neuronal function. One of the students inspired by Pitts and McCulloch was Marvin Minsky, then a 24-year-old graduate student. In 1951 Minsky and Dean Edmonds built the first neural net machine, the SNARC. Minsky would later become one of the most important leaders and innovators in AI.
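The sketch below shows, under the usual textbook simplification, how such threshold units compute elementary logical functions. The specific weights and thresholds are illustrative choices, not values taken from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts unit: a neuron fires (outputs 1)
# when the weighted sum of its binary inputs reaches a threshold. With
# suitable weights and thresholds, such units compute simple logic gates.

def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: 1 if the weighted input sum meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mcculloch_pitts([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcculloch_pitts([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mcculloch_pitts([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```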


Cybernetic robots

Experimental robots such as W. Grey Walter's turtles and the Johns Hopkins Beast were built in the 1950s. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.


Game AI

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, the subject of his 1959 paper "Some Studies in Machine Learning Using the Game of Checkers", eventually achieved sufficient skill to challenge a respectable amateur. Samuel's program was among the first uses of what would later be called machine learning. Game AI would continue to be used as a measure of progress in AI throughout its history.


Symbolic reasoning and the Logic Theorist

When access to digital computers became possible in the mid-fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines. In 1955, Allen Newell and future Nobel laureate Herbert A. Simon created the "Logic Theorist", with help from J. C. Shaw. The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's ''Principia Mathematica'', and find new and more elegant proofs for some. Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind." The symbolic reasoning paradigm they introduced would dominate AI research and funding until the mid-1990s, as well as inspire the cognitive revolution.


Dartmouth Workshop

The Dartmouth workshop of 1956 was a pivotal event that marked the formal inception of AI as an academic discipline. It was organized by Marvin Minsky and John McCarthy, with the support of two senior scientists, Claude Shannon and Nathan Rochester of IBM. The proposal for the conference stated that they intended to test the assertion that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it". The term "Artificial Intelligence" was introduced by John McCarthy at the workshop. The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the workshop Newell and Simon debuted the "Logic Theorist". The workshop was the moment that AI gained its name, its mission, its first major success and its key players, and is widely considered the birth of AI.


Cognitive revolution

In the autumn of 1956, Newell and Simon also presented the Logic Theorist at a meeting of the Special Interest Group in Information Theory at the Massachusetts Institute of Technology (MIT). At the same meeting, Noam Chomsky discussed his generative grammar, and George Miller described his landmark paper "The Magical Number Seven, Plus or Minus Two". Miller wrote: "I left the symposium with a conviction, more intuitive than rational, that experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes were all pieces from a larger whole."

This meeting was the beginning of the "cognitive revolution"—an interdisciplinary paradigm shift in psychology, philosophy, computer science and neuroscience. It inspired the creation of the sub-fields of symbolic artificial intelligence, generative linguistics, cognitive science, cognitive psychology, cognitive neuroscience and the philosophical schools of computationalism and functionalism. All these fields used related tools to model the mind, and results discovered in one field were relevant to the others.

The cognitive approach allowed researchers to consider "mental objects" like thoughts, plans, goals, facts or memories, often analyzed using high-level symbols in functional networks. These objects had been forbidden as "unobservable" by earlier paradigms such as behaviorism. Symbolic mental objects would become the major focus of AI research and funding for the next several decades.


Early successes (1956-1974)

The programs developed in the years after the Dartmouth Workshop were, to most people, simply "astonishing": computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. Researchers expressed intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. Government agencies like the Defense Advanced Research Projects Agency (DARPA, then known as "ARPA") poured money into the field. Artificial intelligence laboratories were set up at a number of British and US universities in the late 1950s and early 1960s.


Approaches

There were many successful programs and new directions in the late 1950s and 1960s. Among the most influential were these:


Reasoning, planning and problem solving as search

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. The principal difficulty was that, for many problems, the number of possible paths through the "maze" was astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics that eliminated paths unlikely to lead to a solution. Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver". Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and the Symbolic Automatic Integrator (SAINT), written by Minsky's student James Slagle in 1961. Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at the Stanford Research Institute (SRI) to control the behavior of the robot Shakey.
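To make the idea concrete, here is a minimal sketch in Python of depth-first search with heuristic ordering, pruning and backtracking. It is an illustration only, not the General Problem Solver or any historical program; `successors`, `is_goal` and `heuristic` are hypothetical callables supplied by a particular problem.

```python
def heuristic_search(state, successors, is_goal, heuristic, limit=50, path=None):
    """Depth-first search with heuristic ordering and backtracking.

    successors(state) -> iterable of next states
    heuristic(state)  -> lower is better; float('inf') prunes the branch
    is_goal(state)    -> True when the goal is reached
    """
    if path is None:
        path = [state]
    if is_goal(state):
        return path
    if len(path) > limit:              # give up on very deep branches
        return None
    # Order candidate moves by the heuristic and drop hopeless ones.
    for nxt in sorted(successors(state), key=heuristic):
        if heuristic(nxt) == float('inf') or nxt in path:
            continue                   # prune, or avoid revisiting a state
        result = heuristic_search(nxt, successors, is_goal, heuristic, limit, path + [nxt])
        if result is not None:
            return result              # success somewhere down this branch
    return None                        # dead end: backtrack to the caller
```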


Natural language

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems. A semantic net represents concepts (e.g. "house", "door") as nodes, and relations among concepts as links between the nodes (e.g. "has-a"). The first AI program to use a semantic net was written by Ross Quillian, and the most successful (and controversial) version was Roger Schank's conceptual dependency theory. Joseph Weizenbaum's ELIZA could carry out conversations so realistic that users were occasionally fooled into thinking they were communicating with a human being and not a computer program (see ELIZA effect). But in fact, ELIZA simply gave a canned response or repeated back what was said to it, rephrasing its response with a few grammar rules. ELIZA was the first chatbot.
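For illustration, a semantic net reduces to a very small data structure. The sketch below is my own example, not Quillian's program: concepts are stored as nodes and labeled relations as links between them.

```python
from collections import defaultdict

class SemanticNet:
    """A tiny semantic net: concepts are nodes, relations are labeled links."""
    def __init__(self):
        self.links = defaultdict(set)        # (concept, relation) -> set of concepts

    def add(self, concept, relation, other):
        self.links[(concept, relation)].add(other)

    def query(self, concept, relation):
        return self.links[(concept, relation)]

net = SemanticNet()
net.add("house", "has-a", "door")
net.add("house", "is-a", "building")
net.add("door", "has-a", "handle")

print(net.query("house", "has-a"))           # {'door'}
```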


Micro-worlds

In the late 1960s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.
This paradigm led to innovative work in machine vision by Gerald Sussman, Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. Terry Winograd's SHRDLU could communicate in ordinary English sentences about the micro-world, plan operations and execute them.
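As a rough illustration of the blocks-world domain (my own sketch, not SHRDLU or the MIT robot arm), the code below represents a state as a list of stacks and applies one legal move at a time: only a block on top of a stack may be picked up.

```python
def move(state, block, dest):
    """Move `block` onto the block `dest` (or the table) if it is clear.

    A state is a list of stacks, e.g. [['A', 'B'], ['C']] means B sits on A.
    dest=None puts the block on the table as a new stack.
    """
    stacks = [list(s) for s in state]                 # copy: treat states as values
    src = next(s for s in stacks if s and s[-1] == block)
    src.pop()
    if dest is None:
        stacks.append([block])                        # start a new stack on the table
    else:
        target = next(s for s in stacks if s and s[-1] == dest)
        target.append(block)
    return [s for s in stacks if s]                   # drop empty stacks

state = [['A', 'B'], ['C']]
state = move(state, 'B', 'C')                         # put B on C
state = move(state, 'A', 'B')                         # stack A on top: goal C-B-A
print(state)                                          # [['C', 'B', 'A']]
```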


Perceptrons and early neural networks

In the 1960s funding was primarily directed towards laboratories researching symbolic AI, but several people still pursued research in neural networks. The perceptron, a single-layer neural network, was introduced in 1958 by Frank Rosenblatt (who had been a schoolmate of Marvin Minsky at the Bronx High School of Science). Like most AI researchers, he was optimistic about their power, predicting that a perceptron "may eventually be able to learn, make decisions, and translate languages." Rosenblatt was primarily funded by the Office of Naval Research. Bernard Widrow and his student Ted Hoff built ADALINE (1960) and MADALINE (1962), which had up to 1000 adjustable weights. A group at the Stanford Research Institute led by Charles A. Rosen and Alfred E. (Ted) Brain built two neural network machines named MINOS I (1960) and II (1963), mainly funded by the U.S. Army Signal Corps. MINOS II had 6600 adjustable weights and was controlled with an SDS 910 computer in a configuration named MINOS III (1968), which could classify symbols on army maps and recognize hand-printed characters on Fortran coding sheets. Most neural network research during this early period involved building and using bespoke hardware, rather than simulation on digital computers. However, partly due to a lack of results and partly due to competition from symbolic AI research, the MINOS project ran out of funding in 1966. Rosenblatt failed to secure continued funding in the 1960s. In 1969, research came to a sudden halt with the publication of Minsky and Papert's book ''Perceptrons''. It suggested that there were severe limitations to what perceptrons could do and that Rosenblatt's predictions had been grossly exaggerated. The effect of the book was that virtually no research was funded in connectionism for 10 years. The competition for government funding ended with the victory of symbolic AI approaches over neural networks. Minsky (who had worked on SNARC) became a staunch objector to pure connectionist AI. Widrow (who had worked on ADALINE) turned to adaptive signal processing. The SRI group (which worked on MINOS) turned to symbolic AI and robotics. The main problem was the inability to train multilayered networks (versions of backpropagation had already been used in other fields, but it was unknown to these researchers). The AI community became aware of backpropagation in the 1980s, and in the 21st century neural networks would become enormously successful, fulfilling all of Rosenblatt's optimistic predictions. Rosenblatt did not live to see this, however, as he died in a boating accident in 1971.
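For readers unfamiliar with the model, the sketch below shows a single-layer perceptron trained with the classic error-correction rule. It is an illustration in Python, not Rosenblatt's Mark I hardware; the AND-gate dataset is just an assumed example of a linearly separable problem.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for a single-layer perceptron.

    samples: list of (inputs, target) pairs where target is 0 or 1.
    The update is the classic perceptron rule: w += lr * error * x.
    """
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Hypothetical example: the linearly separable AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)   # weights and bias defining a separating hyperplane for AND
```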


Optimism

The first generation of AI researchers made these predictions about their work:
* 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
* 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
* 1967, Marvin Minsky: "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved."
* 1970, Marvin Minsky (in ''Life'' magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."


Financing

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (ARPA, later known as DARPA). The money was used to fund Project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide $3 million each year until the 1970s. DARPA made similar grants to Newell and Simon's program at Carnegie Mellon University and to Stanford University's AI Lab, founded by John McCarthy in 1963. Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965. These four institutions would continue to be the main centers of AI research and funding in academia for many years. The money was given with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them. This created a freewheeling atmosphere at MIT that gave birth to the hacker culture, but this "hands off" approach did not last.


First AI Winter (1974–1980)

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised public expectations impossibly high, and when the promised results failed to materialize, funding targeted at AI was severely reduced. The lack of success indicated that the techniques being used by AI researchers at the time were insufficient to achieve their goals. These setbacks did not affect the growth and progress of the field, however. The funding cuts only impacted a handful of major laboratories, and the critiques were largely ignored. General public interest in the field continued to grow, the number of researchers increased dramatically, and new ideas were explored in logic programming, commonsense reasoning and many other areas. Historian Thomas Haigh argued in 2023 that there was no winter, and AI researcher Nils Nilsson described this period as the most "exciting" time to work in AI.


Problems

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys". AI researchers had begun to run into several limits that would only be conquered decades later, and others that still stymie the field in the 2020s:
* Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only 20 words, because that was all that would fit in memory. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy. "With enough horsepower," he wrote, "anything will fly".
* Intractability and the combinatorial explosion: In 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time. Finding optimal solutions to these problems requires extraordinary amounts of computer time, except when the problems are trivial. This limitation applied to all symbolic AI programs that used search trees and meant that many of the "toy" solutions used by AI would never scale to useful systems (a short sketch after this list illustrates how quickly such search trees grow).
* Moravec's paradox: Early AI research had been very successful at getting computers to do "intelligent" tasks like proving theorems, solving geometry problems and playing chess. Their success at these intelligent tasks convinced researchers that the problem of intelligent behavior had been largely solved. However, they utterly failed to make progress on "unintelligent" tasks like recognizing a face or crossing a room without bumping into anything. By the 1980s, researchers would realize that symbolic reasoning was utterly unsuited for these perceptual and sensorimotor tasks and that there were limits to this approach.
* The breadth of commonsense knowledge: Many important artificial intelligence applications like vision or natural language require enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a vast amount of information, with billions of atomic facts. No one in 1970 could build a database large enough, and no one knew how a program might learn so much information.
* Representing commonsense reasoning: A number of related problems appeared when researchers tried to represent commonsense reasoning using formal logic or symbols. Descriptions of very ordinary deductions tended to get longer and longer the more one worked on them, as more and more exceptions, clarifications and distinctions were required. However, when people thought about ordinary concepts they did not rely on precise definitions; rather, they seemed to make hundreds of imprecise assumptions, correcting them when necessary using their entire body of commonsense knowledge. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."


Decrease in funding

The agencies which funded AI research, such as the British government, DARPA and the National Research Council (NRC), became frustrated with the lack of progress and eventually cut off almost all funding for undirected AI research. The pattern began in 1966 when the Automatic Language Processing Advisory Committee (ALPAC) report criticized machine translation efforts. After spending $20 million, the NRC ended all support. In 1973, the Lighthill report on the state of AI research in the UK criticized the failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country. (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.) DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of $3 million. Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration." However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 1960s would not come from DARPA, which instead directed money at specific projects with clear objectives, such as autonomous tanks and battle management systems. The major laboratories (MIT, Stanford, CMU and Edinburgh) had been receiving generous support from their governments, and when it was withdrawn, these were the only places seriously affected by the budget cuts. The thousands of researchers outside these institutions, and the many more thousands who were joining the field, were unaffected.


Philosophical and ethical critiques

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could. Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how". John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as "thinking". These critiques were not taken seriously by AI researchers. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. MIT's Minsky said of Dreyfus and Searle, "they misunderstand, and should be ignored." Dreyfus, who also taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, was also an outspoken critic of Dreyfus' positions, but he "deliberately made it plain that [his] AI colleagues' treatment of Dreyfus was not the way to treat a human being," and was unprofessional and childish. Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA. Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published ''Computer Power and Human Reason'', which argued that the misuse of artificial intelligence has the potential to devalue human life.


Logic at Stanford, CMU and Edinburgh

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal. In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems. A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to a collaboration with the French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog. Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition. Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided evidence. McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.
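Real Prolog answers queries by backward chaining with unification over variables. As a rough, variable-free stand-in (a propositional sketch of my own, not Prolog itself), the function below tries to prove a goal from facts and Horn-clause rules, treating anything unprovable as false in the spirit of negation as failure.

```python
def prove(goal, facts, rules, depth=0):
    """Backward chaining over propositional Horn clauses, Prolog-style.

    facts: set of atoms known to be true.
    rules: list of (head, body) pairs meaning: head holds if every atom
           in body can be proven. An unprovable goal is treated as false
           (negation as failure / the closed world assumption).
    """
    if depth > 100:                      # crude guard against circular rules
        return False
    if goal in facts:
        return True
    for head, body in rules:
        if head == goal and all(prove(sub, facts, rules, depth + 1) for sub in body):
            return True
    return False                         # not provable: assume false

# Hypothetical knowledge base.
facts = {"parent_tom_bob", "parent_bob_ann"}
rules = [("grandparent_tom_ann", ["parent_tom_bob", "parent_bob_ann"])]
print(prove("grandparent_tom_ann", facts, rules))   # True
print(prove("grandparent_ann_tom", facts, rules))   # False (negation as failure)
```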


MIT's "anti-logic" approach

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. MIT chose instead to focus on writing programs that solved a given task without using high-level abstract definitions or general theories of cognition, and measured performance by iterative testing, rather than arguments from first principles. Schank described their "anti-logic" approaches as ''scruffy'', as opposed to the ''neat'' paradigm used by McCarthy, Kowalski, Feigenbaum, Newell and Simon. In 1975, in a seminal paper, Minsky noted that many of his fellow researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on (none of which are true for all birds). Minsky associated these assumptions with the general category, and they could be ''inherited'' by the frames for subcategories and individuals, or overridden as necessary. He called these structures ''frames'' (a small sketch of this default-and-override idea follows at the end of this section). Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English. Frames would eventually be widely used in software engineering under the name object-oriented programming. The logicians rose to the challenge. Pat Hayes claimed that "most of 'frames' is just a new syntax for parts of first-order logic." But he noted that "there are one or two apparently minor details which give a lot of trouble, however, especially defaults". Ray Reiter admitted that "conventional logics, such as first-order logic, lack the expressive power to adequately represent the knowledge required for reasoning by default". He proposed augmenting first-order logic with a closed world assumption that a conclusion holds (by default) if its contrary cannot be shown. He showed how such an assumption corresponds to the common sense assumption made in reasoning with frames. He also showed that it has its "procedural equivalent" as negation as failure in Prolog. The closed world assumption, as formulated by Reiter, "is not a first-order notion. (It is a meta notion.)" However, Keith Clark showed that negation as ''finite failure'' can be understood as reasoning implicitly with definitions in first-order logic, including a unique name assumption that different terms denote different individuals. During the late 1970s and throughout the 1980s, a variety of logics and extensions of first-order logic were developed both for negation as failure in logic programming and for default reasoning more generally. Collectively, these logics have become known as non-monotonic logics.
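As a rough illustration of the frame idea (my own sketch, not Minsky's notation), the classes below attach default assumptions to a general category and let subcategories and individuals inherit or override them:

```python
class Frame:
    """A frame: a bundle of default slot values, with inheritance from a parent."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look up the slot locally, then fall back to the parent chain (inheritance).
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None                      # unknown: no default applies

bird    = Frame("bird", flies=True, eats="worms")
penguin = Frame("penguin", parent=bird, flies=False)   # override the default
tweety  = Frame("tweety", parent=penguin)

print(tweety.get("flies"))   # False  (inherited from penguin, which overrides bird)
print(tweety.get("eats"))    # worms  (inherited all the way from bird)
```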


Boom (1980–1987)

In the 1980s, a form of AI program called "expert systems" was adopted by corporations around the world, and knowledge became the focus of mainstream AI research. Governments provided substantial funding, such as Japan's Fifth Generation computer project and the U.S. Strategic Computing Initiative. "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988."


Expert systems become widely used

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach. Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem), and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be ''useful'': something that AI had not been able to achieve up to this point. In 1980, an expert system called R1 was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.
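As a toy illustration of the production-rule style (an invented example, not MYCIN or R1), the sketch below fires if-then rules against a set of observed facts until no new conclusions appear:

```python
def run_expert_system(observations, rules):
    """A toy production system: fire if-then rules until nothing new is concluded.

    observations: set of facts supplied by the user.
    rules: list of (conditions, conclusion) pairs elicited from a human expert.
    """
    known = set(observations)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if conclusion not in known and conditions <= known:
                known.add(conclusion)             # the rule fires
                fired = True
    return known - set(observations)              # derived conclusions only

# Entirely made-up diagnostic rules, for illustration only.
rules = [
    ({"engine_cranks", "no_start"}, "suspect_fuel_system"),
    ({"suspect_fuel_system", "empty_tank_light"}, "diagnosis_out_of_fuel"),
]
print(run_expert_system({"engine_cranks", "no_start", "empty_tank_light"}, rules))
# {'suspect_fuel_system', 'diagnosis_out_of_fuel'}  (set order may vary)
```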


Government funding increases

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. Much to the chagrin of scruffies, they initially chose Prolog as the primary computer language for the project (Crevier 1993, p. 195). Other countries responded with new programs of their own. The UK began the £350 million Alvey project (Russell & Norvig 2021, p. 23). A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large-scale projects in AI and information technology (Crevier 1993, p. 240; Russell & Norvig 2021, p. 23). DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988 (McCorduck 2004, pp. 426–432; NRC 1999, under "Shift to Applied Research Increases Investment").


Knowledge revolution

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 1970s. "AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways," writes Pamela McCorduck (McCorduck 2004, p. 299). "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay" (McCorduck 2004, p. 421). Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s. It was hoped that vast databases would solve the commonsense knowledge problem and provide the support that commonsense reasoning required. In the 1980s some researchers attempted to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started a database called Cyc, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand.


New directions in the 1980s

Although symbolic knowledge representation and logical reasoning produced useful applications in the 1980s and received massive amounts of funding, they were still unable to solve problems in perception, robotics, learning and common sense. A small number of scientists and engineers began to doubt that the symbolic approach would ever be sufficient for these tasks, and they developed other approaches, such as "connectionism", robotics, "soft" computing and reinforcement learning. Nils Nilsson called these approaches "sub-symbolic".


Revival of neural networks: "connectionism"

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information, and provably converges after enough time under any fixed condition. It was a breakthrough, as it was previously thought that nonlinear networks would, in general, evolve chaotically (Sejnowski 2018). Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called "backpropagation". (Versions of backpropagation had been developed in several fields, most directly as the reverse mode of automatic differentiation published by Seppo Linnainmaa in 1970; it was applied to neural networks in the 1970s by Paul Werbos (Schmidhuber 2022).) These two developments helped to revive the exploration of artificial neural networks (Russell & Norvig 2021, p. 24; Crevier 1993, pp. 214–215). Neural networks, along with several other similar models, received widespread attention after the 1986 publication of ''Parallel Distributed Processing'', a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. The new field was christened "connectionism", and there was a considerable debate between advocates of symbolic AI and the "connectionists" (Russell & Norvig 2021, p. 24). Hinton called symbols the "luminous aether of AI" – that is, an unworkable and misleading model of intelligence (Russell & Norvig 2021, p. 24). This was a direct attack on the principles that inspired the cognitive revolution. Neural networks started to advance the state of the art in some specialist areas such as protein structure prediction. Following pioneering work by Terry Sejnowski (Qian & Sejnowski 1988, ''Journal of Molecular Biology'' 202(4): 865–884), cascading multilayer perceptrons such as PHD (Rost & Sander 1993, ''PNAS'' 90(16): 7558–7562) and PSIPRED (McGuffin, Bryson & Jones 2000, ''Bioinformatics'' 16(4): 404–405) reached near-theoretical maximum accuracy in predicting secondary structure. In 1990, Yann LeCun at Bell Labs used convolutional neural networks to recognize handwritten digits. The system was used widely in the 1990s, reading zip codes and personal checks. This was the first genuinely useful application of neural networks (Russell & Norvig 2021, p. 26; Christian 2020, pp. 21–22).
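For intuition, here is a minimal sketch of a Hopfield net (my own illustration, not the detail of Hopfield's 1982 paper): weights are set with a Hebbian rule, and repeated asynchronous updates let a noisy pattern settle back toward a stored one.

```python
import random

def train_hopfield(patterns):
    """Store patterns (lists of +1/-1) in a Hopfield net via the Hebbian rule."""
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += p[i] * p[j] / len(patterns)
    return weights

def recall(weights, state, steps=100):
    """Asynchronously update random units; the network settles into an attractor."""
    state = list(state)
    n = len(state)
    for _ in range(steps):
        i = random.randrange(n)
        activation = sum(weights[i][j] * state[j] for j in range(n))
        state[i] = 1 if activation >= 0 else -1
    return state

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
w = train_hopfield(stored)
noisy = [1, -1, 1, -1, 1, 1]          # first pattern with one unit flipped
print(recall(w, noisy))               # usually recovers [1, -1, 1, -1, 1, -1]
```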


Robotics and embodied reason

{{Main, Nouvelle AI, behavior-based AI, situated AI, embodied cognitive science Rodney Brooks,
Hans Moravec
and others argued that, in order to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world.{{sfn, McCorduck, 2004, pp=454–462 Sensorimotor skills are essential to higher-level skills such as commonsense reasoning, and they cannot be implemented efficiently using abstract symbolic reasoning, so AI should solve the problems of perception, mobility, manipulation and survival without using symbolic representation at all. These robotics researchers advocated building intelligence "from the bottom up".{{efn,
Hans Moravec
wrote: "I am confident that this bottom-up route to artificial intelligence will one date meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."{{sfn, Moravec, 1988, p=20 A precursor to this idea was David Marr (neuroscientist), David Marr, who had come to
MIT
in the late 1970s from a successful background in theoretical neuroscience to lead the group studying
vision
. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.){{sfn, Crevier, 1993, pp=183–190 In his 1990 paper "Elephants Don't Play Chess,"{{sfn, Brooks, 1990 robotics researcher Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."{{sfn, Brooks, 1990, p=3 In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the "embodied mind thesis".


Soft computing and probabilistic reasoning

Soft computing uses methods that work with incomplete and imprecise information. They do not attempt to give precise, logical answers, but give results that are only "probably" correct. This allowed them to solve problems that precise symbolic methods could not handle. Press accounts often claimed these tools could "think like a human".{{sfn, Pollack, 1984{{sfn, Pollack, 1989 Judea Pearl's ''Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference'', an influential 1988 book,{{sfn, Pearl, 1988 brought probability and decision theory into AI.{{sfn, Russell, Norvig, 2021, p=25 Fuzzy logic, developed by Lotfi Zadeh in the 1960s, began to be more widely used in AI and robotics. Evolutionary computation and artificial neural networks also handle imprecise information, and are classified as "soft". In the 1990s and early 2000s many other soft computing tools were developed and put into use, including Bayesian networks,{{sfn, Russell, Norvig, 2021, p=25 hidden Markov models,{{sfn, Russell, Norvig, 2021, p=25
information theory
and stochastic modeling. These tools in turn depended on advanced mathematical techniques such as classical optimization. For a time in the 1990s and early 2000s, these soft tools were studied by a subfield of AI called "computational intelligence".{{sfn, Poole, Mackworth, Goebel, 1998
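The flavour of this "probably correct" style of reasoning can be shown with a toy example. The two-variable Bayesian network and all of the probabilities below are invented for illustration: a posterior is computed by multiplying prior and likelihood and normalizing, rather than by deriving a logically certain answer.

<syntaxhighlight lang="python">
# Toy Bayesian network: Rain -> WetGrass, with made-up probabilities.
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

# P(Rain | WetGrass = True) by enumeration: multiply prior and likelihood, then normalize.
joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given_rain[r] for r in (True, False)}
posterior_rain = joint[True] / (joint[True] + joint[False])

print(round(posterior_rain, 3))  # 0.18 / (0.18 + 0.08) ≈ 0.692
</syntaxhighlight>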


Reinforcement learning

Reinforcement learning{{sfn, Russell, Norvig, 2021, loc=Section 23 gives an agent a reward every time it performs a desired action well, and may give negative rewards (or "punishments") when it performs poorly. It was described in the first half of the twentieth century by psychologists using animal models, such as Edward Thorndike,{{sfn, Christian, 2020, pp=120-124{{sfn, Russell, Norvig, 2021, p=819 Ivan Pavlov{{sfn, Christian, 2020, p=124 and B. F. Skinner.{{sfn, Christian, 2020, pp=152-156 In the 1950s,
Alan Turing
{{sfn, Russell, Norvig, 2021, p=819{{sfn, Christian, 2020, p=125 and
Arthur Samuel
{{sfn, Russell, Norvig, 2021, p=819 foresaw the role of reinforcement learning in AI. A successful and influential research program was led by Richard Sutton and Andrew Barto beginning in 1972. Their collaboration revolutionized the study of reinforcement learning and decision making over the following four decades.{{sfn, Christian, 2020, pp=127-129{{sfn, Russell, Norvig, 2021, pp=25, 820 In 1988, Sutton described machine learning in terms of decision theory (i.e., the Markov decision process). This gave the subject a solid theoretical foundation and access to a large body of theoretical results developed in the field of operations research.{{sfn, Russell, Norvig, 2021, pp=25, 820 Also in 1988, Sutton and Barto developed the "temporal difference" (TD) learning algorithm, where the agent is rewarded only when its predictions about the future show improvement. It significantly outperformed previous algorithms.{{sfn, Christian, 2020, p=140 TD-learning was used by Gerald Tesauro in 1992 in the program TD-Gammon, which played backgammon as well as the best human players. The program learned the game by playing against itself with zero prior knowledge.{{sfn, Christian, 2020, p=141 In an interesting case of interdisciplinary convergence, neurologists discovered in 1997 that the dopamine reward system in the brain also uses a version of the TD-learning algorithm.{{sfn, Christian, 2020, p=?{{sfn, Russell, Norvig, 2021, p=820{{sfn, Schultz, Dayan, Montague, 1997 TD learning would become highly influential in the 21st century, used in both AlphaGo and AlphaZero.{{sfn, Russell, Norvig, 2021, p=822
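The temporal-difference idea can be stated in one line of code: the value estimate of a state is nudged toward the observed reward plus the discounted estimate of the next state. The sketch below applies tabular TD(0) to a small random-walk chain; the environment, step size and episode count are invented for illustration and are not taken from Sutton and Barto's experiments.

<syntaxhighlight lang="python">
import random

random.seed(0)

# A 5-state random-walk chain; stepping off the right end yields reward 1, the left end 0.
n_states = 5
alpha, gamma = 0.1, 1.0
V = [0.0] * n_states          # one value estimate per state

for episode in range(5000):
    s = n_states // 2         # start in the middle
    while True:
        s_next = s + random.choice((-1, 1))
        terminal = s_next < 0 or s_next >= n_states
        if terminal:
            reward, v_next = (1.0 if s_next >= n_states else 0.0), 0.0
        else:
            reward, v_next = 0.0, V[s_next]
        # TD(0) update: move V(s) toward the TD target, reward + gamma * V(s')
        V[s] += alpha * (reward + gamma * v_next - V[s])
        if terminal:
            break
        s = s_next

print([round(v, 2) for v in V])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
</syntaxhighlight>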


Second AI winter

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. As dozens of companies failed, the perception in the business world was that the technology was not viable.{{sfn, Newquist, 1994, pp=501, 511 The damage to AI's reputation would last into the 21st century. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".{{sfn, McCorduck, 2004, p=424 Over the next 20 years, AI consistently delivered working solutions to specific isolated problems. By the late 1990s, it was being used throughout the technology industry, although somewhat behind the scenes. The success was due to increasing computer power (Moore's law), to collaboration with other fields (such as mathematical optimization and statistics) and to the use of higher standards of scientific accountability. By 2000, AI had achieved some of its oldest goals. The field was both more cautious and more successful than it had ever been.


AI winter

The term "
AI winter" had been coined by researchers who survived the funding cuts of 1974 and feared that the enthusiasm for expert systems would end in another round of disappointment. Their fears were well founded. Desktop computers from Apple and IBM
had been steadily gaining speed and power, and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight. Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, and they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs). Expert systems proved useful, but only in a few special contexts. In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally". New leadership at
DARPA
had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.{{sfn, McCorduck, 2004, pp=430–431 By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation", would not be accomplished for another 30 years. As with other AI projects, expectations had run much higher than what was actually possible.{{efn, McCorduck writes "Two and a half decades later, we can see that the Japanese didn't quite meet all of those ambitious goals."{{sfn, McCorduck, 2004, p=441 Over 300 AI companies had shut down, gone bankrupt, or been acquired by the end of 1993, effectively ending the first commercial wave of AI.{{sfn, Newquist, 1994, p=440 In 1994, HP Newquist stated in ''The Brain Makers'' that "The immediate future of artificial intelligence—in its commercial form—seems to rest in part on the continued success of neural networks."{{sfn, Newquist, 1994, p=440


AI behind the scenes

In the 1990s, algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved many very difficult problems{{efn, See {{slink, Applications of artificial intelligence, Computer science and their solutions proved to be useful throughout the technology industry,{{sfn, NRC, 1999, loc=Artificial Intelligence in the 90s{{sfn, Kurzweil, 2005, p=264 such as data mining, industrial robotics, logistics, speech recognition,{{sfn, The Economist, 2007 banking software,{{sfn, CNN, 2006 medical diagnosis{{sfn, CNN, 2006 and Google's search engine.{{sfn, Olsen, 2004{{sfn, Olsen, 2006 The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."{{sfn, CNN, 2006 Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, "cognitive systems" or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding.{{sfn, The Economist, 2007{{sfn, Tascarella, 2006{{sfn, Newquist, 1994, p=532 In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the ''New York Times'' reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{sfn, Markoff, 2005


Mathematical rigor, greater collaboration and a narrow focus

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.{{sfn, McCorduck, 2004, pp=486–487{{sfn, Russell, Norvig, 2021, pp=24–25 Most of the new directions in AI relied heavily on mathematical models, including artificial neural networks, probabilistic reasoning, soft computing and reinforcement learning. In the 1990s and 2000s, many other highly mathematical tools were adapted for AI. These tools were applied to machine learning, perception and mobility. There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like statistics, mathematics, electrical engineering, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Another key reason for the success in the 1990s was that AI researchers focused on specific problems with verifiable solutions (an approach later derided as ''narrow AI''). This provided useful tools in the present, rather than speculation about the future.


Intelligent agents

A new paradigm called "intelligent agents" became widely accepted during the 1990s.{{sfn, McCorduck, 2004, pp=471–478{{sfn, Russell, Norvig, 2021, loc=chpt. 2{{efn, Russell and Norvig wrote "The whole-agent view is now widely accepted."{{sfn, Russell, Norvig, 2021, p=61 Although earlier researchers had proposed modular "divide and conquer" approaches to AI,{{efn, Carl Hewitt's Actor model anticipated the modern definition of intelligent agents. {{Harv, Hewitt, Bishop, Steiger, 1973 Both John Doyle {{Harv, Doyle, 1983 and
Marvin Minsky
's popular classic ''The Society of Mind'' {{Harv, Minsky, 1986 used the word "agent". Other "modular" proposals included Rodney Brooks's subsumption architecture, object-oriented programming and others. the intelligent agent did not reach its modern form until Judea Pearl,
Allen Newell
, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI.{{sfn, Russell, Norvig, 2021, p=61 When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete. An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents".{{efn, This is how the most widely used textbooks of the 21st century define artificial intelligence, such as Russell and Norvig, 2021; Padgham and Winikoff, 2004; Jones, 2007; Poole and Mackworth, 2017.{{sfn, Russell, Norvig, 2021, p=61 This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence. The paradigm gave researchers license to study isolated problems and to disagree about methods, but still retain hope that their work could be combined into an agent architecture that would be capable of general intelligence.{{sfn, McCorduck, 2004, p=478
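In code, the paradigm amounts to a very small contract: an agent maps what it perceives to an action chosen in pursuit of an objective. The thermostat below is a deliberately trivial sketch of that contract; all names and thresholds are invented, and it is not a definition taken from any particular textbook implementation.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class ThermostatAgent:
    """A trivially simple 'intelligent agent': perceive the environment, act toward a goal."""
    target: float = 20.0

    def act(self, percept: float) -> str:
        # The action chosen given the percept and the agent's objective.
        if percept < self.target - 0.5:
            return "heat_on"
        if percept > self.target + 0.5:
            return "heat_off"
        return "do_nothing"

agent = ThermostatAgent()
for temperature in (18.0, 20.1, 22.3):
    print(temperature, "->", agent.act(temperature))
</syntaxhighlight>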


Milestones and Moore's law

On May 11, 1997, IBM's Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.{{sfn, McCorduck, 2004, pp=480–483 In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to traffic laws.{{sfn, Russell, Norvig, 2021, p=28 These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 1990s.{{efn, Ray Kurzweil wrote that the improvement in computer chess "is governed only by the brute force expansion of computer hardware."{{sfn, Kurzweil, 2005, p=274 In fact, Deep Blue's computer was 10 million times faster than the
Ferranti Mark 1
that
Christopher Strachey
taught to play chess in 1951.{{efn, Cycle time of
Ferranti Mark 1
was 1.2 milliseconds, which is arguably equivalent to about 833 flops. Deep Blue ran at 11.38 gigaflops (and this does not even take into account Deep Blue's special-purpose hardware for chess). ''Very'' approximately, these differ by a factor of 10<sup>7</sup>.{{citation needed, date=August 2024 This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.
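The footnoted comparison is simple arithmetic and can be reproduced directly; the figures below are the ones quoted in the note (a 1.2 millisecond cycle time and 11.38 gigaflops), and the rough factor of 10<sup>7</sup> falls out of the division.

<syntaxhighlight lang="python">
mark1_ops_per_sec = 1 / 1.2e-3   # ~833 operations per second (1.2 ms cycle time)
deep_blue_flops = 11.38e9        # 11.38 gigaflops

print(round(mark1_ops_per_sec))                      # 833
print(f"{deep_blue_flops / mark1_ops_per_sec:.2e}")  # ≈ 1.37e+07, i.e. about 10^7
</syntaxhighlight>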


Big data, deep learning, AGI (2005–2017)

In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers (as predicted by Moore's law) and advanced
machine learning
techniques were successfully applied to many problems throughout the economy. A turning point was the success of
deep learning
around 2012, which improved the performance of machine learning on many tasks, including image and video processing, text analysis, and speech recognition.{{sfn, LeCun, Bengio, Hinton, 2015 Investment in AI increased along with its capabilities, and by 2016, the market for AI-related products, hardware, and software reached more than $8 billion, and the ''New York Times'' reported that interest in AI had reached a "frenzy".{{sfn, Lohr, 2016 In 2002, Ben Goertzel and others became concerned that AI had largely abandoned its original goal of producing versatile, fully intelligent machines, and argued in favor of more direct research into artificial general intelligence. By the mid-2010s several companies and institutions had been founded to pursue artificial general intelligence (AGI), such as OpenAI and Google's DeepMind. During the same period, new insights into superintelligence raised concerns that AI was an existential threat. The risks and unintended consequences of AI technology became an area of serious academic research after 2016.


Big data and big machines

{{See also, List of datasets for machine-learning research The success of machine learning in the 2000s depended on the availability of vast amounts of training data and faster computers.{{sfn, Russell, Norvig, 2021, pp=26-27 Russell and Norvig wrote that the "improvement in performance obtained by increasing the size of the data set by two or three orders of magnitude outweighs any improvement that can be made by tweaking the algorithm."{{sfn, Russell, Norvig, 2021, p=26 Geoffrey Hinton recalled that back in the 1990s, the problem was that "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow." This was no longer true by 2010. The most useful data in the 2000s came from curated, labeled data sets created specifically for machine learning and AI. In 2007, a group at the University of Massachusetts Amherst released Labeled Faces in the Wild, an annotated set of images of faces that was widely used to train and test face recognition systems for years afterwards.{{sfn, Christian, 2020, p=31 Fei-Fei Li developed ImageNet, a database of three million images captioned by volunteers using the Amazon Mechanical Turk. Released in 2009, it was a useful body of training data and a benchmark for testing the next generation of image processing systems.{{sfn, Christian, 2020, pp=22-23{{sfn, Russell, Norvig, 2021, p=26 Google released word2vec in 2013 as an open source resource. It used large amounts of text scraped from the internet and word embedding to create a numeric vector to represent each word. Users were surprised at how well it was able to capture word meanings; for example, ordinary vector arithmetic would give equivalences like China + river ≈ Yangtze and London − England + France ≈ Paris.{{sfn, Christian, 2020, p=6 This technique in particular would be essential for the development of large language models in the late 2010s. The explosive growth of the internet gave machine learning programs access to billions of pages of text and images that could be scraped. And, for specific problems, large privately held databases contained the relevant data. McKinsey Global Institute reported that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".{{sfn, McKinsey & Co, 2011 This collection of information was known in the 2000s as ''big data''. In a ''Jeopardy!'' exhibition match in February 2011,
IBM
's question-answering system Watson defeated the two best ''Jeopardy!'' champions, Brad Rutter and Ken Jennings, by a significant margin.{{sfn, Markoff, 2011 Watson's expertise would have been impossible without the information available on the internet.{{sfn, Russell, Norvig, 2021, p=26
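The word-vector "equivalences" described earlier in this section are just vector addition followed by a nearest-neighbour search under cosine similarity. The sketch below uses tiny invented 3-dimensional vectors purely to show the mechanics; real word2vec embeddings are learned from text and have hundreds of dimensions.

<syntaxhighlight lang="python">
import numpy as np

# Invented toy embeddings purely to show the mechanics; real vectors come from training.
vectors = {
    "London": np.array([1.0, 0.1, 0.0]),
    "England": np.array([0.9, 0.0, 0.0]),
    "France": np.array([0.0, 0.9, 0.0]),
    "Paris": np.array([0.1, 1.0, 0.0]),
    "Yangtze": np.array([0.0, 0.0, 1.0]),
}

def most_similar(query, exclude):
    # Return the stored word whose vector has the highest cosine similarity to the query.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude), key=lambda w: cos(vectors[w], query))

# "London - England + France" should land nearest to "Paris".
query = vectors["London"] - vectors["England"] + vectors["France"]
print(most_similar(query, exclude={"London", "England", "France"}))  # Paris
</syntaxhighlight>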


Deep learning

{{Main, Deep learning In 2012, AlexNet, a
deep learning
model,{{efn, AlexNet had 650,000 neurons and was trained using ImageNet, augmented with reversed, cropped and tinted images. The model also used Geoffrey Hinton's dropout technique and a rectified linear output function, both relatively new developments at the time.{{sfn, Christian, 2020, pp=23-24 developed by Alex Krizhevsky, won the ImageNet Large Scale Visual Recognition Challenge, with significantly fewer errors than the second-place winner.{{sfn, Christian, 2020, p=24{{sfn, Russell, Norvig, 2021, p=26 Krizhevsky worked with Geoffrey Hinton at the University of Toronto.{{efn, Several other laboratories had developed systems that, like AlexNet, used GPU chips and performed nearly as well,{{sfn, Schmidhuber, 2022 but AlexNet proved to be the most influential. This was a turning point in machine learning: over the next few years dozens of other approaches to image recognition were abandoned in favor of
deep learning
.{{sfn, Russell, Norvig, 2021, pp=26-27 Deep learning uses a multi-layer
perceptron
. Although this architecture has been known since the 1960s, getting it to work requires powerful hardware and large amounts of training data.{{sfn, Russell, Norvig, 2021, p=27 Before these became available, improving the performance of image processing systems required hand-crafted ''ad hoc'' features that were difficult to implement.{{sfn, Russell, Norvig, 2021, p=27 Deep learning was simpler and more general.{{efn, See {{section link, History of AI, The problems above, where Hans Moravec predicted that raw power would eventually make AI "easy". Deep learning was applied to dozens of problems over the next few years (such as speech recognition, machine translation, medical diagnosis, and game playing). In every case it showed enormous gains in performance.{{sfn, Russell, Norvig, 2021, pp=26-27 Investment and interest in AI boomed as a result.{{sfn, Russell, Norvig, 2021, pp=26-27
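The contrast between hand-crafted features and learned ones can be seen in the basic operation of a convolutional layer: a small filter is slid across the image, and in a deep network the filter weights are learned from data rather than designed by hand. A minimal sketch of that sliding-window operation follows; the image and the edge filter are invented for illustration, and AlexNet itself was far larger and ran on GPUs.

<syntaxhighlight lang="python">
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A hand-crafted vertical-edge filter; in a CNN these weights are learned from data instead.
edge_filter = np.array([[-1.0, 1.0]])

print(conv2d(image, edge_filter))  # responds strongly where the image changes from 0 to 1
</syntaxhighlight>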


The alignment problem

It became fashionable in the 2000s to begin talking about the future of AI again and several popular books considered the possibility of superintelligent machines and what they might mean for human society. Some of this was optimistic (such as Ray Kurzweil's ''The Singularity is Near''), but others, such as Nick Bostrom and Eliezer Yudkowsky, warned that a sufficiently powerful AI was an existential threat to humanity.{{sfn, Russell, Norvig, 2021, pp=33, 1004 The topic became widely covered in the press and many leading intellectuals and politicians commented on the issue. AI programs in the 21st century are defined by their goals – the specific measures that they are designed to optimize. Nick Bostrom's influential 2014 book ''Superintelligence'' argued that, if one isn't careful about defining these goals, the machine may cause harm to humanity in the process of achieving a goal. Stuart J. Russell used the example of an intelligent robot that kills its owner to prevent it from being unplugged, reasoning "you can't fetch the coffee if you're dead".{{sfn, Russell, 2020 (This problem is known by the technical term "instrumental convergence".) The solution is to align the machine's goal function with the goals of its owner and humanity in general. Thus, the problem of mitigating the risks and unintended consequences of AI became known as "the value alignment problem" or AI alignment.{{sfn, Russell, Norvig, 2021, pp=5, 33, 1002-1003 At the same time, machine learning systems had begun to have disturbing unintended consequences. Cathy O'Neil explained how statistical algorithms had been among the causes of the 2008 economic crash,{{sfn, O'Neill, 2016 Julia Angwin of ProPublica argued that the COMPAS system used by the criminal justice system exhibited racial bias under some measures,{{sfn, Christian, 2020, pp=60-61{{efn, Later research showed that there was no way for the system to avoid a measurable racial bias – fixing one form of bias would necessarily introduce another.{{sfn, Christian, 2020, pp=67-70 others showed that many machine learning systems exhibited some form of racial bias,{{sfn, Christian, 2020, pp=6-7, 25 and there were many other examples of dangerous outcomes that had resulted from machine learning systems.{{efn, A short summary of topics would include privacy, surveillance, copyright, misinformation and deep fakes, filter bubbles and partisanship, algorithmic bias, misleading results that go undetected without algorithmic transparency, the right to an explanation, misuse of autonomous weapons and technological unemployment. See {{section link, Artificial intelligence, Ethics In 2016, the election of Donald Trump and the controversy over the COMPAS system illuminated several problems with the current technological infrastructure, including misinformation, social media algorithms designed to maximize engagement, the misuse of personal data and the trustworthiness of predictive models.{{sfn, Christian, 2020, p=67 Issues of fairness and unintended consequences became significantly more popular at AI conferences, publications vastly increased, funding became available, and many researchers refocused their careers on these issues.
The value alignment problem became a serious field of academic study.{{sfn, Christian, 2020, pp=67, 73, 117{{efn, Brian Christian wrote "ProPublica's study [of COMPAS in 2015] legitimated concepts like fairness as valid topics for research"{{sfn, Christian, 2020, p=73


Artificial general intelligence research

In the early 2000s, several researchers became concerned that mainstream AI was too focused on "measurable performance in specific applications"{{sfn, Russell, Norvig, 2021, p=32 (known as "narrow AI") and had abandoned AI's original goal of creating versatile, fully intelligent machines. An early critic was Nils Nilsson in 1995, and similar opinions were published by AI elder statesmen John McCarthy, Marvin Minsky, and Patrick Winston in 2007–2009. Minsky organized a symposium on "human-level AI" in 2004.{{sfn, Russell, Norvig, 2021, p=32 Ben Goertzel adopted the term "artificial general intelligence" for the new sub-field, founding a journal and holding conferences beginning in 2008.{{sfn, Russell, Norvig, 2021, p=33 The new field grew rapidly, buoyed by the continuing success of artificial neural networks and the hope that they were the key to AGI. Several competing companies, laboratories and foundations were founded to develop AGI in the 2010s. DeepMind was founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman, with funding from Peter Thiel and later Elon Musk. The founders and financiers were deeply concerned about AI safety and the existential risk of AI. DeepMind's founders had a personal connection with Yudkowsky, and Musk was among those actively raising the alarm.{{sfn, Metz, Weise, Grant, Isaac, 2023 Hassabis was both worried about the dangers of AGI and optimistic about its power; he hoped they could "solve AI, then solve everything else."{{sfn, Russell, Norvig, 2021, p=31 The ''New York Times'' wrote in 2023: "At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about AI are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep AI from endangering Earth."{{sfn, Metz, Weise, Grant, Isaac, 2023 In 2012, Geoffrey Hinton (who had been leading neural network research since the 1980s) was approached by Baidu, which wanted to hire him and all his students for an enormous sum. Hinton decided to hold an auction and, at a Lake Tahoe AI conference, they sold their services to Google for $44 million. Hassabis took notice and sold DeepMind to Google in 2014, on the condition that it would not accept military contracts and would be overseen by an ethics board.{{sfn, Metz, Weise, Grant, Isaac, 2023 Larry Page of Google, unlike Musk and Hassabis, was an optimist about the future of AI. Musk and Page became embroiled in an argument about the risk of AGI at Musk's 2015 birthday party. They had been friends for decades but stopped speaking to each other shortly afterwards. Musk attended the one and only meeting of DeepMind's ethics board, where it became clear that Google was uninterested in mitigating the harm of AGI. Frustrated by his lack of influence, he founded OpenAI in 2015, enlisting Sam Altman to run it and hiring top scientists. OpenAI began as a non-profit, "free from the economic incentives that were driving Google and other corporations."{{sfn, Metz, Weise, Grant, Isaac, 2023 Musk became frustrated again and left the company in 2018. OpenAI turned to Microsoft for continued financial support and Altman and OpenAI formed a for-profit version of the company with more than $1 billion in financing.{{sfn, Metz, Weise, Grant, Isaac, 2023 In 2021, Dario Amodei and 14 other scientists left OpenAI over concerns that the company was putting profits above safety.
They formed Anthropic, which soon had $6 billion in financing from Google and Amazon.{{sfn, Metz, Weise, Grant, Isaac, 2023


Large language models, AI boom (2017–present)

{{Main, AI boom The AI boom started with the initial development of key architectures and algorithms such as the transformer architecture in 2017, leading to the scaling and development of large language models exhibiting human-like traits of knowledge, attention and creativity. A new AI era began in the early 2020s with the public release of scaled large language models (LLMs) such as
ChatGPT
.


Transformer architecture and large language models

{{Main, Large language models In 2017, the transformer architecture was proposed by Google researchers. It exploits an attention mechanism and became widely used in large language models.{{sfn, Murgia, 2023 Large language models, based on the transformer, were developed by AGI companies: OpenAI released GPT-3 in 2020, and DeepMind released Gato in 2022. These are foundation models: they are trained on vast quantities of unlabeled data and can be adapted to a wide range of downstream tasks.{{citation needed, date=August 2024 These models can discuss a huge number of topics and display general knowledge. The question naturally arises: are these models an example of artificial general intelligence? Bill Gates was skeptical of the new technology and the hype that surrounded AGI. However, Altman presented him with a live demo of GPT-4 passing an advanced biology test, and Gates was convinced.{{sfn, Metz, Weise, Grant, Isaac, 2023 In 2023, Microsoft Research tested the model with a large variety of tasks, and concluded that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system".{{sfn, Bubeck, Chandrasekaran, Eldan, Gehrke, 2023 In 2024, OpenAI announced o3, an advanced reasoning model. On the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) benchmark developed by François Chollet in 2019, the model achieved an unofficial score of 87.5% on the semi-private test, surpassing the typical human score of 84%. The benchmark is supposed to be a necessary, but not sufficient, test for AGI. Speaking of the benchmark, Chollet has said "You’ll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible."
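The attention mechanism at the core of the transformer can be written in a few lines: each query is compared with every key, the scores are converted to weights with a softmax, and the output is the weighted sum of the values. The sketch below shows only this single operation, with random matrices standing in for learned projections; a real transformer stacks many such layers with multiple heads.

<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of the 2017 transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # toy sizes, chosen for illustration
Q, K, V = (rng.normal(size=(seq_len, d_model)) for _ in range(3))

print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8): one output vector per position
</syntaxhighlight>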


AI boom

{{Main, AI boom Investment in AI grew exponentially after 2020, with venture capital funding for generative AI companies increasing dramatically. Total AI investments rose from $18 billion in 2014 to $119 billion in 2021, with generative AI accounting for approximately 30% of investments by 2023. According to metrics from 2017 to 2021, the United States outranked the rest of the world in terms of venture capital funding, number of startups, and AI patents granted.{{Cite web , last=Frank , first=Michael , date=September 22, 2023 , title=US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order , url=https://thediplomat.com/2023/09/us-leadership-in-artificial-intelligence-can-shape-the-21st-century-global-order/ , access-date=2023-12-08 , website=The Diplomat , language=en-US The commercial AI scene became dominated by American Big Tech companies, whose investments in this area surpassed those from U.S.-based venture capitalists. OpenAI's valuation reached $86 billion by early 2024, while NVIDIA's market capitalization surpassed $3.3 trillion by mid-2024, making it the world's largest company by market capitalization as the demand for AI-capable GPUs surged. 15.ai, launched in March 2020 by an anonymous
MIT
researcher, was one of the earliest examples of
generative AI
gaining widespread public attention during the initial stages of the AI boom. The free web application demonstrated the ability to clone character voices using neural networks with minimal training data, requiring as little as 15 seconds of audio to reproduce a voice—a capability later corroborated by OpenAI in 2024. The service went viral on social media platforms in early 2021, allowing users to generate speech for characters from popular media franchises, and became particularly notable for its pioneering role in popularizing AI voice synthesis for creative content and memes. {{Quote box , quote=Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. , author=''Pause Giant AI Experiments: An Open Letter'' , source= , align=right , width=500px


Advent of AI for public use

ChatGPT
was launched on November 30, 2022, marking a pivotal moment in artificial intelligence's public adoption. Within days of its release it went viral, gaining over 100 million users in two months and becoming the fastest-growing consumer software application in history.{{Cite news , last=Milmo , first=Dan , date=December 2, 2023 , title=ChatGPT reaches 100 million users two months after launch , language=en-GB , work=The Guardian The chatbot's ability to engage in human-like conversations, write code, and generate creative content captured public imagination and led to rapid adoption across various sectors, including education, business, and research. ChatGPT's success prompted unprecedented responses from major technology companies: Google declared a "code red" and rapidly launched Gemini (formerly known as Bard), while Microsoft incorporated the technology into Bing Chat. The rapid adoption of these AI technologies sparked intense debate about their implications. Notable AI researchers and industry leaders voiced both optimism and concern about the accelerating pace of development. In March 2023, over 20,000 signatories, including computer scientist Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed ''Pause Giant AI Experiments: An Open Letter'', calling for a pause in advanced AI development, citing "profound risks to society and humanity." However, other prominent researchers like Juergen Schmidhuber took a more optimistic view, emphasizing that the majority of AI research aims to make "human lives longer and healthier and easier."{{cite news, url=https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says, title=Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says, last1=Taylor, first1=Josh, date=May 7, 2023, work=The Guardian By mid-2024, however, the financial sector began to scrutinize AI companies more closely, particularly questioning their capacity to produce a return on investment commensurate with their massive valuations. Some prominent investors raised concerns about market expectations becoming disconnected from fundamental business realities. Jeremy Grantham, co-founder of GMO LLC, warned investors to "be quite careful" and drew parallels to previous technology-driven market bubbles. Similarly, Jeffrey Gundlach, CEO of DoubleLine Capital, explicitly compared the AI boom to the dot-com bubble of the late 1990s, suggesting that investor enthusiasm might be outpacing realistic near-term capabilities and revenue potential. These concerns were amplified by the substantial market capitalizations of AI-focused companies, many of which had yet to demonstrate sustainable profitability models. In March 2024, Anthropic released the Claude 3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus. The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google. In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, and image analysis.


2024 Nobel Prizes

In 2024, the Royal Swedish Academy of Sciences awarded Nobel Prizes in recognition of groundbreaking contributions to artificial intelligence. The recipients included:
* In physics: John Hopfield for his work on physics-inspired Hopfield networks, and Geoffrey Hinton for foundational contributions to Boltzmann machines and
deep learning
.
* In chemistry: David Baker, Demis Hassabis, and John Jumper for their advances in protein structure prediction. See AlphaFold.


Further study and development of AI

In January 2025, OpenAI announced a new AI, ChatGPT Gov, specifically designed for US government agencies to use securely. According to OpenAI's website, ChatGPT Gov is designed to streamline government agencies' access to OpenAI's frontier models. OpenAI said that agencies could use ChatGPT Gov on a Microsoft Azure cloud or Azure Government cloud, "on top of Microsoft’s Azure’s OpenAI Service." OpenAI's announcement stated that "Self-hosting ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance requirements, such as stringent cybersecurity frameworks (IL5, CJIS, ITAR, FedRAMP High). Additionally, we believe this infrastructure will expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data."


Robotic integration and practical applications of artificial intelligence (2025–present)

Advanced artificial intelligence (AI) systems, capable of understanding and responding to human dialogue with high accuracy, have matured to the point of enabling integration with robotics, transforming industries such as manufacturing, household automation, healthcare, public services, and materials research. AI applications also accelerate scientific research through advanced data analysis and hypothesis generation. Countries including China, the United States, and Japan have invested significantly in policies and funding to deploy AI-powered robots, addressing labor shortages, boosting innovation, and enhancing efficiency, while implementing regulatory frameworks to ensure ethical and safe development.


China

The year 2025 has been described by some commentators as the "Year of AI Robotics," reflecting the accelerating integration of artificial intelligence (AI) and robotics. In 2025, China invested approximately 730 billion yuan (roughly $100 billion) to advance AI and robotics in smart manufacturing and healthcare. The "14th Five-Year Plan" (2021–2025) prioritized service robots, with AI systems enabling robots to perform complex tasks like assisting in surgeries or automating factory assembly lines. For example, AI-powered humanoid robots in Chinese hospitals can interpret patient requests, deliver supplies, and assist nurses with routine tasks, demonstrating that existing conversational AI capabilities are robust enough for practical robotic applications. Starting in September 2025, China mandated the labeling of AI-generated content to ensure transparency and public trust in these technologies.


United States

In January 2025, a significant development in AI infrastructure investment occurred with the formation of Stargate LLC. The joint venture, created by OpenAI, SoftBank, Oracle, and MGX, announced plans to invest US$500 billion in AI infrastructure across the United States by 2029, starting with US$100 billion, in order to support the re-industrialization of the United States and provide a strategic capability to protect the national security of America and its allies. The venture was formally announced by U.S. President Donald Trump on January 21, 2025, with SoftBank CEO Masayoshi Son appointed as chairman.{{Cite news , title=OpenAI, SoftBank, Oracle to invest US$500 BILLION in AI, Trump says. , url=https://www.reuters.com/technology/artificial-intelligence/openai-softbank-oracle-invest-500-bln-ai-trump-says-2025-01-21/ , access-date=January 22, 2025 , work=Reuters In 2025, the U.S. government allocated approximately $2 billion to integrate AI and robotics in manufacturing and logistics, leveraging AI's ability to process natural language and execute user instructions. State governments supplemented this with funding for service robots, such as those deployed in warehouses to fulfill verbal commands for inventory management or in eldercare facilities to respond to residents' requests for assistance. These applications suggest that merging advanced AI, already proficient in human interaction, with robotic hardware is a practical step forward. Some funds were directed to defense, including lethal autonomous weapons and military robots. In January 2025, Executive Order 14179 established an "AI Action Plan" to accelerate innovation and deployment of these technologies.


Impact

In the 2020s, increased investment in AI by governments and organizations worldwide accelerated the advancement of artificial intelligence, contributing to scientific breakthroughs, gains in workforce productivity, and the automation of complex tasks across industries.{{Cite web , url=https://www.reuters.com/world/china/chinas-ai-powered-humanoid-robots-aim-transform-manufacturing-2025-05-13/ , title=China's AI-powered humanoid robots aim to transform manufacturing , publisher=Reuters , date=2025-05-13 , access-date=2025-05-30 , language=en As advanced AI systems are integrated into more sectors, these developments are expected to further reshape manufacturing, service industries, and everyday life.


See also

* History of artificial neural networks
* History of knowledge representation and reasoning
* History of natural language processing
* Outline of artificial intelligence
* Progress in artificial intelligence
* Timeline of artificial intelligence
* Timeline of machine learning


Notes

{{notelist {{Reflist


References

{{refbegin {{divcol * {{citation , last1 = Bonner , first1 = Anthonny , title = The Art and Logic of Ramón Llull: A User's Guide , year = 2007 , publisher = Brill , isbn = 978-9004163256 * {{cite book , last1 = Bonner , first1 = Anthony , title = Doctor Illuminatus. A Ramon Llull Reader , chapter = Llull's Influence: The History of Lullism , year = 1985 , publisher = Princeton University Press * {{Citation , first = Rodney , last = Brooks , author-link = Rodney Brooks , year = 2002 , title = Flesh and Machines , publisher=Pantheon Books * {{Cite arXiv , title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 , first1=Sébastien, last1=Bubeck , first2=Varun, last2=Chandrasekaran , first3=Ronen, last3=Eldan , first4=Johannes, last4=Gehrke , first5=Eric, last5=Horvitz , first6=Ece, last6=Kamar , first7=Peter, last7=Lee , first8=Yin Tat, last8=Lee , first9=Yuanzhi, last9=Li , first10=Scott, last10=Lundberg , first11=Harsha, last11=Nori , first12=Hamid, last12=Palangi , first13=Marco Tulio, last13=Ribeiro , first14=Yi, last14=Zhang , date=22 March 2023, class=cs.CL , eprint=2303.12712 * {{citation , last1 = Carreras y Artau , first1 = Tomás , title = Historia de la filosofía española. Filosofía cristiana de los siglos XIII al XV , language= Spanish , year = 2018 , orig-year = 1939 , publisher = Forgotten Books , publication-place = Madrid , isbn =9781390433708 , volume = 1 * {{Cite book , last=Butler , first= E. M. (Eliza Marian), title=The myth of the magus, date=1979 , orig-date=1948, publisher=Cambridge University Press, isbn=0-521-22564-7, location=London, oclc=5063114 * {{Cite web , last = Clark , first = Scott , date = December 21, 2023 , title=The Era of AI: 2023's Landmark Year , url = https://www.cmswire.com/digital-experience/the-era-of-ai-end-of-year-ai-recap/ , access-date=28 January 2024 , website=CMSWire.com , language=en * {{cite web , last = Copeland , first = Jack , year = 1999 , title = A Brief History of Computing , url = http://www.alanturing.net/turing_archive/pages/Reference%20Articles/BriefHistofComp.html , website = AlanTuring.net * {{Cite journal , last1=Cave, first1=Stephen , last2=Dihal, first2=Kanta, date=2019, title=Hopes and fears for intelligent machines in fiction and reality, url=https://www.nature.com/articles/s42256-019-0020-9, journal=Nature Machine Intelligence, language=en, volume=1, issue=2, pages=74–78, doi=10.1038/s42256-019-0020-9, s2cid=150700981, issn=2522-5839 * {{cite book , last1=Cave , first1=S. , last2=Dihal , first2=K. , last3=Dillon , first3=S. , title=AI Narratives: A History of Imaginative Thinking about Intelligent Machines , publisher=Oxford University Press , year=2020 , isbn=978-0-19-884666-6 , url=https://books.google.com/books?id=T53SDwAAQBAJ&pg=PA56 , access-date=2 May 2023 * {{Cite book , last=Christian , first=Brian , author-link = Brian Christian , title=The Alignment Problem: Machine learning and human values , publisher=W. W. Norton & Company , year=2020 , isbn=978-0-393-86833-3 , oclc=1233266753 * {{cite book , last=Clark , first=K.L. 
, title=Logic and Data Bases , chapter=Negation as Failure , author-link=Keith Clark (computer scientist) , date=1977 , pages=293–322 , doi=10.1007/978-1-4684-3384-5_11 , location=Boston, MA , publisher=Springer US, isbn=978-1-4684-3386-9 * {{Cite web , last = Gates , first = Bill , author-link = Bill Gates , date = December 21, 2023 , title=This year signaled the start of a new era , url=https://www.linkedin.com/pulse/year-signaled-start-new-era-bill-gates-qbpfc , access-date=28 January 2024 , website=www.linkedin.com , language=en * {{Cite book , last=Goethe, first=Johann Wolfgang von , title=Faust; a tragedy. Translated, in the original metres ... by Bayard Taylor. Authorised ed., published by special arrangement with Mrs. Bayard Taylor. With a biographical introd, date=1890, publisher=London Ward, Lock , url=https://archive.org/details/fausttragedytran00goetuoft * {{Cite journal , last1=Hart , first1=Peter E. , last2=Nilsson , first2=Nils J. , last3=Perrault , first3=Ray , last4=Mitchell , first4=Tom , last5=Kulikowski , first5=Casimir A. , last6=Leake , first6=David B. , date=15 March 2003 , title=In Memoriam: Charles Rosen, Norman Nielsen, and Saul Amarel , url=https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1683 , journal=AI Magazine , language=en , volume=24 , issue=1 , pages=6 , doi=10.1609/aimag.v24i1.1683 , issn=2371-9621 * {{cite book , last1=Hayes, first1=P.J. , year=1981, chapter=The logic of frames, title=Readings in artificial intelligence, language=en, pages=451–458, editor-first=Morgan, editor-last=Kaufmann * {{citation , ref={{harvid, Jewish Encyclopedia, loc=GOLEM , title = GOLEM , website = The Jewish Encyclopedia , url=http://www.jewishencyclopedia.com/articles/6777-golem , access-date=15 March 2020 * {{Cite book , last=Hollander, first=Lee M., title=Heimskringla; history of the kings of Norway., publisher=Published for the American-Scandinavian Foundation by the University of Texas Press, year=1991, orig-year=1964, isbn=0-292-73061-6, location=Austin, oclc=638953 * {{Cite web , last=Kressel, first=Matthew , date=October 1, 2015 , url=https://www.matthewkressel.net/2015/10/01/36-days-of-judaic-myth-day-24-the-golem-of-prague/ , title=36 Days of Judaic Myth: Day 24, The Golem of Prague 2015, website=Matthew Kressel, language=en, access-date=15 March 2020 * {{Cite journal , last1=LeCun, first1=Yann , last2=Bengio, first2=Yoshua , last3=Hinton, first3=Geoffrey , title=Deep learning , journal=Nature, volume=521, issue=7553, pages=436–444 , doi=10.1038/nature14539 , pmid=26017442 , bibcode=2015Natur.521..436L , year=2015 , s2cid=3074096 , url=https://hal.science/hal-04206682/file/Lecun2015.pdf * {{Cite web , last=Lee , first=Adrienne , date=23 January 2024 , title=UT Designates 2024 'The Year of AI' , url=https://news.utexas.edu/2024/01/23/ut-designates-2024-the-year-of-ai/ , access-date=28 January 2024 , website=UT News , language=en-US * {{Cite book , last=Linden , first=Stanton J. , title=The alchemy reader : from Hermes Trismegistus to Isaac Newton, date=2003, publisher=Cambridge University Press, isbn=0-521-79234-7, location=New York, pages=Ch. 
18, oclc=51210362 * {{Citation, work=New York Times , title = IBM Is Counting on Its Bet on Watson, and Paying Big Money for It , first=Steve , last=Lohr , date=October 17, 2016 , url=https://www.nytimes.com/2016/10/17/technology/ibm-is-counting-on-its-bet-on-watson-and-paying-big-money-for-it.html?emc=edit_th_20161017&nl=todaysheadlines&nlid=62816440 * {{cite news , url=https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html , work=The New York Times , first=John , last=Markoff , title=On 'Jeopardy!' Watson Win Is All but Trivial , date=16 February 2011 * {{Cite web , last=Marr , first=Bernard , date=March 20, 2023 , title=Beyond The Hype: What You Really Need To Know About AI In 2023 , url=https://www.forbes.com/sites/bernardmarr/2023/03/20/beyond-the-hype-what-you-really-need-to-know-about-ai-in-2023/ , access-date=27 January 2024 , website=Forbes , language=en * {{cite journal , last=McCarthy , first=John , author-link=John McCarthy (computer scientist) , title=Review of ''The Question of Artificial Intelligence'' , journal=Annals of the History of Computing , volume=10 , number=3 , year=1988 , pages=224–229 , ref=none, collected in {{cite book , last=McCarthy , first=John , author-link=John McCarthy (computer scientist) , title=Defending AI Research: A Collection of Essays and Reviews , publisher=CSLI , year=1996 , chapter=10. Review of ''The Question of Artificial Intelligence'' * {{Cite journal , last1=McCulloch , first1=Warren S., last2=Pitts, first2=Walter, date=1 December 1943, title=A logical calculus of the ideas immanent in nervous activity, journal=Bulletin of Mathematical Biophysics, language=en, volume=5, issue=4, pages=115–133 , doi=10.1007/BF02478259, issn=1522-9602 * {{cite web , ref = {{harvid, McKinsey & Co, 2011 , date = May 1, 2011 , title = Big data: The next frontier for innovation, competition, and productivity , website = McKinsey.com , url = https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/big-data-the-next-frontier-for-innovation * {{cite news , first1 = Cade , last1 = Metz , first2 = Karen , last2 = Weise , first3 = Nico , last3 = Grant , first4 = Mike , last4 = Isaac , title = Ego, Fear and Money: How the A.I. 
Fuse Was Lit , date = December 3, 2023 , newspaper = The New York Times , url = https://www.nytimes.com/2023/12/03/technology/ai-openai-musk-page-altman.html * {{Cite journal , last = Miller , first = George , author-link = George Armitage Miller , title = The cognitive revolution: a historical perspective , journal = Trends in Cognitive Sciences , date = 2003 , volume = 7 , issue = 3 , pages = 141–144 , doi = 10.1016/s1364-6613(03)00029-9 , pmid = 12639696 , url = https://www.cs.princeton.edu/~rit/geo/Miller.pdf * {{cite book , last = Moravec , first = Hans , author-link = Hans Moravec , title = Robot: Mere Machine to Transcendent Mind , date = May 18, 2000 , publisher = Oxford University Press , isbn=9780195136302 * {{Cite book , last=Morford , first=Mark , title=Classical mythology , language=en , year=2007 , isbn=978-0-19-085164-4 , publisher=Oxford University Press , location=Oxford , pages=184 , oclc=1102437035 * {{Cite web , last=Murgia , first=Madhumita , date=23 July 2023 , title=Transformers: the Google scientists who pioneered an AI revolution , url=https://www.ft.com/content/37bb01af-ee46-4483-982f-ef3921436a50 , access-date=10 December 2023 , website=www.ft.com * {{Cite book , last=O'Neil , first=Cathy , date=September 6, 2016 , title=Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , publisher = Crown , isbn =978-0553418811 * {{Cite book , last=Nielson , first=Donald L. , title=A Heritage of Innovation: SRI's First Half Century , date=1 January 2005 , chapter=Chapter 4: The Life and Times of a Successful SRI Laboratory: Artificial Intelligence and Robotics , publisher=SRI International , edition=1st , language=English , isbn=978-0-9745208-0-3 , url=https://www.sri.com/publication/a-heritage-of-innovation-sris-first-half-century/ , chapter-url=https://www.sri.com/wp-content/uploads/2022/08/A-heritage-of-innovation-The-Life-and-Times-of-a-Successful-SRI-Laboratory-Artificial-Intelligence-and-Robotics.pdf * {{cite web , last = Nilsson , first = Nils J. , author-link = Nils J. Nilsson , year = 1984 , url = https://www.sri.com/wp-content/uploads/2021/12/635.pdf , title = The SRI Artificial Intelligence Center: A Brief History , publisher = Artificial Intelligence Center, SRI International , archive-url = https://web.archive.org/web/20220810142945/https://www.sri.com/wp-content/uploads/2021/12/635.pdf , archive-date = 10 August 2022 * {{Cite thesis , last = Olazaran Rodriguez , first = Jose Miguel , title = A historical sociology of neural network research , year = 1991 , institution = University of Edinburgh , url = https://era.ed.ac.uk/bitstream/handle/1842/20075/Olazaran-RodriguezJM_1991redux.pdf?sequence=1&isAllowed=y , archive-url = https://web.archive.org/web/20221111165150/https://era.ed.ac.uk/bitstream/handle/1842/20075/Olazaran-RodriguezJM_1991redux.pdf?sequence=1&isAllowed=y , url-status = dead , archive-date = 2022-11-11 See especially Chapters 2 and 3. * {{Cite journal , last=Piccinini, first=Gualtiero , date=1 August 2004, title=The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts's "Logical Calculus of Ideas Immanent in Nervous Activity", journal=Synthese, language=en, volume=141, issue=2, pages=175–215, doi=10.1023/B:SYNT.0000043018.52445.3e, s2cid=10442035, issn=1573-0964 * {{cite book , last=Porterfield , first=A.
, title=The Protestant Experience in America , publisher=Greenwood Press , series=American religious experience , year=2006 , isbn=978-0-313-32801-5 , url=https://books.google.com/books?id=V9VM9NEsqXwC&pg=PA136 , access-date=15 May 2023 , page=136 * {{cite journal , last1=Reiter, first1=R. , year=1978, title=On reasoning by default, journal=American Journal of Computational Linguistics, language=en, pages=29–37 * {{Cite book , last=Rhodios, first=Apollonios , title=The Argonautika: Expanded Edition, language=en, date=2007, publisher=University of California Press, isbn=978-0-520-93439-9, pages=355, oclc=811491744 * {{Cite journal , last1= Rose , first1= Allen , title= Lightning Strikes Mathematics , journal= Popular Science , pages= 83–86 , date= April 1946 , url=https://books.google.com/books?id=niEDAAAAMBAJ&q=eniac+intitle:popular+intitle:science&pg=PA83 , access-date=15 April 2012 * {{cite web , last1 = Rosen , first1 = Charles A. , author-link = Charles A. Rosen , last2 = Nilsson , first2 = Nils J. , author-link2 = Nils J. Nilsson , last3 = Adams , first3 = Milton B. , title = A research and development program in applications of intelligent automata to reconnaissance-phase I. (Proposal for Research SRI No. ESU 65-1) , date = 8 January 1965 , url = http://www.ai.sri.com/pubs/files/rosen65-esu65-1tech.pdf , publisher = Stanford Research Institute , archive-url = https://web.archive.org/web/20060316081320/http://www.ai.sri.com/pubs/files/rosen65-esu65-1tech.pdf , archive-date = 16 March 2006 * {{citation , last = Rosenblatt , first = Frank , author-link = Frank Rosenblatt , title = Principles of neurodynamics: Perceptrons and the theory of brain mechanisms , year = 1962 , volume = 55 , publisher = Spartan Books , location = Washington DC * {{Cite book , last=Russell , first=Stuart J. , url=https://www.penguinrandomhouse.com/books/566677/human-compatible-by-stuart-russell/ , title=Human compatible: Artificial intelligence and the problem of control , publisher=Penguin Random House , year=2020 , isbn=9780525558637 , oclc=1113410915 * {{cite book , last = Schaeffer , first = Jonathan , title = One Jump Ahead: Challenging Human Supremacy in Checkers , year = 1997 , publisher = Springer , isbn=978-0-387-76575-4 * {{cite web , title = Annotated History of Modern AI and Deep Learning , last = Schmidhuber , first = Jürgen , author-link = Jürgen Schmidhuber , year = 2022 , url = https://people.idsia.ch/~juergen/ * {{cite journal , last1 = Schultz , first1 = Wolfram , author1-link = Wolfram Schultz , last2 = Dayan , first2 = Peter , author2-link = Peter Dayan , last3 = Montague , first3 = P. Read , author3-link = P. Read Montague , title = A Neural Substrate of Prediction and Reward , date = March 14, 1997 , journal = Science , volume = 275 , issue = 5306 , pages = 1593–1599 , doi = 10.1126/science.275.5306.1593 , pmid = 9054347 * {{Cite book , last = Sejnowski , first=Terrence J. , title=The Deep Learning Revolution , date=23 October 2018 , publisher=The MIT Press , isbn=978-0-262-03803-4 , edition=1st , location=Cambridge, Massachusetts; London, England , pages=93–94 , language=English * {{Cite web , ref={{harvid, Talmud , url=https://www.sefaria.org/Sanhedrin.65b?lang=bi , title=Sanhedrin 65b , website=www.sefaria.org, access-date=15 March 2020 * {{Cite journal , last1=Widrow , first1=B. , last2=Lehr , first2=M.A.
, date=September 1990 , title=30 years of adaptive neural networks: perceptron, Madaline, and backpropagation , url=https://ieeexplore.ieee.org/document/58323 , journal=Proceedings of the IEEE , volume=78 , issue=9 , pages=1415–1442 , doi=10.1109/5.58323, s2cid=195704643 * {{Citation , last = Berlinski , first = David , title = The Advent of the Algorithm , year = 2000 , author-link = David Berlinski , publisher = Harcourt Books , isbn = 978-0-15-601391-8 , oclc = 46890682 , url = https://archive.org/details/adventofalgorith0000berl . * {{cite journal, last=Brooks, first=Rodney A., title=Elephants Don't Play Chess, journal=Robotics and Autonomous Systems, volume=6, year=1990, issue=1–2 , pages=3–15 , doi=10.1016/S0921-8890(05)80025-9, url=http://people.csail.mit.edu/brooks/papers/elephants.pdf * {{Citation , last=Buchanan , first=Bruce G. , title=A (Very) Brief History of Artificial Intelligence , date=Winter 2005 , url=http://www.aaai.org/AITopics/assets/PDF/AIMag26-04-016.pdf , magazine=AI Magazine , pages=53–60 , access-date=30 August 2007 , url-status=dead , archive-url=https://web.archive.org/web/20070926023314/http://www.aaai.org/AITopics/assets/PDF/AIMag26-04-016.pdf , archive-date=26 September 2007 . * {{Citation , last = Butler , first = Samuel , title = Darwin Among the Machines , date = 13 June 1863 , url = https://nzetc.victoria.ac.nz/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html , author-link = Samuel Butler (novelist) , work = The Press, Christchurch, New Zealand , access-date = 10 October 2008 . * {{Cite web , title = The John Gabriel Byrne Computer Science Collection , date = 8 December 2012 , last = Byrne , first = J. G. , url = https://scss.tcd.ie/SCSSTreasuresCatalog/miscellany/TCD-SCSS-X.20121208.002/TCD-SCSS-X.20121208.002.pdf/ , access-date = 8 August 2019 , archive-url = https://web.archive.org/web/20190416071721/https://www.scss.tcd.ie/SCSSTreasuresCatalog/miscellany/TCD-SCSS-X.20121208.002/TCD-SCSS-X.20121208.002.pdf , archive-date = 16 April 2019 , url-status = dead * {{Citation , ref={{harvid, CNN, 2006 , title=AI set to exceed human brain power , date=26 July 2006 , url=http://www.cnn.com/2006/TECH/science/07/24/ai.bostrom/ , work=CNN.com , access-date=16 October 2007 . * {{Citation , last1 = Colby , first1 = Kenneth M. , last2 = Watt , first2 = James B. , last3 = Gilbert , first3 = John P. , title = A Computer Method of Psychotherapy: Preliminary Communication , date = 1966 , journal = The Journal of Nervous and Mental Disease , volume = 142 , issue = 2 , pages = 148–152 , url = https://exhibits.stanford.edu/feigenbaum/catalog/hk334rq4790 , doi = 10.1097/00005053-196602000-00005 , pmid = 5936301 , s2cid = 36947398 . * {{Citation , last = Colby , first = Kenneth M. , title = Ten Criticisms of Parry , date = September 1974 , publisher = Stanford Artificial Intelligence Laboratory , id = Report No. STAN-CS-74-457 , url = http://i.stanford.edu/pub/cstr/reports/cs/tr/74/457/CS-TR-74-457.pdf , access-date = 17 June 2018 . * {{Citation , last = Couturat , first = Louis , author-link =Louis Couturat , title = La Logique de Leibniz , year = 1901 * {{Citation , last=Copeland , first=Jack , title=Micro-World AI , url=http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI06.html , year=2000 , author-link=Jack Copeland , access-date=8 October 2008 .
* {{Cite book , editor-last=Copeland , editor-first=Jack , title=The Essential Turing: the ideas that gave birth to the computer age , location=Oxford , publisher=Clarendon Press , year=2004 , isbn=0-19-825079-7 * {{Citation , last=Cordeschi , first=Roberto , title = The Discovery of the Artificial , year = 2002 , location=Dordrecht , publisher=Kluwer . * {{Crevier 1993 * {{Citation , last = Darrach , first = Brad , title=Meet Shaky, the First Electronic Person , date=20 November 1970 , magazine=Life Magazine , pages = 58–68 . * {{Citation , last = Doyle , first = J. , title = What is rational psychology? Toward a modern mental philosophy , year = 1983 , magazine = AI Magazine , volume= 4 , issue =3 , pages = 50–53 . * {{Citation , last=Dreyfus , first=Hubert , title = Alchemy and AI , year =1965 , author-link = Hubert Dreyfus , publisher = RAND Corporation Memo . * {{Citation , last=Dreyfus , first=Hubert , title = What Computers Can't Do , year =1972 , location = New York , publisher = Harper & Row , isbn = 978-0-06-090613-9 , oclc=5056816 , title-link=What Computers Can't Do . * {{cite book , last1=Dreyfus , first1=Hubert , author-link=Hubert Dreyfus , last2=Dreyfus , first2=Stuart , year=1986 , title=Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , publisher=Blackwell , location=Oxford, UK , isbn=978-0-02-908060-3 , url=https://archive.org/details/mindovermachinep00drey , access-date=22 August 2020 * {{Citation , last=The Economist , title=Are You Talking to Me? , date=7 June 2007 , url=http://www.economist.com/science/tq/displaystory.cfm?story_id=9249338 , magazine=The Economist , access-date=16 October 2008 . * {{Citation , last1 = Feigenbaum , first1 = Edward A. , title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World , year = 1983, last2=McCorduck , first2=Pamela , author-link = Edward Feigenbaum , publisher = Michael Joseph , isbn = 978-0-7181-2401-4 , title-link = Fifth generation computer . * {{Cite journal , last=Haigh , first=Thomas , date=December 2023 , title=There Was No 'First AI Winter' , url=https://dl.acm.org/doi/10.1145/3625833 , journal=Communications of the ACM , language=en , volume=66 , issue=12 , pages=35–39 , doi=10.1145/3625833 , issn=0001-0782 . * {{cite book , last=Haugeland , first=John , author-link=John Haugeland , year=1985 , title=Artificial Intelligence: The Very Idea , publisher=MIT Press , location=Cambridge, Mass. , isbn=978-0-262-08153-5 * {{Citation , last1=Hawkins , first1=Jeff , title=On Intelligence , year=2004 , last2=Blakeslee , first2=Sandra , author-link=Jeff Hawkins , location=New York, NY , publisher=Owl Books , isbn=978-0-8050-7853-4 , oclc=61273290 , title-link=On Intelligence . * {{Citation, last=Hebb , first=D.O., title=The Organization of Behavior , year=2002 , orig-year=1949 , author-link=Donald Olding Hebb, location=New York, publisher=Wiley , isbn=978-0-8058-4300-2, oclc=48871099 . * {{Citation , last1=Hewitt , first1=Carl , title=A Universal Modular Actor Formalism for Artificial Intelligence , url=http://dli.iiit.ac.in/ijcai/IJCAI-73/PDF/027B.pdf , year=1973 , last2=Bishop , last3=Steiger , first2=Peter , first3=Richard , author-link=Carl Hewitt , publisher=IJCAI , url-status=dead , archive-url=https://web.archive.org/web/20091229084457/http://dli.iiit.ac.in/ijcai/IJCAI-73/PDF/027B.pdf , archive-date=29 December 2009 * {{Citation , last = Hobbes , first = Thomas , title = Leviathan , year = 1651 , author-link=Thomas Hobbes , title-link = Leviathan (Hobbes book) .
* {{Citation , last = Hofstadter , first = Douglas , title = Gödel, Escher, Bach: An Eternal Golden Braid , date = 1999 , author-link = Douglas Hofstadter , orig-year=1979, publisher = Basic Books , isbn = 978-0-465-02656-2 , oclc = 225590743 , title-link = Gödel, Escher, Bach . * {{Citation , last = Howe , first = J. , title = Artificial Intelligence at Edinburgh University: a Perspective , date = November 1994 , url = http://www.inf.ed.ac.uk/about/AIhistory.html , access-date = 30 August 2007 . * {{cite book , last1=Kahneman , first1=Daniel , author-link=Daniel Kahneman , last2=Slovic , first2=Paul , last3=Tversky , first3=Amos , author3-link=Amos Tversky , year=1982 , title=Judgment under Uncertainty: Heuristics and Biases , publisher=Cambridge University Press , location=New York , isbn=978-0-521-28414-1 * {{Citation , last1 = Kaplan , first1 = Andreas , last2 = Haenlein , first2 = Michael , title = Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence , journal = Business Horizons , volume = 62 , pages = 15–25 , date = 2018 , doi = 10.1016/j.bushor.2018.08.004 , s2cid = 158433736 . * {{Citation , last=Kolata , first = G. , title=How can computers get common sense? , year=1982 , journal=Science , volume = 217 , issue= 4566 , pages=1237–1238 , bibcode = 1982Sci...217.1237K , doi = 10.1126/science.217.4566.1237 , pmid = 17837639 . * {{Citation , last = Kurzweil , first = Ray , title = The Singularity is Near , year = 2005 , author-link = Ray Kurzweil , publisher = Viking Press , isbn=978-0-14-303788-0 , oclc = 71826177 , title-link = The Singularity is Near . * {{Citation , last = Lakoff , first = George , title = Women, Fire, and Dangerous Things: What Categories Reveal About the Mind , year = 1987 , author-link = George Lakoff , publisher = University of Chicago Press , isbn = 978-0-226-46804-4 , url = https://archive.org/details/womenfiredangero00lako_0 . * {{Cite book , vauthors=Lakoff G, Johnson M , url=https://www.basicbooks.com/titles/george-lakoff/philosophy-in-the-flesh/9780465056743/ , title=Philosophy in the flesh: The embodied mind and its challenge to western thought , date=1999 , publisher=Basic Books , isbn=978-0-465-05674-3 * {{Citation , last1=Lenat , first1=Douglas , title = Building Large Knowledge-Based Systems , year = 1989 , last2=Guha , first2=R. V., author-link=Douglas Lenat , publisher = Addison-Wesley, isbn=978-0-201-51752-1 , oclc=19981533 . * {{Citation , last = Levitt , first = Gerald M. , title = The Turk, Chess Automaton, year = 2000, location = Jefferson, N.C. , publisher = McFarland, isbn = 978-0-7864-0778-1 .
* {{Citation , last = Lighthill , first = James , title = Artificial Intelligence: a paper symposium, year = 1973 , author-link=James Lighthill , contribution= Artificial Intelligence: A General Survey , publisher = Science Research Council * {{Citation , last = Lucas , first = John , title = Minds, Machines and Gödel , year = 1961 , author-link = John Lucas (philosopher) , journal = Philosophy , volume = 36 , issue = XXXVI , pages = 112–127 , doi = 10.1017/S0031819100057983 , s2cid = 55408480 , doi-access = free * {{cite book , last1=Luger , first1=George , last2=Stubblefield , first2=William , author2-link=William Stubblefield , year=2004 , title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving , publisher=Benjamin/Cummings , edition=5th , isbn=978-0-8053-4780-7 , url=https://archive.org/details/artificialintell0000luge , url-access=registration , access-date=17 December 2019 * {{Citation , last=Maker , first=Meg Houston , title=AI@50: AI Past, Present, Future , url=http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html , year=2006 , publisher=Dartmouth College , access-date=16 October 2008 , url-status=dead , archive-url=https://web.archive.org/web/20081008120238/http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html , archive-date=8 October 2008 * {{Citation , last=Markoff , first=John , title=Behind Artificial Intelligence, a Squadron of Bright Real People , date=14 October 2005 , url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?_r=1&ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ&oref=slogin , author-link=John Markoff , work=The New York Times , access-date=16 October 2008 * {{Citation , last1 = McCarthy , first1 = John , title = A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence , date = 31 August 1955 , url = http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html , last2 = Minsky , last3 = Rochester , last4 = Shannon , first2 = Marvin , first3 = Nathan , first4 = Claude , author-link = John McCarthy (computer scientist) , author3-link = Nathaniel Rochester (computer scientist) , author4-link = Claude Shannon , access-date = 16 October 2008 , archive-url = https://web.archive.org/web/20080930164306/http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html , archive-date = 30 September 2008 , url-status = dead * {{Citation , last1=McCarthy , first1=John , author-link = John McCarthy (computer scientist) , title=Machine Intelligence 4 , url=http://www-formal.stanford.edu/jmc/mcchay69/mcchay69.html , year=1969 , last2=Hayes , first2=P. J. , author2-link=Patrick J. Hayes , pages=463–502 , contribution=Some philosophical problems from the standpoint of artificial intelligence , publisher=Edinburgh University Press , editor-last=Meltzer , editor2-last=Michie , editor-first=B. J. , editor2-first=Donald , editor-link=Bernard Meltzer (computer scientist) , editor2-link=Donald Michie , access-date=16 October 2008 * {{cite web , last = McCarthy , first = John , author-link = John McCarthy (computer scientist) , year = 1974 , title = Review of Lighthill report , url =http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html * {{Citation , last=McCorduck , first=Pamela , title = Machines Who Think , year = 2004 , edition=2nd , location=Natick, MA , publisher=A. K. Peters, Ltd. , isbn=978-1-56881-205-2 , oclc=52197627 . * {{Citation , last1 = McCulloch , first1 = W. S.
, title = A logical calculus of the ideas immanent in nervous activity , year = 1943 , last2 = Pitts , first2 = W. , author-link = Warren McCulloch , journal= Bulletin of Mathematical Biophysics , volume= 5 , issue = 4, pages = 115–133 , doi = 10.1007/BF02478259 * {{Citation , last1 = Menabrea , first1 = Luigi Federico , title = Sketch of the Analytical Engine Invented by Charles Babbage, with notes upon the Memoir by the Translator , url = http://www.fourmilab.ch/babbage/sketch.html , year = 1843 , last2 = Lovelace , first2 = Ada , author2-link = Ada Lovelace , journal = Scientific Memoirs , volume = 3 , access-date = 29 August 2008 * {{Citation , last = Minsky , first = Marvin , title = Computation: Finite and Infinite Machines , year = 1967 , author-link=Marvin Minsky , location=Englewood Cliffs, N.J. , publisher = Prentice-Hall * {{Citation , last1 = Minsky , first1 = Marvin , title = Perceptrons: An Introduction to Computational Geometry , year = 1969 , last2 = Papert , first2 = Seymour , author-link = Marvin Minsky , author2-link = Seymour Papert , publisher = The MIT Press , isbn = 978-0-262-63111-2 , oclc = 16924756 , url = https://archive.org/details/perceptronsintro00mins * {{Citation , last = Minsky , first = Marvin , title = A Framework for Representing Knowledge , url = http://web.media.mit.edu/~minsky/papers/Frames/frames.html , year = 1974 , author-link = Marvin Minsky , access-date = 16 October 2008 , archive-date = 7 January 2021 , archive-url = https://web.archive.org/web/20210107162402/http://web.media.mit.edu/~minsky/papers/Frames/frames.html , url-status = dead * {{Citation , last = Minsky , first = Marvin , title = The Society of Mind , year = 1986 , author-link=Marvin Minsky , publisher = Simon and Schuster , isbn=978-0-671-65713-0 , oclc = 223353010 , title-link = The Society of Mind * {{Citation , last=Minsky , first=Marvin , title=It's 2001. Where Is HAL? , url=http://www.ddj.com/hpc-high-performance-computing/197700454?cid=RSSfeed_DDJ_AI , year=2001 , author-link=Marvin Minsky , publisher=Dr. Dobb's Technetcast , access-date=8 August 2009 * {{Citation, editor-last=Moor , editor-first=James , year=2003 , title=The Turing Test: The Elusive Standard of Artificial Intelligence , isbn=978-1-4020-1205-1, publisher=Kluwer Academic Publishers, location=Dordrecht * {{Citation , last = Moravec , first = Hans , title = The Role of Raw Power in Intelligence , url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html , year = 1976 , author-link = Hans Moravec , access-date = 16 October 2008 , archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html , archive-date = 3 March 2016 , url-status = dead * {{Citation , last = Moravec , first = Hans , title = Mind Children , year = 1988 , publisher = Harvard University Press , isbn = 978-0-674-57618-6 , oclc = 245755104 , url-access = registration , url = https://archive.org/details/mindchildren00hans * {{Cite web , title = 1907: was the first portable computer design Irish?
, last = Mulvihill , first = Mary , date = 17 October 2012 , url = http://ingeniousireland.ie/2012/10/1909-a-novel-irish-computer/ , website = Ingenious Ireland * {{cite book , last=Needham , first=Joseph , year=1986 , title=Science and Civilization in China: Volume 2 , location=Taipei , publisher=Caves Books Ltd * {{Citation , last1 = Newell , first1 = Allen , title=Computers and Thought , year = 1995 , orig-year = 1963 , last2 = Simon , first2=H. A. , author-link=Allen Newell , contribution=GPS: A Program that Simulates Human Thought, location= New York, publisher=McGraw-Hill , editor-last= Feigenbaum , editor2-last= Feldman , editor-first= E.A. , editor2-first= J. , isbn=978-0-262-56092-4 , oclc = 246968117 * {{Citation , last = Newquist , first = HP , title=The Brain Makers: Genius, Ego, And Greed in the Quest For Machines That Think , year = 1994 , author-link=HP Newquist , location= New York, publisher=Macmillan/SAMS , isbn=978-0-9885937-1-8 , oclc=313139906 * {{Citation, last=NRC, title=Funding a Revolution: Government Support for Computing Research, year=1999, author-link=United States National Research Council, chapter=Developments in Artificial Intelligence, publisher=National Academy Press, isbn=978-0-309-06278-7, oclc=246584055, chapter-url=https://archive.org/details/fundingrevolutio00nati * {{Citation , last=Nick , first=Martin , title=Al Jazari: The Ingenious 13th Century Muslim Mechanic , url=http://www.alshindagah.com/marapr2005/jaziri.html , year=2005 , publisher=Al Shindagah , access-date=16 October 2008 . * {{cite book , last = Nilsson , first = Nils , author-link = Nils John Nilsson , title = The Quest for Artificial Intelligence , date = October 30, 2009 , publisher = Cambridge University Press , isbn=978-0-521-12293-1 * {{Citation , last=O'Connor , first=Kathleen Malone , title=The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam , pages=1–435 , url=http://repository.upenn.edu/dissertations/AAI9503804 , year=1994 , publisher=University of Pennsylvania , access-date=10 January 2007 * {{Citation , last=Olsen , first=Stefanie , title=Newsmaker: Google's man behind the curtain , date=10 May 2004 , url=http://news.cnet.com/Googles-man-behind-the-curtain/2008-1024_3-5208228.html , publisher=CNET , access-date=17 October 2008 . * {{Citation , last=Olsen , first=Stefanie , title=Spying an intelligent search engine , date=18 August 2006 , url=http://news.cnet.com/Spying-an-intelligent-search-engine/2100-1032_3-6107048.html , publisher=CNET , access-date=17 October 2008 . * {{Citation , last = Pearl , first = J. , title = Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference , year = 1988 , author-link=Judea Pearl , location=San Mateo, California, publisher=Morgan Kaufmann , isbn=978-1-55860-479-7 , oclc = 249625842 . * {{Citation , last1 = Poole , first1 = David , title = Computational Intelligence: A Logical Approach , url = https://archive.org/details/computationalint00pool , year = 1998 , last2 = Mackworth , last3 = Goebel , first2 = Alan , first3 = Randy , publisher = Oxford University Press , isbn = 978-0-19-510270-3 . * {{cite news , title = Technology; Fuzzy Logic For Computers , first = Andrew , last = Pollack , date = October 11, 1984 , newspaper = The New York Times , url = https://www.nytimes.com/1984/10/11/business/technology-fuzzy-logic-for-computers.html * {{cite news , title = Fuzzy Computer Theory: How to Mimic the Mind?
, first = Andrew , last = Pollack , date = April 2, 1989 , newspaper = The New York Times , url = https://www.nytimes.com/1989/04/02/us/fuzzy-computer-theory-how-to-mimic-the-mind.html * {{citation , last = Torres Quevedo , first = Leonardo , title = Ensayos sobre Automática – Su definición. Extensión teórica de sus aplicaciones , journal = Revista de la Academia de Ciencias Exactas , language = Spanish , volume = 12 , pages = 391–418 , year = 1914 * {{citation , last = Torres Quevedo , first = Leonardo , year = 1915 , title = Essais sur l'Automatique – Sa définition. Étendue théorique de ses applications , journal = Revue Générale des Sciences Pures et Appliquées , language = French , volume = 2 , pages = 601–611 , url = https://diccan.com/dicoport/Torres.htm * {{citation , last = Randell , first = Brian , year = 1982 , title = From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush , website = fano.co.uk , url = http://www.fano.co.uk/ludgate/ , access-date = 29 October 2018 * {{Russell Norvig 2003. * {{Cite book , first1 = Stuart J. , last1 = Russell , author1-link = Stuart J. Russell , first2 = Peter , last2 = Norvig , author2-link = Peter Norvig , title=Artificial Intelligence: A Modern Approach , year = 2021 , edition = 4th , isbn = 978-0-13-461099-3 , lccn = 20190474 , publisher = Pearson , location = Hoboken * {{Citation , last =Samuel , first =Arthur L. , title =Some studies in machine learning using the game of checkers , date =July 1959 , url =http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74?OpenDocument , author-link =Arthur Samuel (computer scientist) , journal =IBM Journal of Research and Development , volume =3 , issue =3 , pages =210–219 , doi =10.1147/rd.33.0210 , access-date =20 August 2007 , citeseerx =10.1.1.368.2254 , s2cid =2126705 , archive-date =3 March 2016 , archive-url =https://web.archive.org/web/20160303191010/http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74?OpenDocument , url-status =dead . * {{citation , ref = {{harvid, Saygin, 2000 , first1 = A. P. , last1 = Saygin , first2 = I. , last2 = Cicekli , first3 = V. , last3 = Akman , year = 2000 , title = Turing Test: 50 Years Later , journal = Minds and Machines , volume = 10 , issue = 4 , pages = 463–518 , url = http://crl.ucsd.edu/~saygin/papers/MMTT.pdf , doi = 10.1023/A:1011288000451 , hdl = 11693/24987 , s2cid = 990084 , hdl-access = free , access-date = 7 January 2004 , archive-date = 9 April 2011 , archive-url = https://web.archive.org/web/20110409073501/http://crl.ucsd.edu/~saygin/papers/MMTT.pdf . Reprinted in {{harvtxt, Moor, 2003, pp=23–78. * {{Searle 1980. * {{Citation , title = Heuristic Problem Solving: The Next Advance in Operations Research , year = 1958 , last1 =Simon , last2=Newell , first1 = H. A. , first2=Allen , author-link=Herbert A. Simon , journal =Operations Research , volume=6 , pages =1–10 , doi =10.1287/opre.6.1.1 . * {{Citation , last= Simon, first = H. A. , title=The Shape of Automation for Men and Management , year = 1965 , location = New York , publisher =Harper & Row . * {{Citation , last = Skillings , first = Jonathan , title = Newsmaker: Getting machines to think like us , url = http://news.cnet.com/Getting-machines-to-think-like-us---page-2/2008-11394_3-6090207-2.html?tag=st.next , year = 2006 , publisher = CNET , access-date = 8 October 2008 .
* {{Citation , last=Tascarella , first=Patty , title=Robotics firms find fundraising struggle, with venture capital shy , date=14 August 2006 , url=http://www.bizjournals.com/pittsburgh/stories/2006/08/14/focus3.html?b=1155528000%5E1329573 , work=Pittsburgh Business Times , access-date=15 March 2016 . * {{Citation , last=Turing , first=Alan , title=On Computable Numbers, with an Application to the Entscheidungsproblem , date=1936–1937 , url=http://www.abelard.org/turpap2/tp2-ie.asp , series=2 , journal=Proceedings of the London Mathematical Society , volume=42 , pages=230–265 , doi=10.1112/plms/s2-42.1.230 , access-date=8 October 2008 , s2cid=73712 . * {{Turing 1950. * {{cite book , last1=Turkle , first1=Sherry , title=The second self: computers and the human spirit , date=1984 , publisher=Simon and Schuster , isbn=978-0-671-46848-4 , oclc=895659909 * {{cite book , last1=Wason , first1=P. C. , author-link=Peter Cathcart Wason , last2=Shapiro , first2=D. , editor=Foss, B. M. , year=1966 , title=New horizons in psychology , chapter-url=https://archive.org/details/newhorizonsinpsy0000foss , chapter-url-access=registration , location=Harmondsworth , publisher=Penguin , chapter=Reasoning , access-date=18 November 2019 * {{Citation , last = Weizenbaum , first = Joseph , title = Computer Power and Human Reason , year = 1976 , author-link=Joseph Weizenbaum , publisher = W.H. Freeman & Company , isbn=978-0-14-022535-8 , oclc = 10952283 , title-link = Computer Power and Human Reason . {{divcolend {{refend