In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field. An early boom, with successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the first AI winter as funding dried up. A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace. That boom, and some early successes, e.g., with XCON at DEC, was again followed by later disappointment: difficulties arose with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems, and a second AI winter (1988–2011) followed.

Subsequently, AI researchers focused on addressing the underlying problems in handling uncertainty and in knowledge acquisition. Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning. Symbolic machine learning addressed the knowledge acquisition problem with contributions including version space learning, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.

Neural networks, a sub-symbolic approach, had been pursued from the early days and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams, and work in convolutional neural networks by LeCun et al. in 1989. However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks." Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural-network approaches and addressing areas that both approaches have difficulty with, such as common-sense reasoning.


Foundational ideas

The symbolic approach was succinctly expressed in the "physical symbol systems hypothesis" proposed by Newell and Simon in 1976: "A physical symbol system has the necessary and sufficient means of general intelligent action." Later, practitioners using knowledge-based approaches adopted a second maxim, "In the knowledge lies the power," to describe the observation that high performance in a specific domain requires both general and highly domain-specific knowledge; Ed Feigenbaum and Doug Lenat called this the Knowledge Principle. Finally, with the rise of deep learning, the symbolic AI approach has been compared to deep learning as complementary, "...with parallels having been drawn many times by AI researchers between Kahneman's research on human reasoning and decision making – reflected in his book ''Thinking, Fast and Slow'' – and the so-called "AI systems 1 and 2", which would in principle be modelled by deep learning and symbolic reasoning, respectively." In this view, symbolic reasoning is more apt for deliberative reasoning, planning, and explanation, while deep learning is more apt for fast pattern recognition in perceptual applications with noisy data.


A short history

A short history of symbolic AI to the present day follows below. Time periods and titles are drawn from Henry Kautz's 2020 AAAI Robert S. Engelmore Memorial Lecture and the longer Wikipedia article on the history of AI, with dates and titles differing slightly for increased clarity.


The first AI summer: irrational exuberance, 1948–1966

Early attempts at AI succeeded in three main areas: artificial neural networks, knowledge representation, and heuristic search, contributing to high expectations. This section summarizes Kautz's reprise of early AI history.


Approaches inspired by human or animal cognition or behavior

Cybernetic approaches attempted to replicate the feedback loops between animals and their environments. A robotic turtle, with sensors, motors for driving and steering, and seven vacuum tubes for control, based on a preprogrammed neural net, was built as early as 1948. This work can be seen as an early precursor to later work in neural networks, reinforcement learning, and situated robotics.

An important early symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56. It was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica. Newell, Simon, and Shaw later generalized this work to create a domain-independent problem solver, GPS (General Problem Solver). GPS solved problems represented with formal operators via state-space search using means-ends analysis.

During the 1960s, symbolic approaches achieved great success at simulating intelligent behavior in structured environments such as game-playing, symbolic mathematics, and theorem-proving. AI research was centered in three institutions in the 1960s: Carnegie Mellon University, Stanford, MIT and (later) the University of Edinburgh. Each one developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.


Heuristic search

In addition to the highly specialized domain-specific kinds of knowledge that we will see later used in expert systems, early symbolic AI researchers discovered another, more general, application of knowledge. These were called heuristics, rules of thumb that guide a search in promising directions: "How can non-enumerative search be practical when the underlying problem is exponentially hard? The approach advocated by Simon and Newell is to employ heuristics: fast algorithms that may fail on some inputs or output suboptimal solutions." Another important advance was to find a way to apply these heuristics that guarantees a solution will be found, if there is one, notwithstanding the occasional fallibility of heuristics: "The A* algorithm provided a general frame for complete and optimal heuristically guided search. A* is used as a subroutine within practically every AI algorithm today but is still no magic bullet; its guarantee of completeness is bought at the cost of worst-case exponential time."
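
To make this concrete, here is a minimal sketch of A* in Python; the neighbors function, heuristic, and grid example are illustrative assumptions, not drawn from any particular system. A* is complete and optimal when the heuristic never overestimates the true remaining cost.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, heuristic):
    """Return a lowest-cost path from start to goal, or None.

    neighbors(node) yields (next_node, step_cost) pairs; heuristic(node)
    must be admissible (never overestimate) for the result to be optimal.
    """
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier,
                    (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
    return None

# Toy usage: shortest path on a 5x5 grid with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```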


Early work on knowledge representation and reasoning

Early work covered both applications of formal reasoning emphasizing first-order logic, along with attempts to handle common-sense reasoning in a less formal manner.


Modeling formal reasoning with logic: the "neats"

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate the exact mechanisms of human thought, but could instead try to find the essence of abstract reasoning and problem-solving with logic, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.


Modeling implicit common-sense knowledge with frames and scripts: the "scruffies"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.


The first AI winter: crushed dreams, 1967–1977

The first AI winter was a shock to the field: expectations and promises had run far ahead of results, and funding for AI research dried up.


The second AI summer: knowledge is power, 1978–1987


Knowledge-based systems

As limitations with weak, domain-independent methods became more and more apparent, researchers from all three traditions began to build knowledge into AI applications. The knowledge revolution was driven by the realization that knowledge underlies high-performance, domain-specific AI applications.


Success with expert systems

This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first commercially successful form of AI software.


Examples

Key expert systems included:

* DENDRAL, which found the structure of organic molecules from their chemical formula and mass spectrometer readings.
* MYCIN, which diagnosed bacteremia – and suggested further lab tests, when necessary – by interpreting lab results, patient history, and doctor observations. "With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors."
* INTERNIST and CADUCEUS, which tackled internal medicine diagnosis. INTERNIST attempted to capture the expertise of the chairman of internal medicine at the University of Pittsburgh School of Medicine, while CADUCEUS could eventually diagnose up to 1000 different diseases.
* GUIDON, which showed how a knowledge base built for expert problem solving could be repurposed for teaching.
* XCON, which configured VAX computers, a then laborious process that could take up to 90 days; XCON reduced the time to about 90 minutes.

DENDRAL is considered the first expert system that relied on knowledge-intensive problem-solving, as Ed Feigenbaum described in a Communications of the ACM interview.

The other expert systems mentioned above came after DENDRAL. MYCIN exemplifies the classic expert system architecture of a knowledge base of rules coupled to a symbolic reasoning mechanism, including the use of certainty factors to handle uncertainty. GUIDON shows how an explicit knowledge base can be repurposed for a second application, tutoring, and is an example of an intelligent tutoring system, a particular kind of knowledge-based application. Clancey showed that it was not sufficient simply to use MYCIN's rules for instruction; he also needed to add rules for dialogue management and student modeling. XCON is significant because of the millions of dollars it saved DEC, which triggered the expert system boom in which most major corporations in the US formed expert systems groups to capture corporate expertise, preserve it, and automate it.

Chess expert knowledge was also encoded in Deep Blue. In 1996, this allowed IBM's Deep Blue, with the help of symbolic AI, to win a game of chess against the then world champion, Garry Kasparov.


Architecture of knowledge-based and expert systems

A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules for problem-solving. The simplest approach for an expert system knowledge base is simply a collection or network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e., what questions to ask, using human-readable symbols. For example, OPS5, CLIPS and their successors Jess and Drools operate in this fashion. Expert systems can operate in either a forward chaining – from evidence to conclusions – or backward chaining – from goals to needed data and prerequisites – manner. More advanced knowledge-based systems, such as Soar, can also perform meta-level reasoning: reasoning about their own reasoning in terms of deciding how to solve problems and monitoring the success of problem-solving strategies.

Blackboard systems are a second kind of knowledge-based or expert system architecture. They model a community of experts incrementally contributing, where they can, to solve a problem. The problem is represented at multiple levels of abstraction or in alternate views. The experts (knowledge sources) volunteer their services whenever they recognize they can make a contribution. Potential problem-solving actions are represented on an agenda that is updated as the problem situation changes. A controller decides how useful each contribution is and who should make the next problem-solving action. One example, the BB1 blackboard architecture, was originally inspired by studies of how humans plan to perform multiple tasks in a trip. An innovation of BB1 was to apply the same blackboard model to solving its own control problem: its controller performed meta-level reasoning with knowledge sources that monitored how well a plan or the problem-solving was proceeding and could switch from one strategy to another as conditions – such as goals or times – changed. BB1 was applied in multiple domains: construction site planning, intelligent tutoring systems, and real-time patient monitoring.
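
To make the production-rule model described above concrete, here is a minimal forward-chaining sketch in Python (an illustration only, not OPS5 or CLIPS syntax); the medical facts and rules are invented for the example:

```python
# Each rule pairs a set of condition symbols with a conclusion symbol.
RULES = [
    ({"fever", "infection_suspected"}, "order_blood_culture"),
    ({"blood_culture_positive"}, "bacteremia"),
    ({"bacteremia"}, "recommend_antibiotics"),
]

def forward_chain(facts, rules):
    """Fire rules from evidence to conclusions until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:                      # sweep until a fixed point is reached
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires: assert its conclusion
                changed = True
    return facts

print(forward_chain(
    {"fever", "infection_suspected", "blood_culture_positive"}, RULES))
# The derived facts include 'bacteremia' and 'recommend_antibiotics'.
```

Backward chaining runs the same rules in the opposite direction, starting from a goal and recursively seeking the facts that would establish it.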


The second AI winter, 1988–1993

At the height of the AI boom, companies such as Symbolics, LMI, and Texas Instruments were selling LISP machines specifically targeted to accelerate the development of AI applications and research. In addition, several artificial intelligence companies, such as Teknowledge and Inference Corporation, were selling expert system shells, training, and consulting to corporations. Unfortunately, the AI boom did not last; a second AI winter followed, which Kautz's lecture describes in detail.


Adding in more rigorous foundations, 1993–2011


Uncertain reasoning

Both statistical approaches and extensions to logic were tried. One statistical approach, hidden Markov models, had already been popularized in the 1980s for speech recognition work. Subsequently, in 1988, Judea Pearl popularized the use of Bayesian networks as a sound yet efficient way of handling uncertain reasoning with his publication of the book Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, and Bayesian approaches were applied successfully in expert systems. Even later, in the 1990s, statistical relational learning, an approach that combines probability with logical formulas, allowed probability to be combined with first-order logic, e.g., with either Markov logic networks or Probabilistic Soft Logic.

Other, non-probabilistic extensions to first-order logic were also tried. For example, non-monotonic reasoning could be used with truth maintenance systems. A truth maintenance system tracked assumptions and justifications for all inferences. It allowed inferences to be withdrawn when assumptions were found to be incorrect or a contradiction was derived. Explanations could be provided for an inference by explaining which rules were applied to create it and then continuing through underlying inferences and rules all the way back to root assumptions.

Lotfi Zadeh had introduced a different kind of extension to handle the representation of vagueness. For example, in deciding how "heavy" or "tall" a man is, there is frequently no clear "yes" or "no" answer, and a predicate for heavy or tall would instead return values between 0 and 1. Those values represented to what degree the predicates were true. His fuzzy logic further provided a means for propagating combinations of these values through logical formulas.
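
As a small illustration of the idea, the following Python sketch uses invented membership functions (the linear ramps are assumptions for this example, not Zadeh's definitions) together with the standard fuzzy connectives, where AND is min, OR is max, and NOT is 1 - x:

```python
# Fuzzy predicates return degrees of truth in [0, 1] rather than True/False.
def tall(height_cm):
    # Linear ramp: 160 cm or below -> 0.0, 190 cm or above -> 1.0.
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def heavy(weight_kg):
    # Linear ramp: 60 kg or below -> 0.0, 100 kg or above -> 1.0.
    return min(1.0, max(0.0, (weight_kg - 60) / 40))

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

# Degree to which a 180 cm, 80 kg man is "tall and heavy":
print(fuzzy_and(tall(180), heavy(80)))  # 0.5 = min(0.667, 0.5)
```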


Machine learning

Symbolic machine learning approaches were investigated to address the knowledge acquisition bottleneck. One of the earliest is Meta-DENDRAL. Meta-DENDRAL used a generate-and-test technique to generate plausible rule hypotheses to test against spectra. Domain and task knowledge reduced the number of candidates tested to a manageable size.

In contrast to the knowledge-intensive approach of Meta-DENDRAL, Ross Quinlan invented a domain-independent approach to statistical classification, decision tree learning, starting first with ID3 and then later extending its capabilities to C4.5. The decision trees created are glass-box, interpretable classifiers, with human-interpretable classification rules (a sketch of ID3's splitting criterion appears at the end of this section).

Advances were made in understanding machine learning theory, too. Tom Mitchell introduced version space learning, which describes learning as a search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far. More formally, Valiant introduced Probably Approximately Correct learning (PAC learning), a framework for the mathematical analysis of machine learning.

Symbolic machine learning encompassed more than learning by example. For example, John Anderson provided a cognitive model of human learning where skill practice results in a compilation of rules from a declarative format to a procedural format with his ACT-R cognitive architecture. A student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules; e.g., one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". ACT-R has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in intelligent tutoring systems, called cognitive tutors, to successfully teach geometry, computer programming, and algebra to school children.

Inductive logic programming was another approach to learning that allowed logic programs to be synthesized from input-output examples. For example, Ehud Shapiro's MIS (Model Inference System) could synthesize Prolog programs from examples. John R. Koza applied genetic algorithms to program synthesis to create genetic programming, which he used to synthesize LISP programs. Finally, Manna and Waldinger provided a more general approach to program synthesis that synthesizes a functional program in the course of proving its specifications to be correct.

As an alternative to logic, Roger Schank introduced case-based reasoning (CBR). The CBR approach outlined in his book, Dynamic Memory, focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate. When faced with a new problem, CBR retrieves the most similar previous case and adapts it to the specifics of the current problem. As another alternative to logic, genetic algorithms and genetic programming are based on an evolutionary model of learning, where sets of rules are encoded into populations, the rules govern the behavior of individuals, and selection of the fittest prunes out sets of unsuitable rules over many generations.

Symbolic machine learning was applied to learning concepts, rules, heuristics, and problem-solving. Approaches other than those above include:

1. Learning from instruction or advice – i.e., taking human instruction, posed as advice, and determining how to operationalize it in specific situations. For example, in a game of Hearts, learning ''exactly how'' to play a hand to "avoid taking points."
2. Learning from exemplars – improving performance by accepting subject-matter expert (SME) feedback during training. When problem-solving fails, querying the expert to either learn a new exemplar for problem-solving or to learn a new explanation as to exactly why one exemplar is more relevant than another. For example, the program Protos learned to diagnose tinnitus cases by interacting with an audiologist.
3. Learning by analogy – constructing problem solutions based on similar problems seen in the past, and then modifying their solutions to fit a new situation or domain.
4. Apprentice learning systems – learning novel solutions to problems by observing human problem-solving. Domain knowledge explains why novel solutions are correct and how the solution can be generalized. LEAP learned how to design VLSI circuits by observing human designers.
5. Learning by discovery – i.e., creating tasks to carry out experiments and then learning from the results. Doug Lenat's Eurisko, for example, learned heuristics to beat human players at the Traveller role-playing game for two years in a row.
6. Learning macro-operators – i.e., searching for useful macro-operators to be learned from sequences of basic problem-solving actions. Good macro-operators simplify problem-solving by allowing problems to be solved at a more abstract level.
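
As referenced above, here is a minimal sketch of ID3's attribute-selection step in Python; the toy weather data is invented, and a full implementation would recurse on each split and add stopping criteria (and, in C4.5, pruning):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute):
    """Entropy reduction from splitting the examples on one attribute."""
    labels = [label for _, label in examples]
    before = entropy(labels)
    remainder = 0.0
    for v in {x[attribute] for x, _ in examples}:
        subset = [label for x, label in examples if x[attribute] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return before - remainder

# Toy data: (attribute dict, class label) pairs.
examples = [
    ({"outlook": "sunny", "windy": False}, "play"),
    ({"outlook": "rain",  "windy": True},  "stay"),
    ({"outlook": "sunny", "windy": True},  "play"),
    ({"outlook": "rain",  "windy": False}, "stay"),
]

# ID3 greedily picks the attribute with the highest gain as the next test.
print(max(["outlook", "windy"], key=lambda a: information_gain(examples, a)))
# -> 'outlook' (it separates the classes perfectly here)
```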


Deep learning and neuro-symbolic AI, 2011–now


Neuro-symbolic AI: integrating neural and symbolic approaches

Neuro-symbolic AI attempts to integrate neural and symbolic architectures in a manner that addresses the strengths and weaknesses of each, in a complementary fashion, in order to support robust AI capable of reasoning, learning, and cognitive modeling. As argued by Valiant and many others, the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models. Gary Marcus similarly argues that: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning," and in particular: "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the apparatus of symbol-manipulation."

Henry Kautz, Francesca Rossi, and Bart Selman have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman's book, ''Thinking, Fast and Slow''. Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is fast, automatic, intuitive and unconscious. System 2 is slower, step-by-step, and explicit. System 1 is the kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind, and both are needed.

Garcez describes research in this area as being ongoing for at least the past twenty years, dating from his 2002 book on neurosymbolic learning systems. A series of workshops on neuro-symbolic reasoning has been held every year since 2005; see http://www.neural-symbolic.org/ for details. In their 2015 paper, Neural-Symbolic Learning and Reasoning: Contributions and Challenges, Garcez et al. make a similar argument for integration.

Approaches for integration are varied. Henry Kautz's taxonomy of neuro-symbolic architectures, along with some examples, follows:

* Symbolic Neural symbolic is the current approach of many neural models in natural language processing, where words or subword tokens are both the ultimate input and output of large language models. Examples include BERT, RoBERTa, and GPT-3.
* Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques are used to call neural techniques. In this case, the symbolic approach is Monte Carlo tree search and the neural techniques learn how to evaluate game positions.
* Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically.
* Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, e.g., to train a neural model for symbolic computation by using a Macsyma-like symbolic mathematics system to create or label examples.
* Neural_{Symbolic} uses a neural net that is generated from symbolic rules. An example is the Neural Theorem Prover, which constructs a neural network from an AND-OR proof tree generated from knowledge base rules and terms. Logic Tensor Networks also fall into this category.
* Neural[Symbolic] allows a neural model to directly call a symbolic reasoning engine, e.g., to perform an action or evaluate a state.

Many key research questions remain, such as:

* What is the best way to integrate neural and symbolic architectures?
* How should symbolic structures be represented within neural networks and extracted from them?
* How should common-sense knowledge be learned and reasoned about?
* How can abstract knowledge that is hard to encode logically be handled?


Techniques and contributions

This section provides an overview of techniques and contributions in an overall context leading to many other, more detailed articles in Wikipedia. Sections on machine learning and uncertain reasoning are covered earlier in the history section.


AI programming languages

The key AI programming language in the US during the last symbolic AI boom period was LISP. LISP is the second oldest programming language after FORTRAN and was created in 1958 by John McCarthy. LISP provided the first read-eval-print loop to support rapid program development. Compiled functions could be freely mixed with interpreted functions. Program tracing, stepping, and breakpoints were also provided, along with the ability to change values or functions and continue from breakpoints or errors. It had the first self-hosting compiler, meaning that the compiler itself was originally written in LISP and then ran interpretively to compile the compiler code. Other key innovations pioneered by LISP that have spread to other programming languages include:

* Garbage collection
* Dynamic typing
* Higher-order functions
* Recursion
* Conditionals

Programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages.

In contrast to the US, in Europe the key AI programming language during that same period was Prolog. Prolog provided a built-in store of facts and clauses that could be queried by a read-eval-print loop. The store could act as a knowledge base and the clauses could act as rules or a restricted form of logic. As a subset of first-order logic, Prolog was based on Horn clauses with a closed-world assumption – any facts not known were considered false – and a unique name assumption for primitive terms – e.g., the identifier barack_obama was considered to refer to exactly one object. Backtracking and unification are built into Prolog.

Alain Colmerauer and Philippe Roussel are credited as the inventors of Prolog. Prolog is a form of logic programming, which was invented by Robert Kowalski. Its history was also influenced by Carl Hewitt's PLANNER, an assertional database with pattern-directed invocation of methods. For more detail see the section on the origins of Prolog in the PLANNER article.

Prolog is also a kind of declarative programming. The logic clauses that describe programs are directly interpreted to run the programs specified. No explicit series of actions is required, as is the case with imperative programming languages.

Japan championed Prolog for its Fifth Generation Project, intending to build special hardware for high performance. Similarly, LISP machines were built to run LISP, but as the second AI boom turned to bust these companies could not compete with new workstations that could now run LISP or Prolog natively at comparable speeds. See the history section for more detail.
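
To illustrate the execution model Prolog is built on, here is a deliberately simplified, propositional backward-chaining sketch in Python; the family facts are invented, and real Prolog additionally unifies goals containing variables and backtracks over alternative bindings:

```python
FACTS = {"parent(tom, bob)", "parent(bob, ann)"}
RULES = [
    # (head, body): the head is provable if every goal in the body is.
    ("grandparent(tom, ann)", ["parent(tom, bob)", "parent(bob, ann)"]),
]

def prove(goal):
    """Work backward from a goal to the facts that establish it."""
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head == goal and all(prove(sub) for sub in body):
            return True
    return False  # closed-world assumption: unprovable goals are false

print(prove("grandparent(tom, ann)"))  # True
print(prove("parent(ann, tom)"))       # False: unknown, so assumed false
```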

Smalltalk was another influential AI programming language. For example, it introduced metaclasses and, along with Flavors and CommonLoops, influenced the Common Lisp Object System (CLOS), which is now part of Common Lisp, the current standard Lisp dialect. CLOS is a Lisp-based object-oriented system that allows multiple inheritance, in addition to incremental extensions to both classes and metaclasses, thus providing a run-time meta-object protocol. For other AI programming languages see this list of programming languages for artificial intelligence.

Currently, Python, a multi-paradigm programming language, is the most popular programming language, partly due to its extensive package library that supports data science, natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming that includes metaclasses.


Search

Search arises in many kinds of problem solving, including planning, constraint satisfaction, and playing games such as checkers, chess, and Go. The best-known AI tree-search algorithms are breadth-first search, depth-first search, A*, and Monte Carlo tree search. Key search algorithms for Boolean satisfiability are WalkSAT, conflict-driven clause learning, and the DPLL algorithm. For adversarial search when playing games, alpha-beta pruning, branch and bound, and minimax were early contributions.
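
The following is a minimal sketch of minimax with alpha-beta pruning in Python; the game interface (legal_moves, apply, is_terminal, score) is an assumed abstraction for the example, not any particular library:

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Depth-limited minimax value of state, pruning hopeless branches."""
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:   # the minimizer will never allow this branch
                break
        return value
    value = float("inf")
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if alpha >= beta:       # the maximizer will never allow this branch
            break
    return value

# A root call might look like:
# best = max(game.legal_moves(s),
#            key=lambda m: alphabeta(game.apply(s, m), 6,
#                                    float("-inf"), float("inf"), False, game))
```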


Knowledge representation and reasoning

Multiple approaches to represent knowledge and then reason with those representations have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning.


Knowledge representation

Semantic networks, conceptual graphs, frames, and logic are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language. Ontologies model key concepts and their relationships in a domain. Example ontologies are YAGO, WordNet, and DOLCE. DOLCE is an example of an upper ontology that can be used for any domain, while WordNet is a lexical resource that can also be viewed as an ontology. YAGO incorporates WordNet as part of its ontology, to align facts extracted from Wikipedia with WordNet synsets. The Disease Ontology is an example of a medical ontology currently in use. Description logic is a logic for automated classification of ontologies and for detecting inconsistent classification data. OWL is a language used to represent ontologies with description logic. Protégé is an ontology editor that can read in OWL ontologies and then check consistency with deductive classifiers such as HermiT. First-order logic is more general than description logic. The automated theorem provers discussed below can prove theorems in first-order logic. Horn clause logic is more restricted than first-order logic and is used in logic programming languages such as Prolog. Extensions to first-order logic include temporal logic, to handle time; epistemic logic, to reason about agent knowledge; modal logic, to handle possibility and necessity; and probabilistic logics, to handle logic and probability together.
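
To give a concrete feel for the simplest of these representations, the toy Python sketch below encodes a few semantic-network facts as subject-relation-object triples and answers a taxonomic subsumption query by following is_a links; the concept names are invented for illustration, and a real system would use an ontology language such as OWL.

    # Toy semantic network stored as subject-relation-object triples.
    triples = {
        ("dog", "is_a", "mammal"),
        ("mammal", "is_a", "animal"),
        ("dog", "has_part", "tail"),
    }

    def isa(concept, ancestor, facts):
        """Transitively follow is_a links, the core of taxonomic subsumption."""
        parents = {o for s, r, o in facts if s == concept and r == "is_a"}
        return ancestor in parents or any(isa(p, ancestor, facts) for p in parents)

    print(isa("dog", "animal", triples))   # True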


Automatic theorem proving

Examples of automated theorem provers for first-order logic are:
* Prover9
* ACL2
* Vampire
Prover9 can be used in conjunction with the Mace4 model checker. ACL2 is a theorem prover that can handle proofs by induction and is a descendant of the Boyer-Moore Theorem Prover, also known as Nqthm.


Reasoning in knowledge-based systems

Knowledge-based systems have an explicit knowledge base, typically of rules, to enhance reusability across domains by separating procedural code and domain knowledge. A separate inference engine processes rules and adds, deletes, or modifies a knowledge store. Forward chaining inference engines are the most common, and are seen in CLIPS and OPS5. Backward chaining occurs in Prolog, where a more limited logical representation, Horn clauses, is used. Pattern matching, specifically unification, is used in Prolog. A more flexible kind of problem solving occurs when the system reasons about what to do next rather than simply choosing one of the available actions. This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture. Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level chunks.
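
The forward-chaining loop itself is simple, as the minimal Python sketch below shows; the rules here are ground (variable-free) pairs of premises and a conclusion, invented for the example, whereas production systems such as CLIPS add pattern matching over variables (for example, via the Rete algorithm) on top of this basic loop.

    # A minimal forward-chaining loop over ground facts.  Each rule is a
    # (premises, conclusion) pair; rules fire until no new fact is added.
    rules = [
        ({"parent(tom, bob)"}, "ancestor(tom, bob)"),
        ({"parent(bob, ann)"}, "ancestor(bob, ann)"),
        ({"ancestor(tom, bob)", "ancestor(bob, ann)"}, "ancestor(tom, ann)"),
    ]
    facts = {"parent(tom, bob)", "parent(bob, ann)"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("ancestor(tom, ann)" in facts)   # True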


Commonsense reasoning

Marvin Minsky first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to scripts for common routines, such as dining out. Cyc has attempted to capture useful common-sense knowledge and has "micro-theories" to handle particular kinds of domain-specific reasoning. Qualitative simulation, such as Benjamin Kuipers's QSIM, approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove. We expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as atmospheric pressure. Similarly, James Allen's temporal interval algebra is a simplification of reasoning about time, and Region Connection Calculus is a simplification of reasoning about spatial relationships. Both can be solved with constraint solvers.
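
As a small worked example of the interval-algebra idea, this Python sketch classifies a pair of time intervals into one of Allen's thirteen basic relations; reporting inverse relations with an "-inverse" suffix is a naming convention adopted here for brevity.

    # Classify two intervals (start, end) into one of Allen's 13 basic relations.
    def allen(i, j):
        (a, b), (c, d) = i, j          # assumes a < b and c < d
        if b < c:                return "before"
        if b == c:               return "meets"
        if a < c < b < d:        return "overlaps"
        if a == c and b < d:     return "starts"
        if c < a and b < d:      return "during"
        if c < a and b == d:     return "finishes"
        if a == c and b == d:    return "equals"
        return allen(j, i) + "-inverse"   # the remaining six cases

    print(allen((1, 3), (3, 6)))   # meets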


Constraints and constraint-based reasoning

Constraint solvers perform a more limited kind of inference than first-order logic. They can simplify sets of spatiotemporal constraints, such as those for RCC or Allen's temporal algebra, along with solving other kinds of puzzle problems, such as Wordle, Sudoku, cryptarithmetic problems, and so on. Constraint logic programming can be used to solve scheduling problems, for example with constraint handling rules (CHR).
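
The classic cryptarithmetic puzzle SEND + MORE = MONEY can be solved by the brute-force Python sketch below; this is only an illustration of the problem statement, since production constraint solvers prune the search space with constraint propagation and backtracking rather than enumerating every digit assignment.

    # Brute-force solution of SEND + MORE = MONEY.
    from itertools import permutations

    def solve():
        letters = "SENDMORY"                      # the 8 distinct letters
        for digits in permutations(range(10), len(letters)):
            d = dict(zip(letters, digits))
            if d["S"] == 0 or d["M"] == 0:        # no leading zeros
                continue
            def val(word):
                return int("".join(str(d[ch]) for ch in word))
            if val("SEND") + val("MORE") == val("MONEY"):
                return d

    print(solve())   # S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2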


Automated planning

The General Problem Solver (GPS) cast planning as problem solving and used means-ends analysis to create plans. The Stanford Research Institute Problem Solver (STRIPS) took a different approach, viewing planning as theorem proving. Graphplan takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state, working forwards, or from a goal state, working backwards. Satplan is an approach to planning where a planning problem is reduced to a Boolean satisfiability problem.
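
A minimal sketch of the STRIPS-style action representation, which most of these planners build on, is shown below in Python; the blocks-world propositions are invented for the example, and only the state-update rule is shown, not any planner's search procedure.

    # A STRIPS-style action: preconditions, an add list, and a delete list
    # over ground propositions.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        preconditions: frozenset
        add: frozenset
        delete: frozenset

    def apply_action(action, state):
        """Return the successor state, or None if the preconditions fail."""
        if action.preconditions <= state:
            return (state - action.delete) | action.add
        return None

    move = Action("move(a, table, b)",
                  preconditions=frozenset({"on(a, table)", "clear(a)", "clear(b)"}),
                  add=frozenset({"on(a, b)"}),
                  delete=frozenset({"on(a, table)", "clear(b)"}))

    state = frozenset({"on(a, table)", "on(b, table)", "clear(a)", "clear(b)"})
    print(apply_action(move, state))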


Natural language processing

Natural language processing focuses on treating language as data to perform tasks such as identifying topics without necessarily understanding the intended meaning. Natural language understanding, in contrast, constructs a meaning representation and uses that for further processing, such as answering questions. Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches. In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles. New deep learning approaches based on Transformer models have now eclipsed these earlier symbolic AI approaches and attained state-of-the-art performance in natural language ''processing''. However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque.
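
As a sketch of the LSA idea, the few lines of Python below turn a tiny invented corpus into low-dimensional document vectors, assuming the scikit-learn library is installed; the resulting components are not individually interpretable, which is exactly the contrast with explicit semantic analysis noted above.

    # Latent semantic analysis: tf-idf followed by truncated SVD.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = ["the cat sat on the mat",
            "dogs and cats make good pets",
            "stock markets fell sharply today"]

    tfidf = TfidfVectorizer().fit_transform(docs)
    vectors = TruncatedSVD(n_components=2).fit_transform(tfidf)
    print(vectors)        # one 2-component vector per document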


Agents and multi-agent systems

Agents are autonomous systems embedded in an environment they perceive and act upon in some sense. Russell and Norvig's standard textbook on artificial intelligence is organized to reflect agent architectures of increasing sophistication. The sophistication of agents varies from simple reactive agents, to those with a model of the world and automated planning capabilities, possibly a BDI agent, i.e., one with beliefs, desires, and intentions – or alternatively a reinforcement learning model learned over time to choose actions – up to a combination of alternative architectures, such as a neuro-symbolic architecture that includes deep learning for perception. In contrast, a multi-agent system consists of multiple agents that communicate amongst themselves with some inter-agent communication language such as Knowledge Query and Manipulation Language (KQML). The agents need not all have the same internal architecture. Advantages of multi-agent systems include the ability to divide work among the agents and to increase fault tolerance when agents are lost. Research problems include how agents reach consensus, distributed problem solving, multi-agent learning, multi-agent planning, and distributed constraint optimization.
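
At the simplest end of this spectrum sits the condition-action reflex agent, sketched below in Python; the percept format and rule table are invented for the example and stand in for whatever a real environment would supply.

    # A condition-action reflex agent: the first rule whose condition
    # matches the current percept determines the action.
    class ReflexAgent:
        def __init__(self, rules):
            self.rules = rules                  # list of (condition, action)

        def act(self, percept):
            for condition, action in self.rules:
                if condition(percept):
                    return action
            return "noop"

    vacuum = ReflexAgent([
        (lambda p: p["dirty"], "suck"),
        (lambda p: p["location"] == "A", "move_right"),
        (lambda p: p["location"] == "B", "move_left"),
    ])
    print(vacuum.act({"location": "A", "dirty": True}))   # suck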


Controversies

Controversies arose from early on in symbolic AI, both within the field—e.g., between logicists (the pro-logic "neats") and non-logicists (the anti-logic "scruffies")—and between those who embraced AI but rejected symbolic approaches—primarily connectionists—and those outside the field. Critiques from outside of the field were primarily from philosophers, on intellectual grounds, but also from funding agencies, especially during the two AI winters.


Connectionist AI: philosophical challenges and sociological conflicts

Connectionist approaches range from earlier work on neural networks, such as perceptrons, through work in the mid-to-late 1980s, such as Danny Hillis's Connection Machine and Yann LeCun's advances in convolutional neural networks, to today's more advanced approaches, such as Transformers, GANs, and other work in deep learning. Three philosophical positions have been outlined among connectionists:
# Implementationism—where connectionist architectures implement the capabilities for symbolic processing,
# Radical connectionism—where symbolic processing is rejected totally, and connectionist architectures underlie intelligence and are fully sufficient to explain it,
# Moderate connectionism—where symbolic processing and connectionist architectures are viewed as complementary and both are required for intelligence.
Olazaran, in his sociological history of the controversies within the neural network community, described the moderate connectionism view as essentially compatible with current research in neuro-symbolic hybrids:
The third and last position I would like to examine here is what I call the moderate connectionist view, a more eclectic view of the current debate between connectionism and symbolic AI. One of the researchers who has elaborated this position most explicitly is Andy Clark, a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England). Clark defended hybrid (partly symbolic, partly connectionist) systems. He claimed that (at least) two kinds of theories are needed in order to study and model cognition. On the one hand, for some information-processing tasks (such as pattern recognition) connectionism has advantages over symbolic models. But on the other hand, for other cognitive processes (such as serial, deductive reasoning, and generative symbol manipulation processes) the symbolic paradigm offers adequate models, and not only "approximations" (contrary to what radical connectionists would claim).
Gary Marcus has claimed that the animus in the deep learning community against symbolic approaches may now be more sociological than philosophical:
To think that we can simply abandon symbol-manipulation is to suspend disbelief.

And yet, for the most part, that's how most current AI proceeds. Hinton and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, neural networks typically try to solve tasks by statistical approximation and learning from examples.

According to Marcus, Geoffrey Hinton and his colleagues have been vehemently "anti-symbolic":
When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbols had fully crystallized. He gave a talk at an AI workshop at Stanford comparing symbols to aether, one of science's greatest mistakes. ... Since then, his anti-symbolic campaign has only increased in intensity. In 2016, LeCun, Bengio, and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature. It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement. Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was "a huge mistake," likening it to investing in internal combustion engines in the era of electric cars.
Part of these disputes may be due to unclear terminology:
Turing award winner Judea Pearl offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to association rule mining, c.f. the body of work on symbolic ML and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of gradient-based learning algorithms). Equally, symbolic AI is not just about production rules written by hand. A proper definition of AI concerns knowledge representation and reasoning, autonomous multi-agent systems, planning and argumentation, as well as learning.


Philosophical: critiques from Dreyfus and other philosophers

Now we turn to attacks from outside the field, specifically by philosophers. One argument frequently cited by philosophers was made earlier by the computer scientist Alan Turing, in his 1950 paper ''Computing Machinery and Intelligence'', when he said that "human behavior is far too complex to be captured by any formal set of rules—humans must be using some informal guidelines that … could never be captured in a formal set of rules and thus could never be codified in a computer program." Turing called this "The Argument from Informality of Behaviour." Similar critiques were provided by Hubert Dreyfus in his books ''What Computers Can't Do'' and ''What Computers Still Can't Do''. Dreyfus predicted AI would only be suitable for toy problems, and thought that building more complex systems or scaling up the idea towards useful software would not be possible. John Haugeland, another philosopher, similarly argued against rule-based symbolic AI in his book ''Artificial Intelligence: The Very Idea'', calling it GOFAI ("Good Old-Fashioned Artificial Intelligence"). Russell and Norvig explain that these arguments were targeted at the symbolic AI of the 1980s:
The technology they criticized came to be called Good Old-Fashioned AI (GOFAI). GOFAI corresponds to the simplest logical agent design described ... and we saw ... that it is indeed difficult to capture every contingency of appropriate behavior in a set of necessary and sufficient logical rules; we called that the qualification problem.
Since then, probabilistic reasoning systems have extended the capability of symbolic AI so they can be much "more appropriate for open-ended domains." However, Dreyfus raised another argument that cannot be addressed by disembodied symbolic AI systems:
One of Dreyfus's strongest arguments is for situated agents rather than disembodied logical inference engines. An agent whose understanding of "dog" comes only from a limited set of logical sentences such as "Dog(x) ⇒ Mammal(x)" is at a disadvantage compared to an agent that has watched dogs run, has played fetch with them, and has been licked by one. As philosopher Andy Clark (1998) says, "Biological brains are first and foremost the control systems for biological bodies. Biological bodies move and act in rich real-world surroundings." According to Clark, we are "good at frisbee, bad at logic." The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.


Situated robotics: the world as a model

Rodney Brooks created behavior-based robotics, also named Nouvelle AI, as an alternative to ''both'' symbolic AI and connectionist AI. His approach rejected representations, either symbolic or distributed, as not only unnecessary but detrimental. Instead, he created the subsumption architecture, a layered architecture for embodied agents. Each layer achieves a different purpose and must function in the real world. For example, the first robot he describes in ''Intelligence Without Representation'' has three layers. The bottom layer interprets sonar sensors to avoid objects. The middle layer causes the robot to wander around when there are no obstacles. The top layer causes the robot to go to more distant places for further exploration. Each layer can temporarily inhibit or suppress a lower-level layer. He criticized AI researchers for defining AI problems for their systems, when: "There is no clean division between perception (abstraction) and reasoning in the real world." He called his robots "Creatures" and each layer was "composed of a fixed-topology network of simple finite state machines." In the Nouvelle AI approach, "First, it is vitally important to test the Creatures we build in the real world; i.e., in the same world that we humans inhabit. It is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world." His emphasis on real-world testing was in contrast to "Early work in AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems" and the use of the blocks world in symbolic AI systems such as SHRDLU.


Current views

Each approach—symbolic, connectionist, and behavior-based—has advantages, but has been criticized by the other approaches. Symbolic AI has been criticized as disembodied, liable to the qualification problem, and poor in handling the perceptual problems where deep learning excels. In turn, connectionist AI has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning. Finally, Nouvelle AI excels in reactive and real-world robotics domains but has been criticized for difficulties in incorporating learning and knowledge. Hybrid AIs incorporating one or more of these approaches are currently viewed as the path forward. Russell and Norvig conclude that:
Overall, Dreyfus saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility.


See also

* Artificial intelligence
* Automated planning and scheduling
* Automated theorem proving
* Belief revision
* Case-based reasoning
* Cognitive architecture
* Cognitive science
* Connectionism
* Constraint programming
* Deep learning
* First-order logic
* History of artificial intelligence
* Inductive logic programming
* Knowledge-based systems
* Knowledge representation and reasoning
* Logic programming
* Machine learning
* Model checking
* Model-based reasoning
* Multi-agent system
* Neuro-symbolic AI
* Ontology (information science)
* Philosophy of artificial intelligence
* Physical symbol systems hypothesis
* Semantic Web
* Sequential pattern mining
* Statistical relational learning
* Symbolic mathematics
* YAGO ontology
* WordNet

