AI winter

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It describes a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.

Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble. The AI winter was a result of such hype, due to over-inflated promises by developers, unnaturally high expectations from end-users, and extensive promotion in the media. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks complained in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."

Enthusiasm and optimism about AI have generally increased since the field's low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment.

"Quantum winter" is the prospect of a similar development in quantum computing, anticipated or contemplated by Mikhail Dyakonov, Chris Hoofnagle, Simson Garfinkel, Victor Galitsky, and Nikita Gourianov.


Overview

There were two major winters in 1974–1980 and 1987–1993, and several smaller episodes, including the following:
* 1966: failure of machine translation
* 1970: abandonment of connectionism
* Period of overlapping trends:
** 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
** 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
** 1973–74: DARPA's cutbacks to academic AI research in general
* 1987: collapse of the LISP machine market
* 1988: cancellation of new spending on AI by the Strategic Computing Initiative
* 1993: resistance to new expert systems deployment and maintenance
* 1990s: end of the Fifth Generation computer project's original goals


Early episodes


Machine translation and the ALPAC report of 1966

During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process, and there were "many predictions of imminent 'breakthroughs'" (John Hutchins, "The history of machine translation in a nutshell").

However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about; otherwise it made mistakes. An apocryphal example is "the spirit is willing but the flesh is weak": translated back and forth with Russian, it became "the vodka is good but the meat is rotten." Later researchers would call this the commonsense knowledge problem.

By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. The committee concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended. Machine translation is still an open research problem in the 21st century, and has met with some success (Google Translate, Yahoo Babel Fish).


The abandonment of connectionism in 1969

Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. Examples of this kind of work, called "connectionism", include Walter Pitts and Warren McCulloch's first description of a neural network for logic and Marvin Minsky's work on the SNARC system. In the late 1950s, most of these approaches were abandoned when researchers began to explore ''symbolic'' reasoning as the essence of intelligence, following the success of programs like the Logic Theorist and the General Problem Solver.

However, one type of connectionist work continued: the study of perceptrons, invented by Frank Rosenblatt, who kept the field alive with his salesmanship and the sheer force of his personality. He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages". Mainstream research into perceptrons came to an abrupt end in 1969, when Marvin Minsky and Seymour Papert published the book ''Perceptrons'', which was perceived as outlining the limits of what perceptrons could do. Connectionist approaches were abandoned for the next decade or so. While important work, such as Paul Werbos' discovery of backpropagation, continued in a limited way, major funding for connectionist projects was difficult to find in the 1970s and early 1980s. The "winter" of connectionist research came to an end in the middle 1980s, when the work of John Hopfield, David Rumelhart and others revived large-scale interest in neural networks. Rosenblatt did not live to see this, however, as he died in a boating accident shortly after ''Perceptrons'' was published.
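The book's central technical point is easy to demonstrate: a single-layer perceptron can only compute linearly separable functions, so a predicate as simple as XOR is out of its reach, while AND is not. The following minimal Python sketch (a hypothetical illustration; the brute-force check is not taken from the book or any cited source) searches a grid of weights and biases and shows that some perceptron reproduces AND but none reproduces XOR.

```python
import itertools
import numpy as np

# The four binary input points and two target functions: AND is linearly
# separable, XOR is not.
points = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_labels = np.array([0, 0, 0, 1])
xor_labels = np.array([0, 1, 1, 0])

def representable(labels, grid=np.linspace(-2.0, 2.0, 41)):
    """True if some single-layer perceptron w1*x1 + w2*x2 + b > 0 reproduces `labels`."""
    for w1, w2, b in itertools.product(grid, repeat=3):
        preds = (points @ np.array([w1, w2]) + b > 0).astype(int)
        if np.array_equal(preds, labels):
            return True
    return False

print("AND representable by a single-layer perceptron:", representable(and_labels))  # True
print("XOR representable by a single-layer perceptron:", representable(xor_labels))  # False
```

Multi-layer networks, of the kind whose training by backpropagation Werbos described and Rumelhart and colleagues later popularized, do not share this limitation.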


The setbacks of 1974


The Lighthill report

In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives". He concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problems of "combinatorial explosion" and "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions.

The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate, "The general purpose robot is a mirage", held at the Royal Institution, pitted Lighthill against the team of Donald Michie, John McCarthy and Richard Gregory. McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning".

The report led to the complete dismantling of AI research in England. AI research continued in only a few universities (Edinburgh, Essex and Sussex). Research would not revive on a large scale until 1983, when Alvey (a research project of the British Government) began to fund AI again from a war chest of £350 million, in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and lost Phase 2 funding.
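The "combinatorial explosion" Lighthill pointed to can be made concrete with a back-of-the-envelope calculation. The short Python sketch below (a hypothetical illustration, not an example from the report) counts the states a naive exhaustive search would enumerate for a planning problem over n distinguishable items (n! orderings) and estimates the running time at an optimistic billion states per second.

```python
import math

# n! grows so fast that exhaustive search is fine for "toy" sizes and hopeless
# for realistic ones, which is the heart of the combinatorial-explosion argument.
for n in (5, 10, 15, 20, 25):
    states = math.factorial(n)          # orderings a naive planner would enumerate
    seconds = states / 1e9              # at an optimistic 10^9 states per second
    years = seconds / (3600 * 24 * 365)
    print(f"n={n:2d}: {states:.2e} states, about {years:.2e} years of search")
```

Heuristics and domain knowledge can tame this growth in particular cases, which is why later systems succeeded on narrow, well-structured problems even though brute-force search does not scale.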


DARPA's early 1970s funding cuts

During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with few strings attached. J. C. R. Licklider, the founding director of DARPA's computing division, believed in "funding people, not projects", and he and several successors allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon and Allen Newell) to spend it almost any way they liked.

This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research". Pure undirected research of the kind that had gone on in the 1960s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.

AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more." The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.

While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI and justifying DARPA's pragmatic policy.


The SUR debacle

DARPA was deeply disappointed with researchers working on the Speech Understanding Research (SUR) program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but ''only if the words were spoken in a particular order''. DARPA felt it had been duped and, in 1974, cancelled a three-million-dollar-a-year contract. Many years later, several successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models), and the market for speech recognition systems would reach $4 billion by 2001.


The setbacks of the late 1980s and early 1990s


The collapse of the LISP machine market

In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and LISP Machines Inc., who built specialized computers, called LISP machines, that were optimized to process the programming language LISP, the preferred language for AI.

In 1987, three years after Minsky and Schank's prediction, the market for specialized LISP-based AI hardware collapsed. Workstations by companies like Sun Microsystems offered a powerful alternative to LISP machines, and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general workstations became an increasingly difficult challenge for LISP machines. Companies like Lucid and Franz LISP offered increasingly powerful versions of LISP that were portable to all UNIX systems. For example, benchmarks were published showing workstations maintaining a performance advantage over LISP machines. Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture to run LISP applications on. By 1987, some of them had become as powerful as the more expensive LISP machines. The desktop computers also had rule-based engines such as CLIPS available (James Hendler, "Avoiding another AI Winter", ''IEEE Intelligent Systems'', March/April 2008, Vol. 23, No. 2, pp. 2–4).

These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year. By the early 1990s, most commercial LISP companies had failed, including Symbolics, LISP Machines Inc. and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. A small number of customer companies (that is, companies using systems written in LISP and developed on LISP machine platforms) continued to maintain systems; in some cases, this meant taking on the support work themselves.
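For context on what these systems actually did: an expert system encodes a specialist's know-how as if-then rules and applies them to a growing set of facts, typically by forward chaining. The Python sketch below is a deliberately tiny, hypothetical illustration of that mechanism; the rule and fact names are invented (loosely echoing XCON's configuration task) and are not taken from XCON or CLIPS.

```python
# Each rule is (set of required facts, fact to conclude). Forward chaining keeps
# firing rules until no rule can add a new fact.
rules = [
    ({"order:vax"}, "needs:unibus"),
    ({"order:vax", "needs:unibus"}, "add:unibus-adapter"),
    ({"add:unibus-adapter"}, "add:cabinet-space"),
]

def forward_chain(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"order:vax"})))
# ['add:cabinet-space', 'add:unibus-adapter', 'needs:unibus', 'order:vax']
```

Shells such as CLIPS packaged exactly this kind of rule matching for ordinary computers, which is one reason buyers no longer needed dedicated LISP hardware once workstation and PC performance caught up.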


Slowdown in deployment of expert systems

By the early 1990s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research in nonmonotonic logic. Expert systems proved useful, but only in a few special contexts. Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach (see NASA's TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.

The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from LISP to a C++ (variant) on the PC and helped establish object-oriented technology (including providing major support for the development of UML; see UML Partners).


The end of the Fifth Generation project

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. According to HP Newquist in ''The Brain Makers'', "On June 1, 1992, The Fifth Generation Project ended not with a successful roar, but with a whimper." As with other AI projects, expectations had run much higher than what was actually possible.


Strategic Computing Initiative cutbacks

In 1983, in response to the Fifth Generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed, the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs. AI research was generously funded by the SCI.

Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally", "eviscerating" SCI. Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise; in his words, DARPA should "surf", rather than "dog paddle", and he felt strongly that AI was ''not'' "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including the pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.


Developments post-AI winter

A survey of reports from the early 2000s suggests that AI's reputation was still less than stellar:
* Alex Castro, quoted in ''The Economist'', 7 June 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."
* Patty Tascarella in the ''Pittsburgh Business Times'', 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."
* John Markoff in ''The New York Times'', 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."

Many researchers in AI in the mid-2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem. Although this may be partly because they consider their field to be fundamentally different from AI, it is also true that the new names help to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence".


AI integration

In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis, and Google's search engine. As Bostrom observed, "AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis" ("AI set to exceed human brain power", CNN.com, 26 July 2006).

Fuzzy logic controllers have been developed for automatic gearboxes in automobiles: the 2006 Audi TT, VW Touareg and VW Caravelle feature the DSG transmission, which utilizes fuzzy logic, and a number of Škoda variants (such as the Škoda Fabia) also include a fuzzy logic-based controller. Camera sensors widely utilize fuzzy logic to enable focus.
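To give a sense of how such fuzzy controllers work, here is a minimal Mamdani-style sketch in Python: triangular membership functions, three hand-written rules, and centroid defuzzification, mapping a normalized blur reading to a lens adjustment. It is an invented toy example and assumes nothing about the actual control logic in any camera or gearbox.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def lens_step(blur):
    """Map a blur reading in [0, 1] to a lens adjustment in [0, 1] via fuzzy rules."""
    # Fuzzify the input: how "low", "medium" and "high" the blur is.
    low = tri(blur, -0.5, 0.0, 0.5)
    medium = tri(blur, 0.0, 0.5, 1.0)
    high = tri(blur, 0.5, 1.0, 1.5)
    # Output universe and its fuzzy sets ("small", "moderate", "large" adjustment).
    u = np.linspace(0.0, 1.0, 101)
    small, moderate, large = tri(u, -0.5, 0.0, 0.5), tri(u, 0.0, 0.5, 1.0), tri(u, 0.5, 1.0, 1.5)
    # Rules: low blur -> small step, medium blur -> moderate step, high blur -> large step.
    clipped = [np.minimum(low, small), np.minimum(medium, moderate), np.minimum(high, large)]
    aggregated = np.maximum.reduce(clipped)
    return float(np.sum(u * aggregated) / np.sum(aggregated))  # centroid defuzzification

print(lens_step(0.2), lens_step(0.8))  # a small step for mild blur, a large one for heavy blur
```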
Heuristic search and data analytics are both technologies that have developed from the evolutionary computing and machine learning subdivisions of the AI research community. Again, these techniques have been applied to a wide range of real-world problems with considerable commercial success. Data analytics technology utilizing algorithms for the automated formation of classifiers, developed in the supervised machine learning community in the 1990s (for example, TDIDT, support vector machines, neural nets, IBL), is now used pervasively by companies for marketing survey targeting and discovery of trends and features in data sets.
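As a sketch of the classifier induction described above, the snippet below fits a small decision tree (the TDIDT family) with scikit-learn and prints the learned rules. The "customer" features, labels, and thresholds are invented for illustration; they do not come from the article or any real survey data.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [age, purchases_last_year, opened_last_campaign_email (0/1)],
# labelled 1 if the customer responded to a past marketing campaign.
X = [[25, 2, 0], [34, 9, 1], [51, 1, 0], [47, 12, 1], [29, 7, 1], [62, 3, 0]]
y = [0, 1, 0, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "purchases", "opened_email"]))
print("Predicted response for a new customer:", tree.predict([[40, 8, 1]])[0])
```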


AI funding

Researchers and economists frequently judged the status of an AI winter by reviewing which AI projects were being funded, how much, and by whom. Trends in funding are often set by major funding agencies in the developed world. In the late 2000s, DARPA and a civilian funding program called EU-FP7 provided much of the funding for AI research in the US and the European Union.

As of 2007, DARPA was soliciting AI research proposals under a number of programs, including the Grand Challenge Program, the Cognitive Technology Threat Warning System (CT2WS), "''Human Assisted Neural Devices (SN07-43)''", "''Autonomous Real-Time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS)''" and "''Urban Reasoning and Geospatial Exploitation Technology (URGENT)''". Perhaps best known is DARPA's Grand Challenge Program, which has developed fully automated road vehicles that can successfully navigate real-world terrain in a fully autonomous fashion. DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However, James Hendler, the manager of the DARPA program at the time, expressed some disappointment with the government's ability to create rapid change, and moved to working with the World Wide Web Consortium to transition the technologies to the private sector.

The EU-FP7 funding program provides financial support to researchers within the European Union. In 2007–2008, it was funding AI research under the Cognitive Systems: Interaction and Robotics Programme (€193m), the Digital Libraries and Content Programme (€203m) and the FET programme (€185m).


Current "AI spring"

A marked increase in AI funding, development, deployment, and commercial use has led to the idea of the AI winter being long over. Concerns are occasionally raised that a new AI winter could be triggered by overly ambitious or unrealistic promises by prominent AI scientists, or by overpromising on the part of commercial vendors.

The successes of the current "AI spring" are advances in language translation (in particular, Google Translate), image recognition (spurred by the ImageNet training database) as commercialized by Google Image Search, and game-playing systems such as AlphaZero (chess champion), AlphaGo (Go champion), and Watson (''Jeopardy!'' champion). Most of these advances have occurred since 2010.


Underlying causes behind AI winters

Several explanations have been put forth for the cause of AI winters in general. As AI progressed from government-funded applications to commercial ones, new dynamics came into play. While ''hype'' is the most commonly cited cause, the explanations are not necessarily mutually exclusive.


Hype

The AI winters can be partly understood as a sequence of over-inflated expectations and subsequent crashes of the kind seen in stock markets, exemplified by the railway mania and the dotcom bubble. In a common pattern in the development of new technology (known as the hype cycle), an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress cannot keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.

For example, in the 1960s the realization that computers could simulate 1-layer neural networks led to a neural-network hype cycle that lasted until the 1969 publication of the book ''Perceptrons'', which showed that 1-layer networks could optimally solve only a severely limited set of problems. In 1985 the realization that neural networks could be used to solve optimization problems, as a result of famous papers by Hopfield and Tank, together with the threat of Japan's Fifth Generation project, led to renewed interest and application.


Institutional factors

Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research, and AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments, and during budget cuts there is a tendency to shield the "core contents" of each department at the expense of interdisciplinary and less traditional research projects.


Economic factors

Downturns in a country's national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research, and investors in the market are likely to put their money into less risky ventures during a crisis. Together, this may amplify an economic downturn into an AI winter. Notably, the Lighthill report came at a time of economic crisis in the UK, when universities had to make cuts and the question was only which programs should go.


Insufficient computing capability

Early in computing history, the potential of neural networks was understood, but it could not be realized with the hardware of the era: fairly simple networks require significant computing capacity, even by today's standards.


Empty pipeline

It is common to see the relationship between basic research and technology as a pipeline. Advances in basic research give birth to advances in applied research, which in turn lead to new commercial applications. From this it is often argued that a lack of basic research will lead to a drop in marketable technology some years down the line. This view was advanced by James Hendler in 2008, who claimed that the fall of expert systems in the late 1980s was not due to an inherent and unavoidable brittleness of expert systems, but to funding cuts in basic research in the 1970s. These expert systems advanced in the 1980s through applied research and product development, but by the end of the decade the pipeline had run dry, and expert systems were unable to produce improvements that could have overcome their brittleness and secured further funding.


Failure to adapt

The fall of the LISP machine market and the failure of the fifth generation computers were cases of expensive advanced products being overtaken by simpler and cheaper alternatives. This fits the definition of a low-end disruptive technology, with the LISP machine makers being marginalized. Expert systems were carried over to the new desktop computers by, for instance, CLIPS, so the fall of the LISP machine market and the fall of expert systems are, strictly speaking, two separate events. Still, the failure to adapt to such a change in the outside computing milieu is cited as one reason for the 1980s AI winter.


Arguments and debates on past and future of AI

Several philosophers, cognitive scientists and computer scientists have speculated on where AI might have failed and what lies in its future. Hubert Dreyfus highlighted flawed assumptions of AI research in the past and, as early as 1966, correctly predicted that the first wave of AI research would fail to fulfill the very public promises it was making. Other critics like Noam Chomsky have argued that AI is headed in the wrong direction, in part because of its heavy reliance on statistical techniques. Chomsky's comments fit into a larger debate with Peter Norvig, centered around the role of statistical methods in AI. The exchange between the two started with comments made by Chomsky at a symposium at MIT, to which Norvig wrote a response ("On Chomsky and the Two Cultures of Statistical Learning").


See also

* AI effect
* History of artificial intelligence
* Software crisis



Further reading

* Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", ''Scientific American'', vol. 316, no. 3 (March 2017), pp. 58–63. ''Multiple'' tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.


External links


* ComputerWorld article (February 2005)
* "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
* ''Patterns of Software'', a collection of essays by Richard P. Gabriel, including several autobiographical essays
* Review of "Artificial Intelligence: A General Survey" by John McCarthy
* Other Freddy II Robot Resources, including a link to the 90-minute 1973 "Controversy" debate from the Royal Institution of Lighthill versus Michie, McCarthy and Gregory, in response to Lighthill's report to the British government