Long short-term memory

Long short-term memory (LSTM) is an artificial neural network used in the fields of artificial intelligence and deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. Such a recurrent neural network (RNN) can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition, machine translation, robot control, video games, and healthcare.

The name of LSTM refers to the analogy that a standard RNN has both "long-term memory" and "short-term memory". The connection weights and biases in the network change once per episode of training, analogous to how physiological changes in synaptic strengths store long-term memories; the activation patterns in the network change once per time-step, analogous to how the moment-to-moment change in electric firing patterns in the brain stores short-term memories. The LSTM architecture aims to provide a short-term memory for RNN that can last thousands of timesteps, thus "long short-term memory".

A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three ''gates'' regulate the flow of information into and out of the cell. LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.


Idea

In theory, classic (or "vanilla") RNNs can keep track of arbitrarily long-term dependencies in the input sequences. The problem with vanilla RNNs is computational (or practical) in nature: when training a vanilla RNN using back-propagation, the long-term gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity), because of the computations involved in the process, which use finite-precision numbers. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow ''unchanged''. However, LSTM networks can still suffer from the exploding gradient problem.


Variants

In the equations below, the lowercase variables represent vectors. Matrices W_q and U_q contain, respectively, the weights of the input and recurrent connections, where the subscript q can either be the input gate i, the output gate o, the forget gate f or the memory cell c, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, c_t \in \mathbb{R}^{h} is not just one unit of one LSTM cell, but contains the units of h LSTM cells.


LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:

: \begin{align}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}

where the initial values are c_0 = 0 and h_0 = 0 and the operator \odot denotes the Hadamard product (element-wise product). The subscript t indexes the time step.


Variables

* x_t \in \mathbb{R}^{d}: input vector to the LSTM unit
* f_t \in (0,1)^{h}: forget gate's activation vector
* i_t \in (0,1)^{h}: input/update gate's activation vector
* o_t \in (0,1)^{h}: output gate's activation vector
* h_t \in (-1,1)^{h}: hidden state vector, also known as the output vector of the LSTM unit
* \tilde{c}_t \in (-1,1)^{h}: cell input activation vector
* c_t \in \mathbb{R}^{h}: cell state vector
* W \in \mathbb{R}^{h \times d}, U \in \mathbb{R}^{h \times h} and b \in \mathbb{R}^{h}: weight matrices and bias vector parameters which need to be learned during training

where the superscripts d and h refer to the number of input features and number of hidden units, respectively.


Activation functions

* \sigma_g: sigmoid function.
* \sigma_c: hyperbolic tangent function.
* \sigma_h: hyperbolic tangent function or, as the peephole LSTM paper suggests, \sigma_h(x) = x.
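To make the notation above concrete, here is a minimal NumPy sketch of a single forward step of the forget-gate LSTM; the dictionary-based parameter layout and the random toy dimensions are illustrative assumptions, not part of any reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of a forget-gate LSTM cell.

    W, U and b are dicts keyed by 'f', 'i', 'o', 'c' holding the input
    weights (h x d), recurrent weights (h x h) and biases (h,)."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input
    c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new hidden state
    return h_t, c_t

# Toy usage: d = 4 input features, h = 3 hidden units, random parameters.
rng = np.random.default_rng(0)
d, h = 4, 3
W = {k: rng.standard_normal((h, d)) for k in 'fioc'}
U = {k: rng.standard_normal((h, h)) for k in 'fioc'}
b = {k: np.zeros(h) for k in 'fioc'}
h_t, c_t = lstm_step(rng.standard_normal(d), np.zeros(h), np.zeros(h), W, U, b)
```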


Peephole LSTM

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM). Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state. h_{t-1} is not used; c_{t-1} is used instead in most places.

: \begin{align}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. i_t, o_t and f_t represent the activations of, respectively, the input, output and forget gates at time step t. The 3 exit arrows from the memory cell c to the 3 gates i, o and f represent the ''peephole'' connections. These peephole connections actually denote the contributions of the activation of the memory cell c at time step t-1, i.e. the contribution of c_{t-1} (and not c_t, as the picture may suggest). In other words, the gates i, o and f calculate their activations at time step t (i.e., respectively, i_t, o_t and f_t) also considering the activation of the memory cell c at time step t-1, i.e. c_{t-1}. The single left-to-right arrow exiting the memory cell is ''not'' a peephole connection and denotes c_t. The little circles containing a \times symbol represent an element-wise multiplication between their inputs. The big circles containing an ''S''-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
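For comparison with the forget-gate variant, a minimal sketch of one peephole step under the same vector-shape assumptions might look as follows (with \sigma_h taken as the identity, as mentioned above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One forward step of a peephole LSTM: the gates read the previous
    cell state c_prev rather than the previous hidden state."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])  # no h_{t-1} term
    h_t = o_t * c_t  # sigma_h taken as the identity, as the peephole paper suggests
    return h_t, c_t
```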


Peephole convolutional LSTM

Peephole convolutional LSTM. The * denotes the convolution operator. Note that, unlike the peephole LSTM above, o_t is calculated from c_t instead of c_{t-1} (see https://arxiv.org/abs/1506.04214v2).

: \begin{align}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}
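As a rough illustration only, the following sketch applies these equations to a single-channel 2-D input using SciPy's convolve2d; real convolutional LSTMs operate on multi-channel feature maps, so the shapes and helper names here are simplifying assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv(a, k):
    # 2-D convolution that keeps the spatial size of the input
    return convolve2d(a, k, mode='same')

def conv_peephole_lstm_step(x_t, h_prev, c_prev, W, U, V, b):
    """Single-channel sketch of one peephole convolutional LSTM step.
    W and U hold small 2-D convolution kernels; V holds element-wise
    (Hadamard) peephole weights with the same shape as the cell state."""
    f_t = sigmoid(conv(x_t, W['f']) + conv(h_prev, U['f']) + V['f'] * c_prev + b['f'])
    i_t = sigmoid(conv(x_t, W['i']) + conv(h_prev, U['i']) + V['i'] * c_prev + b['i'])
    c_t = f_t * c_prev + i_t * np.tanh(conv(x_t, W['c']) + conv(h_prev, U['c']) + b['c'])
    o_t = sigmoid(conv(x_t, W['o']) + conv(h_prev, U['o']) + V['o'] * c_t + b['o'])  # uses c_t
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```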


Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to \lim_{n \to \infty} W^n = 0 if the spectral radius of W is smaller than 1. However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
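This vanishing effect of repeated multiplication by a matrix with spectral radius below 1 can be checked numerically; the matrix size and the 0.9 scaling in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale so the spectral radius is 0.9 < 1

# The norm of W^n shrinks towards zero as n grows, so error signals that are
# repeatedly multiplied by W during back-propagation through time fade away.
for n in (1, 10, 50, 100):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n)))
```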


CTC score function

Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
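As an illustration, a CTC-trained LSTM can be set up in a few lines with a deep-learning framework; the PyTorch sketch below uses made-up dimensions (13 input features, 20 output classes, length-50 sequences) purely as placeholders.

```python
import torch
import torch.nn as nn

# Made-up dimensions: T time steps, N sequences per batch, C output classes
# (class 0 is the CTC "blank"), 13 input features per frame.
T, N, C = 50, 4, 20
lstm = nn.LSTM(input_size=13, hidden_size=64)     # expects input of shape (T, N, 13)
proj = nn.Linear(64, C)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 13)                          # e.g. acoustic feature frames
out, _ = lstm(x)                                   # (T, N, 64)
log_probs = proj(out).log_softmax(dim=-1)          # (T, N, C), as CTCLoss expects

targets = torch.randint(1, C, (N, 10))             # label sequences without blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients drive the network towards alignments that explain the labels
```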


Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution or by policy gradient methods, especially when there is no "teacher" (that is, training labels).


Success

There have been several success stories of training, in a non-supervised fashion, RNNs with LSTM units. In 2018, Bill Gates called it a "huge milestone in advancing artificial intelligence" when bots developed by OpenAI were able to beat humans in the game of Dota 2. OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit long short-term memory that sees the current game state and emits actions through several possible action heads. In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity. In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game Starcraft II. This was viewed as significant progress towards artificial general intelligence.


Applications

Applications of LSTM include:
* Robot control
* Time series prediction
* Speech recognition
* Rhythm learning
* Music composition
* Grammar learning
* Handwriting recognition (A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp. 545–552, Vancouver, MIT Press, 2009.)
* Human action recognition
* Sign language translation
* Protein homology detection
* Predicting subcellular localization of proteins
* Time series anomaly detection
* Several prediction tasks in the area of business process management
* Prediction in medical care pathways
* Semantic parsing
* Object co-segmentation
* Airport passenger management
* Short-term traffic forecast
* Drug design
* Market prediction


Timeline of development

1991: Sepp Hochreiter analyzed the vanishing gradient problem and developed principles of the method in his German diploma thesis advised by Jürgen Schmidhuber.

1995: "Long Short-Term Memory (LSTM)" is published in a technical report by Sepp Hochreiter and Jürgen Schmidhuber.

1996: LSTM is published at NIPS'1996, a peer-reviewed conference.

1997: The main LSTM paper is published in the journal Neural Computation. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input and output gates.

1999: Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state.

2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) into the architecture. Additionally, the output activation function was omitted.

2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as hidden Markov models. Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).

2004: First successful application of LSTM to speech by Schmidhuber's student Alex Graves et al.

2005: First publication (Graves and Schmidhuber) of LSTM with full backpropagation through time and of bi-directional LSTM.

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.

2006: Graves, Fernandez, Gomez, and Schmidhuber introduce a new error function for LSTM: connectionist temporal classification (CTC) for simultaneous alignment and recognition of sequences. CTC-trained LSTM led to breakthroughs in speech recognition. Mayer et al. trained LSTM to control robots.

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher. Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest. This was the first time an RNN won international competitions.

2009: Justin Bayer et al. introduced neural architecture search for LSTM.

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.

2014: Kyunghyun Cho et al. put forward a simplified variant of the forget gate LSTM called the gated recurrent unit (GRU).

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%.

2015: Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. Seven months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless highway network variant called the residual neural network. This has become the most cited neural network of the 21st century.

2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate, which used LSTMs to reduce translation errors by 60%. Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType in the iPhone and for Siri. Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks. Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference. Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM. Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2, and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II.

2021: According to Google Scholar, in 2021, LSTM was cited over 16,000 times within a single year. This reflects applications of LSTM in many different fields including healthcare.


See also

* Deep learning
* Differentiable neural computer
* Gated recurrent unit
* Highway network
* Long-term potentiation
* Prefrontal cortex basal ganglia working memory
* Recurrent neural network
* Seq2seq
* Time aware long short-term memory
* Time series


References


External links


* Recurrent Neural Networks, with over 30 LSTM papers by Jürgen Schmidhuber's group at IDSIA
* Original, with two chapters devoted to explaining recurrent neural networks, especially LSTM.