Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence-learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps (thus "''long'' short-term memory"). The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.

An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate, and a forget gate. The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful long-term dependencies for making predictions, both in current and future time steps.

LSTM has wide applications in classification, data processing, time series analysis, speech recognition, machine translation, speech activity detection, robot control, video games, and healthcare.


Motivation

In theory, classic RNNs can keep track of arbitrarily long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.

The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information. In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies. An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject ''Dave'', noting that this information is pertinent for the pronoun ''his'', and noting that this information is no longer important after the verb ''is''.


Variants

In the equations below, the lowercase variables represent vectors. Matrices W_q and U_q contain, respectively, the weights of the input and recurrent connections, where the subscript q can be the input gate i, the output gate o, the forget gate f, or the memory cell c, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, c_t \in \mathbb{R}^h is not just one unit of one LSTM cell, but contains h LSTM cells' units. An empirical study comparing eight architectural variants of LSTM is available in the literature.


LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:

: \begin{align}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}

where the initial values are c_0 = 0 and h_0 = 0, and the operator \odot denotes the Hadamard product (element-wise product). The subscript t indexes the time step.
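The following is a minimal sketch in Python with NumPy of a single forward step of an LSTM cell with a forget gate, mirroring the equations above (with \sigma_g the sigmoid and \sigma_c = \sigma_h = tanh, as defined below). The function and variable names are illustrative only, not taken from any library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell with a forget gate.

    x_t:     input vector, shape (d,)
    h_prev:  previous hidden state h_{t-1}, shape (h,)
    c_prev:  previous cell state c_{t-1}, shape (h,)
    W, U, b: dicts keyed by 'f', 'i', 'o', 'c' holding input weights (h, d),
             recurrent weights (h, h) and biases (h,).
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell candidate
    c_t = f_t * c_prev + i_t * c_tilde      # element-wise (Hadamard) products
    h_t = o_t * np.tanh(c_t)                # hidden state / output vector
    return h_t, c_t

# Usage: run the cell over a toy sequence, starting from h_0 = c_0 = 0.
d, h = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((h, d)) for k in 'fioc'}
U = {k: rng.standard_normal((h, h)) for k in 'fioc'}
b = {k: np.zeros(h) for k in 'fioc'}
h_t, c_t = np.zeros(h), np.zeros(h)
for x_t in rng.standard_normal((10, d)):    # sequence of 10 time steps
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
```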


Variables

Letting the superscripts d and h refer to the number of input features and number of hidden units, respectively:
* x_t \in \mathbb{R}^d: input vector to the LSTM unit
* f_t \in (0,1)^h: forget gate's activation vector
* i_t \in (0,1)^h: input/update gate's activation vector
* o_t \in (0,1)^h: output gate's activation vector
* h_t \in (-1,1)^h: hidden state vector, also known as the output vector of the LSTM unit
* \tilde{c}_t \in (-1,1)^h: cell input activation vector
* c_t \in \mathbb{R}^h: cell state vector
* W \in \mathbb{R}^{h \times d}, U \in \mathbb{R}^{h \times h} and b \in \mathbb{R}^h: weight matrices and bias vector parameters which need to be learned during training


Activation functions

* \sigma_g: sigmoid function.
* \sigma_c: hyperbolic tangent function.
* \sigma_h: hyperbolic tangent function or, as the peephole LSTM paper suggests, \sigma_h(x) = x.


Peephole LSTM

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM). Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state. In this variant, h_{t-1} is not used; c_{t-1} is used instead in most places:

: \begin{align}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. i_t, o_t and f_t represent the activations of, respectively, the input, output and forget gates at time step t. The three exit arrows from the memory cell c to the three gates i, o and f represent the ''peephole'' connections. These peephole connections denote the contributions of the activation of the memory cell c at time step t-1, i.e. the contribution of c_{t-1} (and not c_t, as the picture may suggest). In other words, the gates i, o and f calculate their activations at time step t (i.e., respectively, i_t, o_t and f_t) also considering the activation of the memory cell c at time step t-1, i.e. c_{t-1}. The single left-to-right arrow exiting the memory cell is ''not'' a peephole connection and denotes c_t. The little circles containing a \times symbol represent an element-wise multiplication between their inputs. The big circles containing an ''S''-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
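A minimal NumPy sketch of one peephole LSTM step follows, matching the equations above: the gates read the previous cell state c_{t-1} rather than h_{t-1}. The names and dict layout are illustrative assumptions, not any library's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One forward step of a peephole LSTM cell.

    The gates are computed from the input x_t and the previous cell state
    c_{t-1} (the peephole connections); h_{t-1} is not used.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])   # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])   # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])   # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])
    h_t = o_t * np.tanh(c_t)     # \sigma_h taken as tanh here; identity also possible
    return h_t, c_t
```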


Peephole convolutional LSTM

Peephole convolutional LSTM. The * denotes the convolution operator. In this formulation, o_t is calculated from c_t instead of c_{t-1} (see https://arxiv.org/abs/1506.04214v2):

: \begin{align}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{align}
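As an illustration only, here is a single-channel sketch of one peephole convolutional LSTM step using SciPy's 2-D convolution for the * operator. Real ConvLSTM implementations operate on multi-channel feature maps with learned kernel stacks, so this simplification and its names are assumptions for readability.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv(a, k):
    # 'same' padding keeps the spatial shape of the state maps unchanged
    return convolve2d(a, k, mode='same')

def conv_lstm_step(x_t, h_prev, c_prev, W, U, V, b):
    """One peephole ConvLSTM step for single-channel H x W maps.

    W, U: dicts of small convolution kernels for input and recurrent paths;
    V:    dicts of peephole weights applied element-wise to the cell state;
    b:    dicts of scalar biases.
    """
    f_t = sigmoid(conv(x_t, W['f']) + conv(h_prev, U['f']) + V['f'] * c_prev + b['f'])
    i_t = sigmoid(conv(x_t, W['i']) + conv(h_prev, U['i']) + V['i'] * c_prev + b['i'])
    c_t = f_t * c_prev + i_t * np.tanh(conv(x_t, W['c']) + conv(h_prev, U['c']) + b['c'])
    o_t = sigmoid(conv(x_t, W['o']) + conv(h_prev, U['o']) + V['o'] * c_t + b['o'])  # uses c_t
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```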


Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is because \lim_{n \to \infty} W^n = 0 if the spectral radius of W is smaller than 1. However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
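A small NumPy illustration of the role of the spectral radius (an assumed toy experiment, not drawn from any reference): repeated multiplication by a recurrent weight matrix drives a gradient-like vector toward zero when the spectral radius is below 1, and blows it up when it is above 1.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

def rescale(W, rho):
    # scale W so that its spectral radius (largest |eigenvalue|) equals rho
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))

for rho in (0.9, 1.1):
    W_r = rescale(W, rho)
    v = np.ones(8)              # stand-in for a back-propagated error signal
    for _ in range(100):        # 100 time steps of repeated multiplication
        v = W_r @ v
    print(rho, np.linalg.norm(v))   # near zero for rho=0.9, huge for rho=1.1
```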


CTC score function

Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
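As an illustration, assuming a PyTorch environment, the built-in torch.nn.CTCLoss can score the output of an LSTM against unaligned label sequences; the shapes and toy values below are assumptions for a self-contained sketch, not taken from any of the systems cited in this article.

```python
import torch
import torch.nn as nn

T, N, C, H = 50, 4, 20, 64       # time steps, batch size, classes (incl. blank=0), hidden size
lstm = nn.LSTM(input_size=16, hidden_size=H)
proj = nn.Linear(H, C)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 16)                      # input feature sequences
targets = torch.randint(1, C, (N, 12))         # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

out, _ = lstm(x)                               # (T, N, H)
log_probs = proj(out).log_softmax(dim=-1)      # (T, N, C), as required by CTCLoss
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                # gradients for training by CTC
```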


Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution or by policy gradient methods, especially when there is no "teacher" (that is, no training labels).


Applications

Applications of LSTM include:
* Robot control
* Time series prediction
* Speech recognition
* Rhythm learning
* Hydrological rainfall–runoff modeling
* Music composition
* Grammar learning
* Handwriting recognition (A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp. 545–552, Vancouver, MIT Press, 2009.)
* Human action recognition
* Sign language translation
* Protein homology detection
* Predicting subcellular localization of proteins
* Time series anomaly detection
* Several prediction tasks in the area of business process management
* Prediction in medical care pathways
* Semantic parsing
* Object co-segmentation
* Airport passenger management
* Short-term traffic forecast
* Drug design
* Market prediction
* Activity classification in video

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%.

2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate, which used LSTMs to reduce translation errors by 60%. Apple announced at its Worldwide Developers Conference that it would start using LSTM for QuickType on the iPhone and for Siri. Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for its text-to-speech technology.

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks. Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long short-term memory".

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game Dota 2, and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game StarCraft II.


History


Development

Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989), cited by the LSTM paper. Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method. His supervisor, Jürgen Schmidhuber, considered the thesis highly significant. An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber, then published in the NIPS 1996 conference. The most commonly used reference point for LSTM was published in 1997 in the journal Neural Computation. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells and input and output gates.

(Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999) introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state. This is the most commonly used version of LSTM nowadays. (Gers, Schmidhuber, and Cummins, 2000) added peephole connections. Additionally, the output activation function was omitted.


Development of variants

(Graves, Fernandez, Gomez, and Schmidhuber, 2006) introduced a new error function for LSTM: Connectionist Temporal Classification (CTC), for simultaneous alignment and recognition of sequences. (Graves, Schmidhuber, 2005) published LSTM with full backpropagation through time and bidirectional LSTM.

(Kyunghyun Cho et al., 2014) published a simplified variant of the forget gate LSTM called the gated recurrent unit (GRU). (Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network.

A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Maximilian et al., 2024). One of the two blocks (mLSTM) of the architecture is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking.


Applications

2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as hidden Markov models. Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).

2004: First successful application of LSTM to speech, by Alex Graves et al.

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher. Mayer et al. trained LSTM to control robots.

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher. Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.

2009: Justin Bayer et al. introduced neural architecture search for LSTM.

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest. This was the first time an RNN won international competitions.

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.

2017: Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference. Their time-aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.


See also

* Attention (machine learning)
* Deep learning
* Differentiable neural computer
* Gated recurrent unit
* Highway network
* Long-term potentiation
* Prefrontal cortex basal ganglia working memory
* Recurrent neural network
* Seq2seq
* Transformer (machine learning model)
* Time series

