Recurrent neural network
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences. The fundamental building block of RNNs is the ''recurrent unit'', which maintains a ''hidden state'', a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation. However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative. In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.


History


Before modern

One origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" using Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. During the 1940s, multiple people proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943), which proposed the McCulloch–Pitts neuron model, considered networks that contain cycles; the current activity of such networks can be affected by activity indefinitely far in the past. They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia. Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences. An extensive review of recurrent neural network models in neuroscience is available in the literature.
Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule. Later, in ''Principles of Neurodynamics'' (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, made theoretical and experimental studies of Hebbian learning in these networks, and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network. Similar networks were published by Kaoru Nakano in 1971, Shun'ichi Amari in 1972, and William A. Little in 1974; the last was acknowledged by Hopfield in his 1982 paper.

Another origin of RNN was statistical mechanics. The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time. The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In his 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.


Modern

Modern RNNs are mainly based on two architectures: LSTM and BRNN. At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets". Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to the study of cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time; page 150 ff of the cited work demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN.

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains. LSTM became the default choice for RNN architecture. Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions (Schuster, Mike, and Kuldip K. Paliwal. "Bidirectional recurrent neural networks." IEEE Transactions on Signal Processing 45.11 (1997): 2673–2681). These two are often combined, giving the bidirectional LSTM architecture.

Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications. They also improved large-vocabulary speech recognition and text-to-speech synthesis, and were used in Google voice search and dictation on Android devices. They broke records in machine translation, language modeling, and multilingual language processing. Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.

The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014 (first version posted to arXiv on 10 Sep 2014). A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.


Configurations

An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.


Standard

RNNs come in many variants. Abstractly speaking, an RNN is a function f_\theta of type (x_t, h_t) \mapsto (y_t, h_{t+1}), where
* x_t: input vector;
* h_t: hidden vector;
* y_t: output vector;
* \theta: neural network parameters.
In words, it is a neural network that maps an input x_t into an output y_t, with the hidden vector h_t playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing. Typical illustrations of an RNN may be misleading, because practical neural network topologies are frequently organized in "layers" and such drawings give that appearance. However, what appear to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers.
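A minimal sketch of this abstraction in NumPy (the tanh nonlinearity, the linear readout, and all names here are illustrative assumptions, not a reference implementation):

```python
import numpy as np

def rnn_step(x_t, h_t, W_xh, W_hh, W_hy, b_h, b_y):
    """One application of f_theta: (x_t, h_t) -> (y_t, h_{t+1})."""
    h_next = np.tanh(W_xh @ x_t + W_hh @ h_t + b_h)  # update the hidden "memory"
    y_t = W_hy @ h_next + b_y                        # emit an output for this step
    return y_t, h_next

# Processing a sequence: the same hidden state threads through the loop.
hidden, inputs, outputs = 16, 8, 4
rng = np.random.default_rng(0)
params = (rng.normal(size=(hidden, inputs)), rng.normal(size=(hidden, hidden)),
          rng.normal(size=(outputs, hidden)), np.zeros(hidden), np.zeros(outputs))
h = np.zeros(hidden)
for x in rng.normal(size=(5, inputs)):   # a toy sequence of 5 input vectors
    y, h = rnn_step(x, h, *params)
```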


Stacked RNN

A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
# Layer 1 has hidden vector h_{1,t}, parameters \theta_1, and maps f_{\theta_1} : (x_{0,t}, h_{1,t}) \mapsto (x_{1,t}, h_{1,t+1}).
# Layer 2 has hidden vector h_{2,t}, parameters \theta_2, and maps f_{\theta_2} : (x_{1,t}, h_{2,t}) \mapsto (x_{2,t}, h_{2,t+1}).
# ...
# Layer n has hidden vector h_{n,t}, parameters \theta_n, and maps f_{\theta_n} : (x_{n-1,t}, h_{n,t}) \mapsto (x_{n,t}, h_{n,t+1}).
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of a stacked RNN.
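Continuing the sketch above (reusing the assumed rnn_step helper), one time step of a stacked RNN might look like:

```python
def stacked_step(x_t, states, layers):
    """One time step of a stacked RNN: each layer's output is the next layer's input.

    `layers` is a list of per-layer parameter tuples for the rnn_step sketch above;
    `states` holds one hidden vector per layer.
    """
    new_states = []
    for params, h in zip(layers, states):
        x_t, h_next = rnn_step(x_t, h, *params)  # layer output becomes input above
        new_states.append(h_next)
    return x_t, new_states  # top layer's output, updated per-layer states
```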


Bidirectional

A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
* The forward RNN processes in one direction: f_{\theta}(x_0, h_0) = (y_0, h_1), f_{\theta}(x_1, h_1) = (y_1, h_2), \dots
* The backward RNN processes in the opposite direction: f'_{\theta'}(x_N, h'_N) = (y'_N, h'_{N-1}), f'_{\theta'}(x_{N-1}, h'_{N-1}) = (y'_{N-1}, h'_{N-2}), \dots
The two output sequences are then concatenated to give the total output: ((y_0, y'_0), (y_1, y'_1), \dots, (y_N, y'_N)). A bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The ELMo model (2018) is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.
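A sketch of this configuration, again reusing the assumed rnn_step helper; the two passes are independent and their per-step outputs are concatenated:

```python
def birnn(xs, fwd_params, bwd_params, hidden):
    """Run forward and backward passes over `xs` and concatenate per-step outputs."""
    h, fwd = np.zeros(hidden), []
    for x in xs:                        # left-to-right pass
        y, h = rnn_step(x, h, *fwd_params)
        fwd.append(y)
    h, bwd = np.zeros(hidden), []
    for x in reversed(xs):              # right-to-left pass
        y, h = rnn_step(x, h, *bwd_params)
        bwd.append(y)
    bwd.reverse()                       # realign with the forward direction
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```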


Encoder-decoder

Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state-of-the-art neural machine translators during the 2014–2017 period, and was an instrumental step towards the development of transformers.
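As a sketch of the simplest such coupling (no attention; rnn_step and all names are the illustrative helpers from above, and the decoder is assumed to consume its own previous output):

```python
def encode_decode(xs, enc_params, dec_params, steps, start_token, hidden):
    """Encoder-decoder sketch: the encoder's final hidden state seeds the decoder."""
    h = np.zeros(hidden)
    for x in xs:                        # encoder: ingest the whole input sequence
        _, h = rnn_step(x, h, *enc_params)
    y, outputs = start_token, []
    for _ in range(steps):              # decoder: generate autoregressively
        y, h = rnn_step(y, h, *dec_params)
        outputs.append(y)
    return outputs
```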


PixelRNN

An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions. For example, the row-by-row direction processes an n \times n grid of vectors x_{i,j} in the following order: x_{1,1}, x_{1,2}, \dots, x_{1,n}, x_{2,1}, x_{2,2}, \dots, x_{2,n}, \dots, x_{n,n}. The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x_{i,j} depending on its hidden state and cell state from above and from the left: h_{i-1,j}, c_{i-1,j} and h_{i,j-1}, c_{i,j-1}. The other processes it from the top-right corner to the bottom-left.


Architectures


Fully recurrent

Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.


Hopfield

The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.


Elman networks and Jordan networks

An Elman network is a three-layer network (arranged horizontally as ''x'', ''y'', and ''z'' in the illustration) with the addition of a set of context units (''u'' in the illustration). The middle (hidden) layer is connected to these context units with a fixed weight of one (Cruse, Holk, ''Neural Networks as Cybernetic Systems'', 2nd and revised edition). At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves. Elman and Jordan networks are also known as "simple recurrent networks" (SRN).

;Elman network
: \begin{align} h_t &= \sigma_h(W_h x_t + U_h h_{t-1} + b_h) \\ y_t &= \sigma_y(W_y h_t + b_y) \end{align}
;Jordan network
: \begin{align} h_t &= \sigma_h(W_h x_t + U_h s_{t-1} + b_h) \\ y_t &= \sigma_y(W_y h_t + b_y) \\ s_t &= \sigma_s(W_{s,s} s_{t-1} + W_{s,y} y_{t-1} + b_s) \end{align}

Variables and functions
* x_t: input vector
* h_t: hidden layer vector
* s_t: "state" vector
* y_t: output vector
* W, U and b: parameter matrices and vectors
* \sigma_h, \sigma_y, \sigma_s: activation functions
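A sketch of the Elman equations above in NumPy, assuming \sigma_h is tanh and \sigma_y is the logistic sigmoid (both choices are illustrative):

```python
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def elman_step(x_t, h_prev, W_h, U_h, b_h, W_y, b_y):
    """One Elman step: the context units carry h_{t-1} back with fixed weight one."""
    h_t = np.tanh(W_h @ x_t + U_h @ h_prev + b_h)   # h_t = sigma_h(W_h x_t + U_h h_{t-1} + b_h)
    y_t = sigmoid(W_y @ h_t + b_y)                  # y_t = sigma_y(W_y h_t + b_y)
    return y_t, h_t
```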


Long short-term memory

Long short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low- and high-frequency components. Many applications use stacks of LSTMs, for which the term "deep LSTM" is used. LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts.
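A sketch of one LSTM step with the standard input/forget/output gating (gate order, parameter stacking, and names are assumptions; `sigmoid` is the helper defined in the Elman sketch above):

```python
def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step with input (i), forget (f), output (o) gates and cell state c.

    W, U: stacked gate weights of shape (4*hidden, inputs) and (4*hidden, hidden);
    b: stacked bias of shape (4*hidden,).
    """
    z = W @ x_t + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_t = f * c_prev + i * np.tanh(g)   # gated cell update keeps the error signal alive
    h_t = o * np.tanh(c_t)
    return h_t, c_t
```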


Gated recurrent unit

Gated recurrent units (GRUs), introduced in 2014, were designed as a simplification of LSTM. They are used in the full form and in several further simplified variants. They have fewer parameters than LSTM, as they lack an output gate. Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. There does not appear to be a particular performance difference between LSTM and GRU.
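A sketch of one GRU step under one common convention (some formulations swap the roles of z and 1 - z); note there is no output gate or separate cell state, which is where the parameter savings come from. `sigmoid` is the helper from the Elman sketch above:

```python
def gru_step(x_t, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step with update gate z and reset gate r."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev + bz)             # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev + br)             # reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev) + bh) # candidate state
    return (1 - z) * h_prev + z * h_tilde                # interpolate old and new state
```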


Bidirectional associative memory

Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications. A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.


Echo state

Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine.


Recursive

A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.


Neural Turing machines

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology. Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context-free grammars (CFGs). Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.


Training


Teacher forcing

An RNN can be trained into a conditionally generative model of sequences, aka autoregression. Concretely, let us consider the problem of machine translation, that is, given a sequence (x_1, x_2, \dots, x_n) of English words, the model is to produce a sequence (y_1, y_2, \dots, y_m) of French words. It is to be solved by a seq2seq model. Now, during training, the encoder half of the model would first ingest (x_1, x_2, \dots, x_n), then the decoder half would start generating a sequence (\hat y_1, \hat y_2, \dots, \hat y_m). The problem is that if the model makes a mistake early on, say at \hat y_2, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift \hat y_2 towards y_2, but not the others. Teacher forcing makes the decoder use the correct output sequence for generating the next entry in the sequence. So, for example, it would see (y_1, \dots, y_{k-1}) in order to generate \hat y_k.
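A sketch of the difference in the decoding loop (`decoder_step` is a hypothetical callable mapping a previous token and hidden state to a prediction and new state; all names are illustrative):

```python
def decode_teacher_forced(decoder_step, h, targets, start_token):
    """At each step, feed the ground-truth previous token, not the model's own guess."""
    prev, outputs = start_token, []
    for y_true in targets:
        y_hat, h = decoder_step(prev, h)   # predict y_hat_k conditioned on y_{k-1}
        outputs.append(y_hat)
        prev = y_true                      # teacher forcing: condition on the gold token
    return outputs
```

Without the `prev = y_true` line the decoder would condition on its own prediction `y_hat`, and an early mistake would contaminate every later step, which is exactly the inefficiency described above.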


Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable.

The standard method for training RNNs by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space. In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself, such that the update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.

For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time.

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problem.

The online algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.

One approach to gradient information computation in RNNs with arbitrary architectures is based on signal-flow graphs diagrammatic derivation. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.


Connectionist temporal classification

The connectionist temporal classification (CTC) is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.


Global optimization methods

Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.

The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:
* Each weight encoded in the chromosome is assigned to the respective weight link of the network.
* The training set is presented to the network which propagates the input signals forward.
* The mean-squared error is returned to the fitness function.
* This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is:
* When the neural network has learned a certain percentage of the training data, or
* When the minimum value of the mean-squared error is satisfied, or
* When the maximum number of training generations has been reached.
The fitness function evaluates the stopping criterion as it receives the mean-squared-error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error. Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.
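A toy sketch of the scheme this section describes, with one gene per weight and fitness assumed to be the reciprocal of the mean-squared error; population size, mutation scale, and the selection rule are all illustrative choices:

```python
def evolve_weights(fitness, dim, pop_size=50, generations=200, mutation_scale=0.1):
    """Toy genetic search over a flat weight vector (one gene per weight).

    `fitness(w)` is assumed to set the network weights from w, run the training
    set forward, and return e.g. 1 / mean_squared_error.
    """
    rng = np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep the fittest half
        children = parents + rng.normal(scale=mutation_scale,
                                        size=parents.shape)    # mutate to explore
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(w) for w in pop])]
```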


Other architectures


Independently RNN (IndRNN)

The independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections.
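The published IndRNN update is h_t = \sigma(W x_t + u \odot h_{t-1} + b), with an elementwise recurrent weight vector u rather than a full recurrent matrix. A sketch (ReLU chosen per the non-saturating remark above; names are illustrative):

```python
def indrnn_step(x_t, h_prev, W, u, b):
    """IndRNN: each neuron sees only its own past state (elementwise u * h),
    so there is no cross-neuron recurrent mixing within a layer."""
    return np.maximum(0.0, W @ x_t + u * h_prev + b)
```

Because each hidden unit's recurrence is a single scalar, the per-unit gradient through time is a product of that scalar and the activation derivatives, which is what makes the regulation against vanishing and exploding gradients tractable.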


Neural history compressor

The neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level. The system effectively minimizes the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, the automatizer can be forced in the next learning phase to predict or imitate, through additional units, the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.

A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.


Second-order RNNs

Second-order RNNs use higher-order weights w_{ijk} instead of the standard w_{ij} weights, and states can be a product. This allows a direct mapping to a finite-state machine both in training, stability, and representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability.


Hierarchical recurrent neural network

Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms. Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models. Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods.


Recurrent multilayer perceptron network

Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.


Multiple timescales model

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization, depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book ''On Intelligence''. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.


Memristive networks

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices. The memristors (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity, and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor-capacitor network, of more interesting non-linear behavior. From this point of view, engineering analog memristive networks accounts for a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology. The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation.


Continuous-time

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time. For a neuron i in the network with activation y_i, the rate of change of activation is given by:

:\tau_i\dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji}\,\sigma(y_j - \Theta_j) + I_i(t)

Where:
* \tau_i : Time constant of postsynaptic node
* y_i : Activation of postsynaptic node
* \dot{y}_i : Rate of change of activation of postsynaptic node
* w_{ji} : Weight of connection from presynaptic to postsynaptic node
* \sigma(x) : Sigmoid of x, e.g. \sigma(x) = 1/(1+e^{-x})
* y_j : Activation of presynaptic node
* \Theta_j : Bias of presynaptic node
* I_i(t) : Input (if any) to node

CTRNNs have been applied to evolutionary robotics, where they have been used to address vision, co-operation, and minimal cognitive behaviour. Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have been transformed into equivalent difference equations.
This transformation can be thought of as occurring after the postsynaptic node activation functions y_i(t) have been low-pass filtered but prior to sampling.
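To make this correspondence concrete, here is a minimal sketch of a forward-Euler discretization of the CTRNN equation above, turning the differential equation into the difference equation that is actually iterated in code. The network size, weights, time constants, and inputs are illustrative assumptions, not taken from any particular model:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 3                               # number of neurons (illustrative choice)
    tau = np.ones(n)                    # time constants tau_i
    W = rng.standard_normal((n, n))     # W[i, j] plays the role of w_ji
    theta = np.zeros(n)                 # biases Theta_j
    dt = 0.01                           # Euler step size

    def sigma(x):
        return 1.0 / (1.0 + np.exp(-x))  # sigmoid, as defined above

    def step(y, I):
        # Forward-Euler update of  tau_i dy_i/dt = -y_i + sum_j w_ji sigma(y_j - Theta_j) + I_i(t)
        dydt = (-y + W @ sigma(y - theta) + I) / tau
        return y + dt * dydt

    y = np.zeros(n)
    for t in range(1000):
        y = step(y, I=np.zeros(n))       # zero external input, for illustration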
Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters, and also as a nonlinear autoregressive exogenous model (NARX). An RNN has an infinite impulse response, whereas convolutional neural networks have a finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.

The effect of memory-based learning for the recognition of sequences can also be implemented by a more biologically based model which uses the silencing mechanism exhibited in neurons with relatively high-frequency spiking activity. Additional stored states, with storage under direct control of the network, can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if it incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of
long short-term memory networks (LSTMs) and gated recurrent units (GRUs). This is also called a feedback neural network (FNN).
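As an illustration of gated memory, the following sketch implements a single GRU-style cell in NumPy. The class name, weight shapes, and initialization are illustrative assumptions rather than any library's API; real implementations fuse and optimize these operations. The gates z and r control how much of the stored state is overwritten or read at each step:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class GRUCell:
        """Minimal GRU-style gated cell; shapes and init are illustrative."""

        def __init__(self, input_size, hidden_size, seed=0):
            rng = np.random.default_rng(seed)
            def mat(rows, cols):
                return 0.1 * rng.standard_normal((rows, cols))
            # Parameters for the update gate (z), reset gate (r), and candidate state.
            self.Wz, self.Uz = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
            self.Wr, self.Ur = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
            self.Wh, self.Uh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
            self.bz = np.zeros(hidden_size)
            self.br = np.zeros(hidden_size)
            self.bh = np.zeros(hidden_size)

        def step(self, x, h):
            z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)  # how much state to overwrite
            r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)  # how much old state to read
            h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
            return (1.0 - z) * h + z * h_cand                 # gated mix of old and candidate state

    cell = GRUCell(input_size=4, hidden_size=8)
    h = np.zeros(8)
    for x in np.random.default_rng(1).standard_normal((10, 4)):  # a length-10 input sequence
        h = cell.step(x, h)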


Libraries

Modern libraries provide runtime-optimized implementations of the above functionality, or allow the slow loop to be sped up by just-in-time compilation (a usage sketch follows the list below).
* Apache Singa
* Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
* Chainer: Fully in Python, production support for CPU, GPU, distributed training.
* Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark.
* Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia.
* Keras: High-level API, providing a wrapper to many other deep learning libraries.
* Microsoft Cognitive Toolkit
* MXNet: an open-source deep learning framework used to train and deploy deep neural networks.
* PyTorch: Tensors and dynamic neural networks in Python with GPU acceleration.
* TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, and Google's proprietary TPU, as well as mobile devices.
* Theano: A deep-learning library for Python with an API largely compatible with the NumPy library.
* Torch: A scientific computing framework with support for machine learning algorithms, written in C and Lua.
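As a usage sketch, the PyTorch snippet below builds a small LSTM and runs a batch of sequences through it; the layer count and tensor sizes are arbitrary illustrative choices:

    import torch
    import torch.nn as nn

    # A 2-layer LSTM: 10 input features per time step, 20 hidden units.
    lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)

    seq_len, batch, features = 5, 3, 10
    x = torch.randn(seq_len, batch, features)    # (time, batch, features) by default

    h0 = torch.zeros(2, batch, 20)               # initial hidden state, one per layer
    c0 = torch.zeros(2, batch, 20)               # initial cell state, one per layer

    output, (hn, cn) = lstm(x, (h0, c0))
    print(output.shape)                          # torch.Size([5, 3, 20]): hidden state at every step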


Applications

Applications of recurrent neural networks include:
* Machine translation
* Robot control
* Time series prediction
* Speech recognition
* Speech synthesis
* Brain–computer interfaces
* Time series anomaly detection
* Text-to-video models
* Rhythm learning
* Music composition
* Grammar learning
* Handwriting recognition
* Human action recognition
* Protein homology detection
* Predicting subcellular localization of proteins
* Several prediction tasks in the area of business process management
* Prediction in medical care pathways
* Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network (FRNN) code)




Further reading

* Recurrent Neural Networks: a list of RNN papers by Jürgen Schmidhuber's group at the Dalle Molle Institute for Artificial Intelligence Research.