Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.

Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed). The search for this balance is known as the exploration–exploitation dilemma.

The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large MDPs where exact methods become infeasible.


Principles

Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called ''approximate dynamic programming'' or ''neuro-dynamic programming''. The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment).

Basic reinforcement learning is modeled as a Markov decision process:
* a set of environment and agent states (the state space), \mathcal{S};
* a set of actions (the action space), \mathcal{A}, of the agent;
* P_a(s,s')=\Pr(S_{t+1}=s'\mid S_t=s, A_t=a), the transition probability (at time t) from state s to state s' under action a;
* R_a(s,s'), the immediate reward after transition from s to s' under action a.

The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or another user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and to interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.

A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S_t and reward R_t. It then chooses an action A_t from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S_{t+1} and the reward R_{t+1} associated with the ''transition'' (S_t,A_t,S_{t+1}) is determined. The goal of a reinforcement learning agent is to learn a ''policy''
:\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0,1], \qquad \pi(s,a) = \Pr(A_t = a\mid S_t =s),
that maximizes the expected cumulative reward.

Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have ''full observability''. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have ''partial observability'', and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and a state transition attempts to reduce the value by 4, the transition will not be allowed.

When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about the long-term consequences of its actions (i.e., maximize future rewards), even though the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go (AlphaGo), and autonomous driving systems.

Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations:
* a model of the environment is known, but an analytic solution is not available;
* only a simulation model of the environment is given (the subject of simulation-based optimization);
* the only way to collect information about the environment is to interact with it.

The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.
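The discrete-time interaction loop described above can be illustrated with a short sketch. The environment class, state encoding and reward values below are hypothetical placeholders rather than part of any particular library; the sketch only shows an agent observing S_t, sampling A_t from a policy, and receiving R_{t+1} and S_{t+1}.

```python
import random

class GridEnvironment:
    """A toy 1-D grid: states 0..4, episode ends at state 4 (hypothetical example)."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
        reward = 1.0 if self.state == 4 else 0.0   # reward only at the goal
        done = self.state == 4
        return self.state, reward, done

def random_policy(state, actions=(0, 1)):
    """A stochastic policy pi(a|s); here it ignores the state."""
    return random.choice(actions)

env = GridEnvironment()
state, total_reward, done = env.reset(), 0.0, False
while not done:                                 # one episode of agent-environment interaction
    action = random_policy(state)               # A_t ~ pi(.|S_t)
    state, reward, done = env.step(action)      # environment returns S_{t+1}, R_{t+1}
    total_reward += reward
print("return of the episode:", total_reward)
```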


Exploration

The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997). Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical. One such method is \varepsilon-greedy, where 0 < \varepsilon < 1 is a parameter controlling the amount of exploration vs. exploitation. With probability 1-\varepsilon, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability \varepsilon, exploration is chosen, and the action is chosen uniformly at random. \varepsilon is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.
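As a concrete illustration, the following sketch implements the \varepsilon-greedy rule described above over a tabular action-value estimate; the value table and its contents are hypothetical.

```python
import random

def epsilon_greedy(q_values, state, actions, epsilon=0.1):
    """Pick an action: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(actions)                        # exploration
    best = max(q_values.get((state, a), 0.0) for a in actions)
    best_actions = [a for a in actions
                    if q_values.get((state, a), 0.0) == best]
    return random.choice(best_actions)                       # exploitation, ties broken at random

# hypothetical value estimates for state "s0"
Q = {("s0", "left"): 0.2, ("s0", "right"): 0.5}
action = epsilon_greedy(Q, "s0", ["left", "right"], epsilon=0.1)
```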


Algorithms for control learning

Even if the issue of exploration is disregarded and even if the state is observable (assumed hereafter), the problem remains of using past experience to find out which actions lead to higher cumulative rewards.


Criterion of optimality


Policy

The agent's action selection is modeled as a map called the ''policy'':
:\pi: \mathcal{A} \times \mathcal{S} \rightarrow [0,1]
:\pi(a,s) = \Pr(A_t = a \mid S_t = s)
The policy map gives the probability of taking action a when in state s. There are also deterministic policies \pi, for which \pi(s) denotes the action that should be played in state s.


State-value function

The state-value function V_\pi(s) is defined as the ''expected discounted return'' starting in state s, i.e. S_0 = s, and successively following policy \pi. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.
:V_\pi(s) = \operatorname{E}[G \mid S_0 = s] = \operatorname{E}\left[\sum_{t=0}^\infty \gamma^t R_{t+1} \mid S_0 = s\right],
where the random variable G denotes the ''discounted return'', defined as the sum of future discounted rewards:
:G=\sum_{t=0}^\infty \gamma^t R_{t+1}=R_1 + \gamma R_2 + \gamma^2 R_3 + \dots,
where R_{t+1} is the reward for transitioning from state S_t to S_{t+1}, and 0 \le \gamma<1 is the discount rate. Since \gamma is less than 1, rewards in the distant future are weighted less than rewards in the immediate future.

The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called ''stationary'' policies. A policy is ''stationary'' if the action distribution returned by it depends only on the last state visited (from the agent's observation history). The search can be further restricted to ''deterministic'' stationary policies. A ''deterministic stationary'' policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.
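A short numerical sketch of the discounted return G for a finite reward sequence (the infinite sum truncated at episode termination); the reward list and discount rate are illustrative.

```python
def discounted_return(rewards, gamma=0.9):
    """G = R_1 + gamma*R_2 + gamma^2*R_3 + ... for a finite episode."""
    g = 0.0
    for t, r in enumerate(rewards):
        g += (gamma ** t) * r
    return g

print(discounted_return([0.0, 0.0, 1.0], gamma=0.9))  # 0.81: a reward two steps away is discounted twice
```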


Brute force

The brute force approach entails two steps:
* For each possible policy, sample returns while following it;
* Choose the policy with the largest expected discounted return.
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.


Value function

Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns \operatorname{E}[G] for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: a policy is optimal if it achieves the best expected discounted return from ''any'' initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies.

To define optimality in a formal manner, define the state-value of a policy \pi by
:V^\pi(s) = \operatorname{E}[G \mid s, \pi],
where G stands for the discounted return associated with following \pi from the initial state s. Defining V^*(s) as the maximum possible state-value of V^\pi(s), where \pi is allowed to change,
:V^*(s) = \max_\pi V^\pi(s).
A policy that achieves these optimal state-values in each state is called ''optimal''. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V^*(s) = \max_\pi \operatorname{E}[G \mid s, \pi], where s is a state randomly sampled from the distribution \mu of initial states (so \mu(s) = \Pr(S_0 = s)).

Although state-values suffice to define optimality, it is useful to define action-values. Given a state s, an action a and a policy \pi, the action-value of the pair (s,a) under \pi is defined by
:Q^\pi(s,a) = \operatorname{E}[G \mid s, a, \pi],
where G now stands for the random discounted return associated with first taking action a in state s and following \pi thereafter.

The theory of Markov decision processes states that if \pi^* is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q^{\pi^*}(s,\cdot) with the highest action-value at each state s. The ''action-value function'' of such an optimal policy (Q^{\pi^*}) is called the ''optimal action-value function'' and is commonly denoted by Q^*. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q_k (k=0,1,2,\ldots) that converge to Q^*. Computing these functions involves computing expectations over the whole state space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.
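Assuming full knowledge of a small finite MDP (the two-state example below is hypothetical), value iteration computes a sequence Q_k converging to Q^*, as described above. This is a minimal sketch, not an optimized implementation.

```python
# Hypothetical MDP: transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)], "go": [(1.0, "s0", 0.0)]},
}
gamma = 0.9

def value_iteration(transitions, gamma, sweeps=200):
    Q = {(s, a): 0.0 for s in transitions for a in transitions[s]}
    for _ in range(sweeps):
        new_Q = {}
        for s in transitions:
            for a in transitions[s]:
                # Bellman optimality backup: expected reward plus discounted value of the best next action
                new_Q[(s, a)] = sum(
                    p * (r + gamma * max(Q[(s2, a2)] for a2 in transitions[s2]))
                    for p, s2, r in transitions[s][a]
                )
        Q = new_Q
    return Q

Q_star = value_iteration(transitions, gamma)
greedy_policy = {s: max(transitions[s], key=lambda a: Q_star[(s, a)]) for s in transitions}
```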


Monte Carlo methods

Monte Carlo methods are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or simulated experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of transition probabilities, which is necessary for dynamic programming methods.

Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving random sampling; however, in this context, it specifically refers to methods that compute averages from ''complete'' returns, rather than ''partial'' returns.

These methods function similarly to the bandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem non-stationary. To address this non-stationarity, Monte Carlo methods use the framework of generalized policy iteration (GPI). While dynamic programming computes value functions using full knowledge of the Markov decision process (MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve optimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience.
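A minimal sketch of first-visit Monte Carlo prediction for a fixed policy, assuming episodes are available as lists of (state, reward) pairs; the episode data shown are hypothetical.

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=0.9):
    """Estimate V(s) by averaging complete returns that follow the first visit to s."""
    returns = defaultdict(list)
    for episode in episodes:          # episode: [(state, reward received on leaving that state), ...]
        first_visit = {}
        for t, (s, _) in enumerate(episode):
            if s not in first_visit:
                first_visit[s] = t
        # compute returns backwards through the episode
        G = [0.0] * (len(episode) + 1)
        for t in reversed(range(len(episode))):
            G[t] = episode[t][1] + gamma * G[t + 1]
        for s, t in first_visit.items():
            returns[s].append(G[t])
    return {s: sum(v) / len(v) for s, v in returns.items()}

# two hypothetical episodes generated by some policy
episodes = [[("A", 0.0), ("B", 1.0)], [("A", 0.0), ("A", 0.0), ("B", 1.0)]]
print(first_visit_mc(episodes))
```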


Temporal difference methods

The problems with the Monte Carlo procedure sketched above are that (1) it may spend too much time evaluating a suboptimal policy, (2) it uses samples inefficiently, in that a long trajectory improves the estimate only of the single state-action pair that started it, (3) when the returns along the trajectories have high variance, convergence is slow, (4) it works in episodic problems only, and (5) it works in small, finite MDPs only.

The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic, as it might prevent convergence. Most current algorithms do this, giving rise to the class of ''generalized policy iteration'' algorithms. Many ''actor-critic'' methods belong to this category.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods, which are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.

Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called \lambda parameter (0\le \lambda\le 1) that can continuously interpolate between Monte Carlo methods, which do not rely on the Bellman equations, and the basic TD methods, which rely entirely on the Bellman equations. This can be effective in alleviating this issue.
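A sketch of the incremental tabular TD(0) update, which adjusts the value estimate after every transition using the recursive Bellman relationship; the transition stream below is hypothetical.

```python
from collections import defaultdict

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Incremental TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')."""
    td_target = r + gamma * V[s_next]
    td_error = td_target - V[s]
    V[s] += alpha * td_error
    return V

V = defaultdict(float)
# hypothetical stream of transitions (S_t, R_{t+1}, S_{t+1}) from interaction with an environment
for s, r, s_next in [("A", 0.0, "B"), ("B", 1.0, "T"), ("A", 0.0, "B"), ("B", 1.0, "T")]:
    td0_update(V, s, r, s_next)
print(dict(V))
```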


Function approximation methods

In order to address the fifth issue, ''function approximation methods'' are used. ''Linear function approximation'' starts with a mapping \phi that assigns a finite-dimensional vector to each state-action pair. Then, the action value of a state-action pair (s,a) is obtained by linearly combining the components of \phi(s,a) with some ''weights'' \theta:
:Q(s,a) = \sum_{i=1}^d \theta_i \phi_i(s,a).
The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.

Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.
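A sketch of linear function approximation combined with a semi-gradient Q-learning update, matching the parametrization Q(s,a) = \sum_i \theta_i \phi_i(s,a) above; the feature map and transition shown are hypothetical.

```python
import numpy as np

def q_value(theta, phi, s, a):
    """Linear action-value estimate Q(s,a) = theta . phi(s,a)."""
    return float(np.dot(theta, phi(s, a)))

def semi_gradient_q_learning_step(theta, phi, actions, s, a, r, s_next,
                                  alpha=0.1, gamma=0.9):
    """Adjust the weights theta instead of individual table entries."""
    target = r + gamma * max(q_value(theta, phi, s_next, a2) for a2 in actions)
    td_error = target - q_value(theta, phi, s, a)
    return theta + alpha * td_error * phi(s, a)   # gradient of Q w.r.t. theta is phi(s,a)

# hypothetical 3-dimensional feature map over (state, action) pairs
def phi(s, a):
    return np.array([1.0, float(s), float(s) * (1.0 if a == "right" else -1.0)])

theta = np.zeros(3)
theta = semi_gradient_q_learning_step(theta, phi, ["left", "right"],
                                      s=1, a="right", r=1.0, s_next=2)
```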


Direct policy search

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.

Gradient-based methods (''policy gradient methods'') start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector \theta, let \pi_\theta denote the policy associated to \theta. Defining the performance function by \rho(\theta) = \rho^{\pi_\theta}, under mild conditions this function will be differentiable as a function of the parameter vector \theta. If the gradient of \rho were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature).

A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search and methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, ''actor–critic methods'' have been proposed and have performed well on various problems. Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search).
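A minimal sketch of Williams's REINFORCE (likelihood ratio) estimator for a softmax policy over two actions in a single-state, bandit-like setting, chosen to keep the gradient computation explicit; the environment stub and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(theta):
    """pi_theta over two actions, parameterized by action preferences theta."""
    e = np.exp(theta - np.max(theta))
    return e / e.sum()

def sample_return(action):
    # hypothetical stochastic environment: action 1 pays more on average
    return rng.normal(loc=1.0 if action == 1 else 0.2, scale=0.5)

theta = np.zeros(2)
alpha = 0.05
for episode in range(2000):
    probs = softmax_policy(theta)
    a = rng.choice(2, p=probs)
    G = sample_return(a)                      # noisy estimate of performance
    grad_log_pi = -probs                      # gradient of log pi(a): indicator(a) - pi
    grad_log_pi[a] += 1.0
    theta += alpha * G * grad_log_pi          # stochastic gradient ascent on rho(theta)
print("learned action probabilities:", softmax_policy(theta))
```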


Model-based algorithms

Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm. Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly.
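A compact sketch in the spirit of the Dyna approach described above: real transitions update a tabular Q-function and are also stored in a learned (here, deterministic last-seen) model, which then generates extra simulated updates. The environment interface and all constants are hypothetical.

```python
import random
from collections import defaultdict

def dyna_q_step(Q, model, s, a, r, s_next, actions,
                alpha=0.1, gamma=0.95, planning_steps=5):
    """One real update plus several model-based ('imagined') updates."""
    def q_update(s, a, r, s_next):
        best_next = max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    q_update(s, a, r, s_next)          # learn from the real transition
    model[(s, a)] = (r, s_next)        # learn the model (last observed outcome)
    for _ in range(planning_steps):    # replay modelled transitions
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        q_update(ps, pa, pr, ps_next)

Q = defaultdict(float)
model = {}
dyna_q_step(Q, model, s="s0", a="go", r=1.0, s_next="s1", actions=["go", "stay"])
```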


Theory

Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).


Research

Research topics include:
* actor-critic architecture
* actor-critic-scenery architecture
* adaptive methods that work with fewer (or no) parameters under a large number of conditions
* bug detection in software projects
* continuous learning
* combinations with logic-based frameworks
* exploration in large Markov decision processes
* entity-based reinforcement learning
* human feedback
* interaction between implicit and explicit learning in skill acquisition
* intrinsic motivation, which differentiates information-seeking, curiosity-type behaviours from task-dependent goal-directed behaviours
* large-scale empirical evaluations
* large (or continuous) action spaces
* modular and hierarchical reinforcement learning
* multiagent/distributed reinforcement learning, a topic of interest with expanding applications
* occupant-centric control
* optimization of computing resources
* partial information (e.g., using predictive state representation)
* reward function based on maximising novel information
* sample-based planning (e.g., based on Monte Carlo tree search)
* securities trading
* transfer learning
* TD learning modeling dopamine-based learning in the brain; dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error
* value-function and policy search methods


Comparison of key algorithms

The following table lists the key algorithms for learning a policy, depending on several criteria:
* The algorithm can be on-policy (it performs policy updates using trajectories sampled via the current policy) or off-policy.
* The action space may be discrete (e.g. the action space could be "going up", "going left", "going right", "going down", "stay") or continuous (e.g. moving the arm with a given angle).
* The state space may be discrete (e.g. the agent could be in a cell in a grid) or continuous (e.g. the agent could be located at a given position in the plane).
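A representative (non-exhaustive) summary of well-known algorithms under these criteria:

Algorithm | Policy | Action space | State space
Monte Carlo control | Either | Discrete | Discrete
Q-learning | Off-policy | Discrete | Discrete
SARSA | On-policy | Discrete | Discrete
DQN | Off-policy | Discrete | Continuous
DDPG | Off-policy | Continuous | Continuous
TD3 | Off-policy | Continuous | Continuous
SAC | Off-policy | Continuous | Continuous
A2C / A3C | On-policy | Discrete or continuous | Continuous
TRPO | On-policy | Discrete or continuous | Continuous
PPO | On-policy | Discrete or continuous | Continuous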


Associative reinforcement learning

Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.


Deep reinforcement learning

This approach extends reinforcement learning by using a deep neural network, without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning, or end-to-end reinforcement learning.


Adversarial deep reinforcement learning

Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies.


Fuzzy reinforcement learning

By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF-THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with Fuzzy Rule Interpolation allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (the most important state-action values).


Inverse reinforcement learning

In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal. One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function.


Multi-objective reinforcement learning

Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments.


Safe reinforcement learning

Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the ''expected'' return, a ''risk-measure'' of the return is optimized, such as the conditional value at risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. However, CVaR optimization in risk-averse RL requires special care, to prevent gradient bias and blindness to success.


Self-reinforcement learning

Self-reinforcement learning (or self-learning) is a learning paradigm which does not use the concept of immediate reward R_a(s,s') after transition from s to s' with action a. It does not use an external reinforcement; it uses only the agent's internal self-reinforcement. The internal self-reinforcement is provided by a mechanism of feelings and emotions. In the learning process, emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward; it includes only the state evaluation.

The self-reinforcement algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:
# In situation s, perform action a.
# Receive a consequence situation s'.
# Compute the state evaluation v(s') of how good it is to be in the consequence situation s'.
# Update the crossbar memory: w'(a,s) = w(a,s) + v(s').

Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation) and only one output (action, or behavior). Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA). The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion.
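A minimal sketch of the crossbar update routine listed above, with a hypothetical state-evaluation function standing in for the emotion/feeling mechanism; it illustrates the update rule only and is not a reconstruction of the original CAA system.

```python
from collections import defaultdict

# crossbar memory w(a, s); initial conditions would come from the "genetic environment"
W = defaultdict(float)

def state_evaluation(s):
    """Hypothetical internal evaluation v(s'): +1 for a desirable situation, -1 for an undesirable one."""
    return {"food": 1.0, "pain": -1.0}.get(s, 0.0)

def self_reinforcement_step(W, s, a, consequence):
    """Perform action a in situation s, observe consequence s', update the crossbar memory."""
    v = state_evaluation(consequence)      # internal self-reinforcement, no external reward
    W[(a, s)] = W[(a, s)] + v              # w'(a,s) = w(a,s) + v(s')
    return W

self_reinforcement_step(W, s="hungry", a="eat", consequence="food")
```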


Reinforcement learning in natural language processing

In recent years, reinforcement learning has become a significant concept in natural language processing (NLP), where tasks are often sequential decision-making problems rather than static classification problems. Reinforcement learning is a framework in which an agent takes actions in an environment to maximize the accumulation of rewards. This framework is well suited to many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than predicting a single correct label.

Early applications of RL in NLP emerged in dialogue systems, where conversation was modeled as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP.

A major breakthrough happened with the introduction of reinforcement learning from human feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, a language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF for improving output responses and ensuring safety. More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models.


Statistical comparison of reinforcement learning algorithms

Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise.
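A sketch of the basic comparison described above: collect episodic returns for two trained agents on held-out test episodes and apply a two-sample t-test; the return arrays below are hypothetical.

```python
import numpy as np
from scipy import stats

# hypothetical episodic returns of two agents on the same set of test episodes
returns_a = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 10.1, 10.4])
returns_b = np.array([9.1, 9.5, 9.8, 9.0, 9.6, 9.3, 9.7, 9.2])

# Welch's two-sample t-test on the episodic returns (i.i.d. episodes assumed)
t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```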


Challenges and limitations

Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios.


Sample inefficiency

RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and long training times. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to mitigate sample inefficiency, but these techniques add complexity and are not always sufficient for real-world applications.


Stability and convergence issues

Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results. This instability is exacerbated in the case of continuous or high-dimensional action spaces, where the learning process becomes more complex and less predictable.


Generalization and transferability

RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major obstacle to applying RL in dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining.


Bias and reward function issues

Designing appropriate reward functions is critical in RL because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors.


See also

* Active learning (machine learning)
* Apprenticeship learning
* Error-driven learning
* Model-free (reinforcement learning)
* Multi-agent reinforcement learning
* Optimal control
* Q-learning
* Reinforcement learning from human feedback
* State–action–reward–state–action (SARSA)
* Temporal difference learning




External links


* Dissecting Reinforcement Learning: a series of blog posts on reinforcement learning with Python code
* A (Long) Peek into Reinforcement Learning