Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP, and they target large MDPs where exact methods become infeasible.
Introduction

Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called ''approximate dynamic programming'', or ''neuro-dynamic programming''. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.
Basic reinforcement learning is modeled as a Markov decision process (MDP):
* a set of environment and agent states, $\mathcal{S}$;
* a set of actions, $\mathcal{A}$, of the agent;
* $P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the probability of transition (at time $t$) from state $s$ to state $s'$ under action $a$;
* $R_a(s, s')$ is the immediate reward after transition from $s$ to $s'$ with action $a$.
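As a concrete illustration, this tuple can be written down directly in code. The following minimal Python sketch encodes $\mathcal{S}$, $\mathcal{A}$, $P_a(s, s')$, and $R_a(s, s')$ for a toy two-state MDP; all names and numbers here are invented for illustration, not taken from the text:

```python
import random

# A toy finite MDP, written down directly as the tuple (S, A, P, R).
# The two-state dynamics below are invented purely for illustration.
STATES = ["s0", "s1"]
ACTIONS = ["stay", "move"]

# P[(s, a)] maps each successor state s' to Pr(s_{t+1} = s' | s_t = s, a_t = a).
P = {
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s0", "move"): {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s0": 0.1, "s1": 0.9},
    ("s1", "move"): {"s0": 0.8, "s1": 0.2},
}

# R[(s, a, s')] is the immediate reward for the transition (s, a, s'):
# here, reaching s1 pays 1, everything else pays 0.
R = {(s, a, s2): (1.0 if s2 == "s1" else 0.0)
     for (s, a), probs in P.items() for s2 in probs}

def step(state, action):
    """Sample a successor state and its reward from the MDP dynamics."""
    probs = P[(state, action)]
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, R[(state, action, next_state)]
```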
The purpose of reinforcement learning is for the agent to learn an optimal, or nearly optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.
A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time $t$, the agent receives the current state $s_t$ and reward $r_t$. It then chooses an action $a_t$ from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state $s_{t+1}$ and the reward $r_{t+1}$ associated with the ''transition'' $(s_t, a_t, s_{t+1})$ is determined. The goal of a reinforcement learning agent is to learn a ''policy''
:$\pi : \mathcal{A} \times \mathcal{S} \to [0, 1], \quad \pi(a, s) = \Pr(a_t = a \mid s_t = s),$
which maximizes the expected cumulative reward.
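The discrete-time loop just described can be sketched in a few lines of Python. The `env` object below is assumed to expose a simplified Gym-style `reset()`/`step(action)` interface returning `(next_state, reward, done)`, and the `agent` methods are placeholders rather than any specific library's API:

```python
def run_episode(env, agent, max_steps=1000):
    """One episode of the agent-environment loop: observe s_t, choose a_t,
    receive (s_{t+1}, r_{t+1}), and let the agent learn from the transition."""
    state = env.reset()                      # initial state s_0
    total_reward = 0.0
    for t in range(max_steps):
        action = agent.select_action(state)  # a_t drawn from pi(a | s_t)
        next_state, reward, done = env.step(action)
        agent.update(state, action, reward, next_state)  # learning step
        total_reward += reward
        state = next_state
        if done:
            break
    return total_reward
```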
Formulating the problem as an MDP assumes the agent directly observes the current environmental state; in this case the problem is said to have ''full observability''. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have ''partial observability'', and formally the problem must be formulated as a partially observable Markov decision process (POMDP). In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.
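A minimal sketch of such a restriction on the available action set, using the account-balance rule above (representing actions as signed balance changes is an assumption made purely for illustration):

```python
def allowed_actions(balance, actions):
    """Mask actions that would drive the account balance negative.
    Each action is modeled as a signed change to the balance."""
    return [a for a in actions if balance + a >= 0]

# With a balance of 3, an action that reduces the value by 4 is not allowed.
assert -4 not in allowed_actions(3, [-4, -1, 0, 2])
```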
When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of ''regret''. In order to act near optimally, the agent must reason about the long-term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative.
Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers, and Go (AlphaGo).
Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:
* A model of the environment is known, but an analytic solution is not available;
* Only a simulation model of the environment is given (the subject of simulation-based optimization);
* The only way to collect information about the environment is to interact with it.
The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.
Exploration
The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space MDPs in Burnetas and Katehakis (1997).
Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite MDPs is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.
One such method is $\varepsilon$-greedy, where $0 < \varepsilon < 1$ is a parameter controlling the amount of exploration vs. exploitation. With probability $1 - \varepsilon$, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability $\varepsilon$, exploration is chosen, and the action is chosen uniformly at random. $\varepsilon$ is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.
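A minimal sketch of $\varepsilon$-greedy selection, assuming a table `q` of estimated action values for the current state (the names are illustrative):

```python
import random

def epsilon_greedy(q, actions, epsilon=0.1):
    """With probability epsilon explore (uniform random action); otherwise
    exploit the current value estimates, breaking ties uniformly at random."""
    if random.random() < epsilon:
        return random.choice(actions)                 # explore
    best_value = max(q[a] for a in actions)
    best_actions = [a for a in actions if q[a] == best_value]
    return random.choice(best_actions)                # exploit
```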
Algorithms for control learning
Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains of using past experience to find out which actions lead to higher cumulative rewards.
Criterion of optimality
Policy
The agent's action selection is modeled as a map called ''policy'':
:$\pi : \mathcal{A} \times \mathcal{S} \to [0, 1], \quad \pi(a, s) = \Pr(a_t = a \mid s_t = s),$
which gives the probability of taking action $a$ when in state $s$. The brute force approach entails two steps:
* For each possible policy, sample returns while following it
* Choose the policy with the largest expected return
One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.
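Despite these drawbacks, the brute force approach is easy to state in code. The sketch below reuses the simplified `env` interface from the earlier loop and treats a policy as a plain function from states to actions; all names and parameters are illustrative:

```python
def sample_return(env, policy, gamma=0.99, max_steps=1000):
    """One Monte Carlo rollout: follow `policy` and accumulate discounted reward."""
    state, g, discount = env.reset(), 0.0, 1.0
    for _ in range(max_steps):
        state, reward, done = env.step(policy(state))
        g += discount * reward
        discount *= gamma
        if done:
            break
    return g

def brute_force_search(env, candidate_policies, episodes=100):
    """Step 1: sample returns while following each candidate policy.
    Step 2: choose the policy with the largest average sampled return."""
    def estimated_return(pi):
        return sum(sample_return(env, pi) for _ in range(episodes)) / episodes
    return max(candidate_policies, key=estimated_return)
```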
These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.
Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).
These methods rely on the theory of Markov decision processes, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from ''any'' initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies.
To define optimality in a formal manner, define the value of a policy $\pi$ by
:$V^{\pi}(s) = \operatorname{E}[G \mid s_0 = s, \pi],$
where the random variable $G = \sum_{t=0}^{\infty} \gamma^{t} r_{t+1}$ denotes the discounted return, with discount factor $0 \le \gamma < 1$. Defining
:$V^{*}(s) = \max_{\pi} V^{\pi}(s),$
a policy that achieves these optimal values in each state is called ''optimal''. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return.
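For a finite MDP with known dynamics, $V^{\pi}$ can be computed by iterative policy evaluation, the classical dynamic programming backup mentioned earlier. The sketch below assumes the `P` and `R` dictionaries from the toy MDP above and a policy given as a function `policy(a, s)` returning $\pi(a \mid s)$; the discount factor and convergence threshold are illustrative choices:

```python
def policy_evaluation(states, actions, P, R, policy, gamma=0.99, tol=1e-8):
    """Iteratively apply the Bellman expectation backup until V converges:
    V(s) <- sum_a pi(a|s) * sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = sum(policy(a, s)
                    * sum(p * (R[(s, a, s2)] + gamma * V[s2])
                          for s2, p in P[(s, a)].items())
                    for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V
```

For the toy two-state MDP sketched earlier, a uniform random policy is simply `lambda a, s: 0.5`.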