Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.

Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).

The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible.


Introduction

Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called ''approximate dynamic programming'' or ''neuro-dynamic programming''. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.
Basic reinforcement learning is modeled as a Markov decision process (MDP):
* a set of environment and agent states, S;
* a set of actions, A, of the agent;
* P_a(s,s') = \Pr(s_{t+1}=s' \mid s_t=s, a_t=a), the probability of transition (at time t) from state s to state s' under action a;
* R_a(s,s'), the immediate reward after transition from s to s' with action a.

The purpose of reinforcement learning is for the agent to learn an optimal, or nearly optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.

A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time t, the agent receives the current state s_t and reward r_t. It then chooses an action a_t from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state s_{t+1} and the reward r_{t+1} associated with the ''transition'' (s_t, a_t, s_{t+1}) is determined. The goal of a reinforcement learning agent is to learn a ''policy''
:\pi: A \times S \rightarrow [0,1], \qquad \pi(a,s) = \Pr(a_t = a \mid s_t = s),
which maximizes the expected cumulative reward.

Formulating the problem as an MDP assumes the agent directly observes the current environmental state; in this case the problem is said to have ''full observability''. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have ''partial observability'', and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.

When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of ''regret''. In order to act near optimally, the agent must reason about the long-term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off.
It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers, and Go (AlphaGo).

Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:
* A model of the environment is known, but an analytic solution is not available;
* Only a simulation model of the environment is given (the subject of simulation-based optimization);
* The only way to collect information about the environment is to interact with it.

The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.
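
The interaction loop just described can be illustrated with a short sketch. The two-state environment below, its transition rule and its rewards are made-up toy assumptions for illustration only, not a standard benchmark.

```python
import random

# A minimal sketch of the agent-environment loop on a hypothetical toy MDP.
# States: 0 and 1; actions: 0 ("stay") and 1 ("switch").
def step(state, action):
    """Return (next_state, reward) under an assumed toy transition model."""
    next_state = state if action == 0 else 1 - state
    reward = 1.0 if next_state == 1 else 0.0   # assumed reward: being in state 1 pays
    return next_state, reward

def random_policy(state):
    """A policy pi(a|s): here simply uniform over the two actions."""
    return random.choice([0, 1])

state = 0
total_reward = 0.0
for t in range(10):                        # discrete time steps t = 0..9
    action = random_policy(state)          # agent picks a_t from pi(.|s_t)
    state, reward = step(state, action)    # environment returns s_{t+1}, r_{t+1}
    total_reward += reward
print("cumulative reward:", total_reward)
```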


Exploration

The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space MDPs in Burnetas and Katehakis (1997).

Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite MDPs is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.

One such method is \varepsilon-greedy, where 0 < \varepsilon < 1 is a parameter controlling the amount of exploration vs. exploitation. With probability 1-\varepsilon, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability \varepsilon, exploration is chosen, and the action is chosen uniformly at random. \varepsilon is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.
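
A minimal sketch of \varepsilon-greedy action selection as described above, assuming the estimated action values for the current state are given as a list q; the numeric values in the usage line are made up.

```python
import random

def epsilon_greedy(q, epsilon=0.1):
    """Pick an action index from estimated action values q (a list of floats).

    With probability 1 - epsilon exploit (argmax, ties broken uniformly at random);
    with probability epsilon explore uniformly at random.
    """
    if random.random() < epsilon:
        return random.randrange(len(q))             # explore
    best = max(q)
    best_actions = [a for a, v in enumerate(q) if v == best]
    return random.choice(best_actions)              # exploit

# Usage: estimated values for three actions in the current state (made-up numbers).
print(epsilon_greedy([0.2, 0.5, 0.5], epsilon=0.1))
```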


Algorithms for control learning

Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards.


Criterion of optimality


Policy

The agent's action selection is modeled as a map called the ''policy'':
:\pi: A \times S \rightarrow [0,1]
:\pi(a,s) = \Pr(a_t = a \mid s_t = s)
The policy map gives the probability of taking action a when in state s. There are also deterministic policies.


State-value function

The value function V_\pi(s) is defined as the ''expected return'' starting with state s, i.e. s_0 = s, and successively following policy \pi. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.
:V_\pi(s) = \operatorname E[R \mid s_0 = s] = \operatorname E\left[\sum_{t=0}^\infty \gamma^t r_t \mid s_0 = s\right],
where the random variable R denotes the ''return'', defined as the sum of future discounted rewards:
:R = \sum_{t=0}^\infty \gamma^t r_t,
where r_t is the reward at step t and \gamma \in [0,1) is the discount rate. Since \gamma is less than 1, events in the distant future are weighted less than events in the immediate future.

The algorithm must find a policy with maximum expected return. From the theory of MDPs it is known that, without loss of generality, the search can be restricted to the set of so-called ''stationary'' policies. A policy is ''stationary'' if the action-distribution returned by it depends only on the last state visited (from the observation agent's history). The search can be further restricted to ''deterministic'' stationary policies. A ''deterministic stationary'' policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.
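
As a small numerical illustration of the return R = \sum_t \gamma^t r_t, the sketch below computes the discounted sum of an arbitrary reward sequence; the rewards and \gamma = 0.9 are made-up values.

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of gamma**t * r_t over a finite reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: later rewards contribute less because gamma < 1.
print(discounted_return([1.0, 0.0, 1.0]))   # 1.0 + 0.9**2 * 1.0 = 1.81
```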


Brute force

The brute-force approach entails two steps:
* For each possible policy, sample returns while following it;
* Choose the policy with the largest expected return.

One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.

These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.


Value function

Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of Markov decision processes, where optimality is defined in a sense that is stronger than the above one: a policy is called optimal if it achieves the best expected return from ''any'' initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies.

To define optimality in a formal manner, define the value of a policy \pi by
:V^\pi(s) = \operatorname E[R \mid s, \pi],
where R stands for the return associated with following \pi from the initial state s. Defining V^*(s) as the maximum possible value of V^\pi(s), where \pi is allowed to change,
:V^*(s) = \max_\pi V^\pi(s).
A policy that achieves these optimal values in each state is called ''optimal''. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return \rho^\pi, since \rho^\pi = \operatorname E[V^\pi(S)], where S is a state randomly sampled from the distribution \mu of initial states (so \mu(s) = \Pr(s_0 = s)).

Although state-values suffice to define optimality, it is useful to define action-values. Given a state s, an action a and a policy \pi, the action-value of the pair (s,a) under \pi is defined by
:Q^\pi(s,a) = \operatorname E[R \mid s, a, \pi],
where R now stands for the random return associated with first taking action a in state s and following \pi thereafter.

The theory of MDPs states that if \pi^* is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q^{\pi^*}(s,\cdot) with the highest value at each state, s. The ''action-value function'' of such an optimal policy (Q^{\pi^*}) is called the ''optimal action-value function'' and is commonly denoted by Q^*. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the MDP, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q_k (k=0,1,2,\ldots) that converge to Q^*. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) MDPs. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.
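
As one concrete instance of the value-iteration approach mentioned above, the following sketch assumes a small finite MDP whose transition probabilities and rewards are available as nested dictionaries; this representation and the tolerance-based stopping rule are assumptions of the sketch, not part of any standard API.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Compute an approximation of Q* for a fully known finite MDP.

    P[s][a] is a list of (prob, next_state) pairs and R[s][a][s2] is the
    immediate reward -- a representation assumed for this sketch.
    """
    Q = {s: {a: 0.0 for a in actions} for s in states}
    while True:
        delta = 0.0
        for s in states:
            for a in actions:
                # Bellman optimality backup: expected reward plus discounted
                # value of the best action in the successor state.
                new_q = sum(p * (R[s][a][s2] + gamma * max(Q[s2].values()))
                            for p, s2 in P[s][a])
                delta = max(delta, abs(new_q - Q[s][a]))
                Q[s][a] = new_q
        if delta < tol:
            return Q
```

The full sweep over all states and actions in each iteration is exactly the expectation computation that becomes impractical for large MDPs.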


Monte Carlo methods

Monte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: ''policy evaluation'' and ''policy improvement''.

Monte Carlo is used in the policy evaluation step. In this step, given a stationary, deterministic policy \pi, the goal is to compute the function values Q^\pi(s,a) (or a good approximation to them) for all state-action pairs (s,a). Assume (for simplicity) that the MDP is finite, that sufficient memory is available to accommodate the action-values and that the problem is episodic and after each episode a new one starts from some random initial state. Then, the estimate of the value of a given state-action pair (s,a) can be computed by averaging the sampled returns that originated from (s,a) over time. Given sufficient time, this procedure can thus construct a precise estimate Q of the action-value function Q^\pi. This finishes the description of the policy evaluation step.

In the policy improvement step, the next policy is obtained by computing a ''greedy'' policy with respect to Q: given a state s, this new policy returns an action that maximizes Q(s,\cdot). In practice lazy evaluation can defer the computation of the maximizing actions to when they are needed.

Problems with this procedure include:
1. The procedure may spend too much time evaluating a suboptimal policy.
2. It uses samples inefficiently in that a long trajectory improves the estimate only of the ''single'' state-action pair that started the trajectory.
3. When the returns along the trajectories have ''high variance'', convergence is slow.
4. It works in episodic problems only.
5. It works in small, finite MDPs only.
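
The policy-evaluation step described above can be sketched as follows; the episode format (lists of (state, action, reward) tuples generated by following \pi) and the every-visit averaging variant are assumptions of this sketch.

```python
from collections import defaultdict

def mc_policy_evaluation(episodes, gamma=0.9):
    """Estimate Q^pi(s, a) by averaging sampled returns (every-visit variant).

    `episodes` is a list of trajectories, each a list of (state, action, reward)
    tuples obtained by following the policy pi being evaluated.
    """
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    for episode in episodes:
        G = 0.0
        # Walk the episode backwards, accumulating the discounted return.
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            returns_sum[(state, action)] += G
            returns_count[(state, action)] += 1
    return {sa: returns_sum[sa] / returns_count[sa] for sa in returns_sum}
```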


Temporal difference methods

The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of ''generalized policy iteration'' algorithms. Many ''actor-critic'' methods belong to this category.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods, which are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.

Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called \lambda parameter (0 \le \lambda \le 1) that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue.
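
As a minimal concrete instance of an incremental temporal-difference method, the sketch below performs a tabular TD(0) update of a state-value estimate after a single observed transition; the step size alpha and the dictionary representation are assumptions of the sketch.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One incremental TD(0) update; V maps states to value estimates."""
    v_s = V.get(s, 0.0)
    target = r + gamma * V.get(s_next, 0.0)   # bootstrapped Bellman target
    V[s] = v_s + alpha * (target - v_s)       # move the estimate toward the target
    return V

# Usage: apply the update after observing one transition (s=0, r=1.0, s'=1).
V = {}
td0_update(V, 0, 1.0, 1)
print(V)
```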


Function approximation methods

In order to address the fifth issue, ''function approximation methods'' are used. ''Linear function approximation'' starts with a mapping \phi that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair (s,a) are obtained by linearly combining the components of \phi(s,a) with some ''weights'' \theta:
:Q(s,a) = \sum_{i=1}^d \theta_i \phi_i(s,a).
The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.

Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods in which a neural network is used to represent Q, with various applications in stochastic search problems.

The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.
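
A sketch of the linear case described above: action values are the dot product of a feature vector \phi(s,a) and a weight vector \theta, and a semi-gradient Q-learning step adjusts the weights rather than individual table entries. The feature function is left abstract, and the particular update rule shown is one common choice rather than the only one.

```python
import numpy as np

def q_linear(theta, phi_sa):
    """Q(s, a) = sum_i theta_i * phi_i(s, a) for a feature vector phi_sa."""
    return float(np.dot(theta, phi_sa))

def semi_gradient_q_update(theta, phi_sa, r, phi_next_all, alpha=0.05, gamma=0.9):
    """One semi-gradient Q-learning step on the weight vector theta.

    theta        -- numpy weight vector of length d
    phi_sa       -- numpy feature vector of the taken (s, a) pair
    phi_next_all -- list of feature vectors phi(s', a') for all actions a'
    """
    target = r + gamma * max(q_linear(theta, p) for p in phi_next_all)
    td_error = target - q_linear(theta, phi_sa)
    return theta + alpha * td_error * phi_sa   # gradient of Q w.r.t. theta is phi_sa
```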


Direct policy search

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.

Gradient-based methods (''policy gradient methods'') start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector \theta, let \pi_\theta denote the policy associated to \theta. Defining the performance function by
:\rho(\theta) = \rho^{\pi_\theta},
under mild conditions this function will be differentiable as a function of the parameter vector \theta. If the gradient of \rho were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search).

A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search and methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, ''actor–critic methods'' have been proposed and performed well on various problems.
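
A sketch of Williams' REINFORCE idea for a tabular softmax policy: a noisy policy-gradient estimate \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t is computed from one sampled episode and used for gradient ascent. The softmax parameterisation over integer-indexed states and actions is an assumption of this sketch, not the only way to parameterise \pi_\theta.

```python
import numpy as np

def softmax_policy(theta, s):
    """pi_theta(.|s) for a tabular softmax parameterisation theta[s, a]."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def reinforce_update(theta, episode, alpha=0.01, gamma=0.99):
    """One REINFORCE gradient-ascent step from a sampled episode.

    `episode` is a list of (state, action, reward) tuples obtained by
    following pi_theta; theta is a numpy array of shape (n_states, n_actions).
    """
    G = 0.0
    returns = []
    for _, _, r in reversed(episode):        # returns G_t, computed backwards
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    for (s, a, _), G_t in zip(episode, returns):
        probs = softmax_policy(theta, s)
        grad_log = -probs                    # d log pi(a|s) / d theta[s, :]
        grad_log[a] += 1.0
        theta[s] += alpha * G_t * grad_log   # noisy estimate of the policy gradient
    return theta
```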


Model-based algorithms

Finally, all of the above methods can be combined with algorithms that first learn a model. For instance, the Dyna algorithm learns a model from experience and uses it to provide additional, modelled transitions for a value function, alongside the real transitions. Such methods can sometimes be extended to the use of non-parametric models, such as when the transitions are simply stored and 'replayed' to the learning algorithm. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly.
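
A sketch of the Dyna idea described above: each real transition updates the action values directly and is stored in a learned model, which is then replayed for a few extra planning updates. The deterministic last-observation model and the fixed number of planning steps are simplifying assumptions of this sketch.

```python
import random
from collections import defaultdict

def dyna_q_step(Q, model, s, a, r, s_next, actions,
                alpha=0.1, gamma=0.9, n_planning=5):
    """One Dyna-Q step: direct RL update, model learning, then planning.

    Q     -- defaultdict(float) mapping (state, action) to a value estimate
    model -- dict mapping (state, action) to the last observed (reward, next_state)
    """
    def q_update(s0, a0, r0, s1):
        best_next = max(Q[(s1, b)] for b in actions)
        Q[(s0, a0)] += alpha * (r0 + gamma * best_next - Q[(s0, a0)])

    q_update(s, a, r, s_next)          # learn from the real transition
    model[(s, a)] = (r, s_next)        # remember it in the (deterministic) model
    for _ in range(n_planning):        # replay simulated transitions from the model
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        q_update(ps, pa, pr, ps2)
    return Q, model
```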


Theory

Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of MDPs is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).


Research

Research topics include:
* actor-critic methods
* adaptive methods that work with fewer (or no) parameters under a large number of conditions
* bug detection in software projects
* continuous learning
* combinations with logic-based frameworks
* exploration in large MDPs
* human feedback
* interaction between implicit and explicit learning in skill acquisition
* intrinsic motivation, which differentiates information-seeking, curiosity-type behaviours from task-dependent goal-directed behaviours
* large-scale empirical evaluations
* large (or continuous) action spaces
* modular and hierarchical reinforcement learning
* multiagent/distributed reinforcement learning, a topic of interest with expanding applications
* occupant-centric control
* optimization of computing resources
* partial information (e.g., using predictive state representation)
* reward functions based on maximising novel information
* sample-based planning (e.g., based on Monte Carlo tree search)
* securities trading
* transfer learning
* TD learning modeling dopamine-based learning in the brain; dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error
* value-function and policy search methods


Comparison of reinforcement learning algorithms


Associative reinforcement learning

Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.


Deep reinforcement learning

This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning.


Adversarial deep reinforcement learning

Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. Some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, recent studies have shown that these proposed solutions are far from providing an accurate representation of the current vulnerabilities of deep reinforcement learning policies.


Fuzzy reinforcement learning

By introducing fuzzy inference in RL, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF–THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with fuzzy rule interpolation allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (the most important state-action values).


Inverse reinforcement learning

In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.


Safe reinforcement learning

Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes.


See also

* Temporal difference learning
* Q-learning
* State–action–reward–state–action (SARSA)
* Fictitious play
* Learning classifier system
* Optimal control
* Dynamic treatment regimes
* Error-driven learning
* Multi-agent reinforcement learning
* Multi-agent system
* Distributed artificial intelligence
* Intrinsic motivation
* Genetic algorithms
* Apprenticeship learning


References


Further reading

* * * * * * *


External links


* Reinforcement Learning Repository
* Reinforcement Learning and Artificial Intelligence (RLAI, Rich Sutton's lab at the University of Alberta)
* Autonomous Learning Laboratory (ALL, Andrew Barto's lab at the University of Massachusetts Amherst)
* Real-world reinforcement learning experiments at Delft University of Technology
* Stanford University Andrew Ng Lecture on Reinforcement Learning
* Series of blog posts on RL with Python code
* A (Long) Peek into Reinforcement Learning