Constructing skill trees (CST) is a hierarchical reinforcement learning algorithm which can build skill trees from a set of sample solution trajectories obtained from demonstration. CST uses an incremental MAP (maximum a posteriori) change-point detection algorithm to segment each demonstration trajectory into skills, and integrates the results into a skill tree. CST was introduced by George Konidaris, Scott Kuindersma, Andrew Barto and Roderic Grupen in 2010.
Algorithm
CST consists mainly of three parts: change-point detection, alignment and merging. The main focus of CST is online change-point detection. The change-point detection algorithm segments data into skills, using the sum of discounted rewards as the target regression variable. Each skill is assigned an appropriate abstraction. A particle filter is used to control the computational complexity of CST.
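The regression target for a segment starting at time j is the discounted return R_j = r_j + γr_{j+1} + γ²r_{j+2} + …. A minimal sketch of computing this target from a reward sequence (the helper name and γ value are illustrative, not from the original):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute R_j = sum_{i >= j} gamma^(i-j) * r_i for every j,
    scanning the reward sequence backwards in O(T)."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for j in range(len(rewards) - 1, -1, -1):
        running = rewards[j] + gamma * running
        returns[j] = running
    return returns

# Example with gamma = 0.5:
# R_2 = 1, R_1 = 2 + 0.5*1 = 2.5, R_0 = 3 + 0.5*2.5 = 4.25
print(discounted_returns([3.0, 2.0, 1.0], gamma=0.5))  # [4.25, 2.5, 1.0]
```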
The change point detection algorithm is implemented as follows. The data for times t ∈ {1, …, T} and a set of models Q with prior p(q ∈ Q) are given. The algorithm is assumed to be able to fit a segment from time j + 1 to t using model q with the fit probability P(j, t, q). A linear regression model with Gaussian noise is used to compute P(j, t, q). The Gaussian noise variance prior follows InverseGamma(v/2, u/2), and the prior for each weight follows Normal(0, σ²δ).
The fit probability P(j, t, q) is computed by the following equation:
:P(j, t, q) = π^(−n/2) δ^(−m/2) |(A + D)^(−1)|^(1/2) · (u^(v/2) / (y + u)^((n+v)/2)) · (Γ((n+v)/2) / Γ(v/2))
where n = t − j is the number of data points in the segment, m is the number of basis functions, A = Σ Φ(xᵢ)Φ(xᵢ)ᵀ and b = Σ Rᵢ Φ(xᵢ) are the segment's sufficient statistics, D = δ⁻¹I, and y = (Σ Rᵢ²) − bᵀ(A + D)⁻¹b.
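Evaluating the fit probability of a candidate segment is simplest with a single basis function (m = 1), where all of the linear algebra collapses to scalar arithmetic. The sketch below assumes the standard Bayesian linear regression marginal likelihood with weight prior Normal(0, σ²δ) and noise variance prior InverseGamma(v/2, u/2); the function name and default hyperparameters are illustrative:

```python
import math

def segment_fit_prob(phis, targets, v=1.0, u=1.0, delta=10.0):
    """Marginal likelihood of one segment under a one-basis-function
    (m = 1) linear-Gaussian model: weight ~ Normal(0, sigma^2 * delta),
    noise variance sigma^2 ~ InverseGamma(v/2, u/2)."""
    n = len(targets)
    A = sum(p * p for p in phis)                   # sum of Phi*Phi^T (scalar here)
    b = sum(p * r for p, r in zip(phis, targets))  # sum of R*Phi
    D = 1.0 / delta                                # prior precision of the weight
    y = sum(r * r for r in targets) - b * b / (A + D)
    return (math.pi ** (-n / 2.0) * delta ** -0.5  # pi^(-n/2) * delta^(-m/2)
            * (A + D) ** -0.5                      # |(A+D)^(-1)|^(1/2)
            * u ** (v / 2.0) / (y + u) ** ((n + v) / 2.0)
            * math.gamma((n + v) / 2.0) / math.gamma(v / 2.0))

# A segment whose targets really are linear in the feature scores higher
# than one that is pure noise:
good = segment_fit_prob([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
bad = segment_fit_prob([1.0, 2.0, 3.0, 4.0], [5.0, -3.0, 7.0, -1.0])
print(good > bad)  # True
```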
Then, CST computes the probability of a changepoint at time j with model q, P_t(j, q), and the MAP changepoint probability P_t^MAP, using a Viterbi algorithm:
:P_t(j, q) = (1 − G(t − j − 1)) P(j, t, q) p(q) P_j^MAP
:P_t^MAP = max_{j,q} P_t(j, q) g(t − j) / (1 − G(t − j − 1))
where g is the probability density and G the cumulative distribution of the segment-length prior.
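With a single model and without a particle filter, this recursion can be run exactly by keeping every candidate changepoint position. The sketch below uses a geometric segment-length prior and a caller-supplied fit function; all names, the prior parameter, and the constant-mean fit model are illustrative, not from the original:

```python
import math

def map_changepoints(data, fit_prob, p=0.05):
    """Exact MAP change-point recursion (no particle filter, one model).
    Segment lengths have a geometric prior: g(l) = (1-p)^(l-1) * p.
    fit_prob(j, t) returns the marginal likelihood of one segment
    covering data[j..t] inclusive.  Returns the MAP change points,
    each given as the last index of a segment."""
    T = len(data)
    MAP = [1.0] + [0.0] * T   # MAP[t]: best segmentation of data[0..t-1]
    back = [0] * (T + 1)      # back[t]: start of the segment ending at t-1
    for t in range(1, T + 1):
        for j in range(t):
            g = (1 - p) ** (t - j - 1) * p        # P(segment length = t-j)
            cand = fit_prob(j, t - 1) * g * MAP[j]
            if cand > MAP[t]:
                MAP[t], back[t] = cand, j
    cps, t = [], T            # walk back through the chosen boundaries
    while t > 0:
        cps.append(t - 1)
        t = back[t]
    return sorted(cps)[:-1]   # the final index is not a change point

# Piecewise-constant data with one change after index 9.
data = [0.0] * 10 + [5.0] * 10

def const_fit(j, t, v=1.0, u=1.0, delta=10.0):
    """Fit probability of data[j..t] under a constant-mean model (Phi = 1)."""
    seg = data[j:t + 1]
    n = len(seg)
    A, b, D = float(n), sum(seg), 1.0 / delta
    y = sum(x * x for x in seg) - b * b / (A + D)
    return (math.pi ** (-n / 2.0) * delta ** -0.5 * (A + D) ** -0.5
            * u ** (v / 2.0) / (y + u) ** ((n + v) / 2.0)
            * math.gamma((n + v) / 2.0) / math.gamma(v / 2.0))

print(map_changepoints(data, const_fit))  # [9]
```

The particle filter in CST bounds the inner loop over j, trading exactness for constant per-step cost.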
Pseudocode
The following pseudocode describes the change-point detection algorithm:
particles := []
// Process each incoming data point
for t = 1:T do
    // Compute fit probabilities for all particles
    for p ∈ particles do
        p_tjq := (1 − G(t − p.pos − 1)) × p.fit_prob × model_prior(p.model) × p.prev_MAP
        p.MAP := p_tjq × g(t − p.pos) / (1 − G(t − p.pos − 1))
    end
    // Filter if necessary
    if the number of particles ≥ N then
        particles := particle_filter(particles, M)   // resample down to M particles by p.MAP
    end
    // Determine the Viterbi path
    if t = 1 then
        max_path := []
        max_MAP := 1/|Q|
    else
        max_particle := argmax_p p.MAP
        max_path := max_particle.path ∪ max_particle
        max_MAP := max_particle.MAP
    end
    // Create new particles for a changepoint at time t
    for q ∈ Q do
        new_p := create_particle(model=q, pos=t, prev_MAP=max_MAP, path=max_path)
        particles := particles ∪ new_p
    end
    // Update all particles
    for p ∈ particles do
        p := update_particle(current_state, current_reward, p)
    end
end
// Return the most likely path to the final point
return max_path
function update_particle(current_state, current_reward, particle) is
    p := particle
    r_t := current_reward
    // Initialization
    if t = 0 then
        p.A := zero_matrix(p.m, p.m)
        p.b := zero_vector(p.m)
        p.z := zero_vector(p.m)
        p.sum_r := 0
        p.tr1 := 0
        p.tr2 := 0
    end if
    // Compute the basis function vector for the current state
    Φ_t := p.Φ(current_state)
    // Update sufficient statistics
    p.A := p.A + Φ_t Φ_tᵀ
    p.z := γ p.z + Φ_t
    p.b := p.b + r_t p.z
    p.tr1 := 1 + γ² p.tr1
    p.sum_r := p.sum_r + r_t² p.tr1 + 2γ r_t p.tr2
    p.tr2 := γ p.tr2 + r_t p.tr1
    p.fit_prob := compute_fit_prob(p, v, u, delta, γ)
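The sufficient-statistic updates in update_particle can be checked against a direct batch computation. The sketch below assumes a scalar basis function and a discount factor γ applied to the return targets; the function and variable names are illustrative:

```python
def batch_stats(phis, rewards, gamma):
    """Direct O(T^2) segment statistics with a scalar basis function:
    A = sum Phi_j^2, b = sum R_j*Phi_j, sum_r = sum R_j^2, where the
    target R_j = sum_{i >= j} gamma^(i-j) * r_i is the discounted return."""
    T = len(rewards)
    R = [sum(gamma ** (i - j) * rewards[i] for i in range(j, T))
         for j in range(T)]
    A = sum(phi * phi for phi in phis)
    b = sum(R[j] * phis[j] for j in range(T))
    return A, b, sum(x * x for x in R)

def incremental_stats(phis, rewards, gamma):
    """O(1)-per-step updates mirroring update_particle (scalar Phi)."""
    A = b = z = sum_r = tr1 = tr2 = 0.0
    for phi, r in zip(phis, rewards):
        A += phi * phi          # p.A := p.A + Phi Phi^T
        z = gamma * z + phi     # p.z := gamma p.z + Phi
        b += r * z              # p.b := p.b + r_t p.z
        tr1 = 1.0 + gamma ** 2 * tr1
        sum_r += r * r * tr1 + 2.0 * gamma * r * tr2
        tr2 = gamma * tr2 + r * tr1
    return A, b, sum_r

phis = [0.3, -1.2, 0.8, 2.0, 0.5]
rewards = [1.0, 0.0, -0.5, 2.0, 1.5]
print(batch_stats(phis, rewards, 0.9))
print(incremental_stats(phis, rewards, 0.9))  # same values up to rounding
```

The update order matters: tr1 must be advanced before sum_r, and tr2 after, because sum_r uses the new tr1 but the previous tr2.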
Assumptions
CST assumes that the demonstrated skills form a tree, that the domain reward function is known, and that the best model for merging a pair of skills is the model selected to represent each skill individually.
Advantages
CST is a much faster learning algorithm than skill chaining, and can be applied to learning higher-dimensional policies. Even an unsuccessful episode can improve skills, and skills acquired using agent-centric features can be reused for other problems.
Uses
CST has been used to acquire skills from human demonstration in the PinBall domain. It has also been used to acquire skills from human demonstration on a mobile manipulator.
See also
*
Prefrontal cortex basal ganglia working memory (PBWM)
*
State–action–reward–state–action
*
Sammon Mapping
References
* Fearnhead, Paul; Liu, Zhen (2007). "On-line Inference for Multiple Change Points". Journal of the Royal Statistical Society.