Reinforcement Learning - Theoretical Foundations: Part IV
Value Function Approximation
Various methods can be applied to function approximation. In this note we mainly consider differentiable methods: linear approximation and neural networks.
1. Stochastic Gradient Descent (SGD)
Here let's review a basic optimization strategy for gradient-based approximation: Stochastic Gradient Descent.
First, our aim is to minimize the mean squared error (MSE) between our estimator $\hat{v}(S, \mathbf{w})$ and the true value function $v_\pi(S)$. The error is represented by

$$J(\mathbf{w}) = \mathbb{E}_\pi\big[(v_\pi(S) - \hat{v}(S, \mathbf{w}))^2\big].$$

To attain a minimum of $J(\mathbf{w})$, gradient descent moves $\mathbf{w}$ in the direction of the negative gradient. A full gradient update

$$\Delta\mathbf{w} = -\tfrac{1}{2}\alpha\,\nabla_{\mathbf{w}} J(\mathbf{w}) = \alpha\,\mathbb{E}_\pi\big[(v_\pi(S) - \hat{v}(S, \mathbf{w}))\,\nabla_{\mathbf{w}}\hat{v}(S, \mathbf{w})\big]$$

requires an expectation over the state distribution and only converges to a local minimum. Hence we use stochastic sampling of the gradient,

$$\Delta\mathbf{w} = \alpha\,(v_\pi(S) - \hat{v}(S, \mathbf{w}))\,\nabla_{\mathbf{w}}\hat{v}(S, \mathbf{w}),$$

whose expected update equals the full gradient update.
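A minimal sketch of this stochastic update, assuming hypothetical callables `v_hat(state, w)` and `grad_v_hat(state, w)` for the estimator and its gradient, and a sampled `v_target` standing in for $v_\pi(S)$:

```python
def sgd_step(w, v_hat, grad_v_hat, state, v_target, alpha=0.01):
    """One stochastic gradient step on the squared error
    (v_target - v_hat(state, w))^2 with respect to w."""
    error = v_target - v_hat(state, w)               # sampled prediction error
    return w + alpha * error * grad_v_hat(state, w)  # Delta w = alpha * error * gradient
```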
2. Linearization
We begin by considering a linear model, so the value function is a linear combination of features:

$$\hat{v}(S, \mathbf{w}) = \mathbf{x}(S)^\top \mathbf{w} = \sum_{j=1}^{n} x_j(S)\, w_j,$$

where $\mathbf{x}(S)$ is a feature vector of the state. The objective is then quadratic in $\mathbf{w}$, and the SGD update simplifies to

$$\Delta\mathbf{w} = \alpha\,(v_\pi(S) - \hat{v}(S, \mathbf{w}))\,\mathbf{x}(S).$$

On the other hand, we don't have an oracle for the true $v_\pi(S)$. In practice we substitute a target computed from experience: the return for Monte-Carlo methods and the TD target for TD methods.
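A minimal sketch of the linear case, assuming `x_s` is the feature vector $\mathbf{x}(S)$ of the sampled state and `target` is whatever stands in for $v_\pi(S)$:

```python
import numpy as np

def v_hat_linear(x_s, w):
    """Linear value estimate: v_hat(S, w) = x(S)^T w."""
    return x_s @ w

def linear_sgd_step(w, x_s, target, alpha=0.01):
    """For a linear model the gradient of v_hat w.r.t. w is simply x(S),
    so the update is Delta w = alpha * (target - x(S)^T w) * x(S)."""
    return w + alpha * (target - v_hat_linear(x_s, w)) * x_s

# Illustration only: one update on 4-dimensional features.
w = np.zeros(4)
w = linear_sgd_step(w, x_s=np.array([1.0, 0.0, 0.5, 0.0]), target=2.0)
```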
Algorithm analysis
1. Linear Monte-Carlo policy evaluation
- To represent $v_\pi(S_t)$, we use the return $G_t$. In every episode, we apply supervised learning to the “training data” $\langle S_1, G_1\rangle, \langle S_2, G_2\rangle, \dots, \langle S_T, G_T\rangle$.
- The update is now (sketched in code after this list)
$$\Delta\mathbf{w} = \alpha\,(G_t - \hat{v}(S_t, \mathbf{w}))\,\mathbf{x}(S_t).$$
- Note that Monte-Carlo evaluation converges to a local optimum
- As $G_t$ is an unbiased sample of $v_\pi(S_t)$, this works even when using non-linear value function approximation.
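A minimal sketch of linear Monte-Carlo policy evaluation, assuming a hypothetical `features(s)` mapping each state to its feature vector $\mathbf{x}(s)$ and an `episode` given as a list of `(state, reward)` pairs, where `reward` is the reward received after leaving `state`:

```python
def linear_mc_episode_update(w, episode, features, alpha=0.01, gamma=0.99):
    """Update w from one complete episode, using the return G_t as the
    target for v(S_t): Delta w = alpha * (G_t - x(S_t)^T w) * x(S_t)."""
    G = 0.0
    # Iterate backwards so each return G_t can be accumulated incrementally.
    for s, r in reversed(episode):
        G = r + gamma * G
        x = features(s)
        w = w + alpha * (G - x @ w) * x
    return w
```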
2. TD Learning
- We use the TD target $R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w})$ as a (biased) substitute for $v_\pi(S_t)$.
- TD(0) has the update formula:
$$\Delta\mathbf{w} = \alpha\,\big(R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w})\big)\,\mathbf{x}(S_t).$$
- Linear TD(0) converges (close) to the global optimum.
- On the other hand, we can use the $\lambda$-return $G_t^\lambda$ as a substitute. This is a TD($\lambda$) method.
- Forward-view linear TD($\lambda$):
$$\Delta\mathbf{w} = \alpha\,(G_t^\lambda - \hat{v}(S_t, \mathbf{w}))\,\mathbf{x}(S_t).$$
- Backward-view linear TD($\lambda$) requires an eligibility trace (see the sketch after this list):
$$E_t = \gamma\lambda E_{t-1} + \mathbf{x}(S_t), \qquad \delta_t = R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w}), \qquad \Delta\mathbf{w} = \alpha\,\delta_t\,E_t.$$
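A minimal sketch of the backward-view update, assuming feature vectors `x_s` and `x_s_next` for the current and next state; setting `lam=0` recovers linear TD(0):

```python
import numpy as np

def linear_td_lambda_step(w, e, x_s, r, x_s_next, alpha=0.01, gamma=0.99, lam=0.9):
    """One backward-view linear TD(lambda) step with eligibility trace e."""
    e = gamma * lam * e + x_s                     # E_t = gamma*lambda*E_{t-1} + x(S_t)
    delta = r + gamma * (x_s_next @ w) - x_s @ w  # TD error delta_t
    w = w + alpha * delta * e                     # Delta w = alpha * delta_t * E_t
    return w, e

# Illustration with 3-dimensional features:
w, e = np.zeros(3), np.zeros(3)
w, e = linear_td_lambda_step(w, e, x_s=np.array([1.0, 0.0, 0.0]), r=1.0,
                             x_s_next=np.array([0.0, 1.0, 0.0]))
```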
3. Convergence of Prediction Algorithms
On/Off-policy | Algorithm | Table-lookup | Linear | Non-Linear |
---|---|---|---|---|
On-Policy | MC | Y | Y | Y |
On-Policy | TD(0) | Y | Y | N |
On-Policy | TD($\lambda$) | Y | Y | N |
Off-Policy | MC | Y | Y | Y |
Off-Policy | TD(0) | Y | N | N |
Off-Policy | TD($\lambda$) | Y | N | N |
Action-Value Function Approximation
Now we don't simply approximate a state-value function; we approximate the action-value function, $\hat{q}(S, A, \mathbf{w}) \approx q_\pi(S, A)$.

The main idea is just to find $\mathbf{w}$ minimizing

$$J(\mathbf{w}) = \mathbb{E}_\pi\big[(q_\pi(S, A) - \hat{q}(S, A, \mathbf{w}))^2\big],$$

with SGD update $\Delta\mathbf{w} = \alpha\,(q_\pi(S, A) - \hat{q}(S, A, \mathbf{w}))\,\nabla_{\mathbf{w}}\hat{q}(S, A, \mathbf{w})$. With linear state-action features we have $\hat{q}(S, A, \mathbf{w}) = \mathbf{x}(S, A)^\top \mathbf{w}$, and as before the unknown $q_\pi(S, A)$ is replaced by a return or TD target.
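A minimal sketch of control with a linear action-value approximator, assuming a hypothetical `features(s, a)` state-action feature map and a SARSA-style TD target $r + \gamma\,\hat{q}(s', a', \mathbf{w})$ standing in for the unknown $q_\pi$:

```python
import numpy as np

def epsilon_greedy(w, features, state, actions, eps=0.1):
    """Pick an action epsilon-greedily w.r.t. the approximate q_hat(s, a, w)."""
    if np.random.rand() < eps:
        return actions[np.random.randint(len(actions))]
    q_values = [features(state, a) @ w for a in actions]
    return actions[int(np.argmax(q_values))]

def linear_sarsa_step(w, features, s, a, r, s_next, a_next, alpha=0.01, gamma=0.99):
    """One linear SARSA(0) update: regress q_hat(s, a, w) = x(s, a)^T w
    toward the TD target r + gamma * q_hat(s', a', w)."""
    x_sa = features(s, a)
    target = r + gamma * (features(s_next, a_next) @ w)
    return w + alpha * (target - x_sa @ w) * x_sa
```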
Improvements
Gradient TD
Some more recent improvements aim to resolve the convergence failure of off-policy TD algorithms. This gave birth to Gradient TD algorithms, which follow the true gradient of the projected Bellman error and converge in both the linear and non-linear cases. They require an additional weight vector that is learned alongside the main parameters.
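As an illustration, here is a sketch of one common Gradient TD variant, linear TDC (also known as GTD(0)); `u` is the additional weight vector mentioned above, and the names and step sizes are assumptions for the sketch:

```python
def linear_tdc_step(w, u, x_s, r, x_s_next, alpha=0.01, beta=0.02, gamma=0.99):
    """One linear TDC step. The auxiliary weights u estimate the expected
    TD error given the features, correcting the biased semi-gradient."""
    delta = r + gamma * (x_s_next @ w) - x_s @ w                  # TD error
    w = w + alpha * (delta * x_s - gamma * (x_s @ u) * x_s_next)  # corrected main update
    u = u + beta * (delta - x_s @ u) * x_s                        # auxiliary-weight update
    return w, u
```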
Least Squares Prediction and Experience Replay
The least-squares (LS) estimator is known to approximate the best fit over all experience seen so far: it finds the $\mathbf{w}$ minimizing the sum of squared errors between $\hat{v}(s_t, \mathbf{w})$ and the observed targets $v_t^\pi$,

$$LS(\mathbf{w}) = \sum_{t=1}^{T} \big(v_t^\pi - \hat{v}(s_t, \mathbf{w})\big)^2.$$
It is found that SGD with experience replay converges to the least-squares solution. By "experience replay" we mean storing the experience $\langle s_t, v_t^\pi\rangle$ instead of discarding it after each update, and repeatedly sampling from this stored "data set" to perform the stochastic updates in SGD.
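A minimal sketch of SGD with experience replay for prediction, assuming `memory` is a stored list of `(state, target_value)` pairs collected under the policy and `features(s)` is the usual feature map (both hypothetical names):

```python
import random

def replay_sgd(w, memory, features, n_updates=1000, alpha=0.01):
    """Repeatedly resample stored experience instead of discarding it,
    applying the standard linear SGD step to each sampled pair."""
    for _ in range(n_updates):
        s, v_target = random.choice(memory)      # sample from the replay "data set"
        x = features(s)
        w = w + alpha * (v_target - x @ w) * x   # Delta w = alpha * (target - x^T w) * x
    return w
```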
Deep Q-Networks (DQN)
- DQN uses experience replay and fixed Q-targets (see the sketch after this list).
- It takes actions according to an $\epsilon$-greedy policy.
- Store each transition $(s_t, a_t, r_{t+1}, s_{t+1})$ in replay memory $\mathcal{D}$ (experience replay).
- Sample a random mini-batch of transitions $(s, a, r, s')$ from $\mathcal{D}$.
- Compute Q-learning targets with respect to old, fixed parameters $\mathbf{w}^-$ (fixed Q-target: not the latest parameters but a copy frozen some batches ago).
- Optimize the MSE between the Q-network and the Q-learning targets:
$$\mathcal{L}(\mathbf{w}) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\Big[\big(r + \gamma \max_{a'} Q(s', a'; \mathbf{w}^-) - Q(s, a; \mathbf{w})\big)^2\Big].$$
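A minimal sketch of one DQN learning step in PyTorch, assuming `q_net` and `target_net` are two copies of the same `nn.Module`, `optimizer` optimizes `q_net`'s parameters, and `replay` is a list of `(s, a, r, s_next, done)` tuples; all of these names are assumptions for the sketch, not part of the original note:

```python
import random
import torch
import torch.nn as nn

def dqn_update(q_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
    """Sample a random mini-batch from replay memory and regress
    Q(s, a; w) toward the fixed target r + gamma * max_a' Q(s', a'; w-)."""
    batch = random.sample(replay, batch_size)
    s = torch.stack([torch.as_tensor(t[0], dtype=torch.float32) for t in batch])
    a = torch.tensor([t[1] for t in batch], dtype=torch.int64)
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    s_next = torch.stack([torch.as_tensor(t[3], dtype=torch.float32) for t in batch])
    done = torch.tensor([t[4] for t in batch], dtype=torch.float32)

    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; w)
    with torch.no_grad():                                  # fixed Q-target: no gradient flows
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values

    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Periodically copying `q_net`'s parameters into `target_net` (every fixed number of updates) realizes the "fixed Q-target" described above.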
In general, LS-based methods work well in terms of convergence but suffer from higher computational complexity than incremental methods: the least-squares solve scales quadratically or worse in the number of features, versus linearly for an SGD step.