Reinforcement Learning - Theoretical Foundations: Part IV


Value Function Approximation

Various methods can be applied for function approximation. In this note, we will mainly consider the differentiable ones: linear approximation and neural networks.

1. Stochastic Gradient Descent (SGD)

Let us first review the basic strategy behind gradient-based approximation: Stochastic Gradient Descent.

Our aim is to minimize the mean squared error (MSE) between our estimator $\hat{v}(S, \mathbf{w})$ and the true value function $v_\pi(S)$. The error is represented by

$$J(\mathbf{w}) = \mathbb{E}_\pi\!\left[\big(v_\pi(S) - \hat{v}(S, \mathbf{w})\big)^2\right]$$

To attain the minimum, we update $\mathbf{w}$ along the gradient of $J(\mathbf{w})$ until convergence.

A full gradient update

$$\Delta\mathbf{w} = -\tfrac{1}{2}\,\alpha\,\nabla_\mathbf{w} J(\mathbf{w}) = \alpha\,\mathbb{E}_\pi\!\left[\big(v_\pi(S) - \hat{v}(S, \mathbf{w})\big)\,\nabla_\mathbf{w}\hat{v}(S, \mathbf{w})\right]$$

has the issue of converging at a local minimum and requires an expectation over all states. Hence stochastic sampling with

$$\Delta\mathbf{w} = \alpha\,\big(v_\pi(S) - \hat{v}(S, \mathbf{w})\big)\,\nabla_\mathbf{w}\hat{v}(S, \mathbf{w})$$

will work better in general.
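
As a quick illustration, here is a minimal NumPy sketch of the stochastic update above. For the sake of a runnable example it assumes a linear estimator (formally introduced in the next section); the function and variable names are mine, not from the original post.

```python
import numpy as np

def sgd_value_update(w, x_s, v_target, alpha=0.01):
    """One stochastic step: Delta w = alpha * (v_pi(S) - v_hat(S, w)) * grad_w v_hat(S, w).

    For a linear estimator v_hat(S, w) = x(S)^T w, the gradient is simply x(S).
    """
    v_hat = x_s @ w
    return w + alpha * (v_target - v_hat) * x_s

# Toy usage: recover a known weight vector from noisy sampled targets.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
w = np.zeros(4)
for _ in range(5000):
    x_s = rng.normal(size=4)                          # feature vector of a sampled state
    v_target = x_s @ true_w + rng.normal(scale=0.1)   # stand-in for the oracle v_pi(S)
    w = sgd_value_update(w, x_s, v_target)
```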

2. Linearization

We begin by considering a linear model, so

$$\hat{v}(S, \mathbf{w}) = \mathbf{x}(S)^\top \mathbf{w},$$

where $\mathbf{x}(S)$ is the feature vector/representation of the current state. The stochastic update in SGD then simplifies to $\Delta\mathbf{w} = \alpha\,\big(v_\pi(S) - \hat{v}(S, \mathbf{w})\big)\,\mathbf{x}(S)$.

On the other hand, we don't have an oracle for the true $v_\pi(S)$ in practice, so we need ways to estimate it. This is where algorithm design comes in.
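
To make the feature vector $\mathbf{x}(S)$ concrete, here is a small sketch of two common encodings; both are illustrative choices, not prescribed by the post.

```python
import numpy as np

def one_hot_features(state, n_states):
    """Table lookup as a special case of linear approximation: x(s) is a one-hot vector."""
    x = np.zeros(n_states)
    x[state] = 1.0
    return x

def polynomial_features(state, degree=3):
    """A compact encoding for a scalar continuous state: x(s) = (1, s, s^2, ..., s^degree)."""
    return np.array([state ** k for k in range(degree + 1)])

w = np.zeros(4)
x = polynomial_features(0.7)   # feature vector x(S)
v_hat = x @ w                  # linear value estimate v_hat(S, w) = x(S)^T w
```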

Algorithm analysis

1. Linear Monte-Carlo policy evaluation

  • To represent $v_\pi(S_t)$, we use the return $G_t$. In every epoch, we apply supervised learning to “training data”: $\langle S_1, G_1\rangle, \langle S_2, G_2\rangle, \ldots, \langle S_T, G_T\rangle$.
  • The update is now $\Delta\mathbf{w} = \alpha\,\big(G_t - \hat{v}(S_t, \mathbf{w})\big)\,\mathbf{x}(S_t)$ (a code sketch follows this list).
  • Note that Monte-Carlo evaluation converges to a local optimum.
  • As $G_t$ is unbiased, it works even when using non-linear value function approximation.
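
A minimal sketch of one epoch of linear Monte-Carlo evaluation, assuming each episode is given as a list of `(state, reward)` pairs where the reward is the one received after leaving that state; `features` is any state-to-feature mapping such as those sketched earlier.

```python
import numpy as np

def linear_mc_update(w, episode, features, alpha=0.05, gamma=0.99):
    """Apply Delta w = alpha * (G_t - v_hat(S_t, w)) * x(S_t) for every step of one episode."""
    G = 0.0
    # Walk the episode backwards so the return G_t = R_{t+1} + gamma * G_{t+1} is easy to accumulate.
    for state, reward in reversed(episode):
        G = reward + gamma * G
        x = features(state)
        w = w + alpha * (G - x @ w) * x
    return w
```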

2. TD Learning

  • We use the TD target $R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w})$ for $v_\pi(S_t)$.
  • TD(0) has the update formula: $\Delta\mathbf{w} = \alpha\,\big(R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w})\big)\,\mathbf{x}(S_t)$.
  • Linear TD(0) converges (close) to the global optimum.
  • On the other hand, we can use the $\lambda$-return $G_t^\lambda$ as a substitute target. This is the TD($\lambda$) method.
  • Forward-view linear TD($\lambda$): $\Delta\mathbf{w} = \alpha\,\big(G_t^\lambda - \hat{v}(S_t, \mathbf{w})\big)\,\mathbf{x}(S_t)$.
  • Backward-view linear TD($\lambda$) requires an eligibility trace: $E_t = \gamma\lambda E_{t-1} + \mathbf{x}(S_t)$, with $\delta_t = R_{t+1} + \gamma\,\hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w})$ and $\Delta\mathbf{w} = \alpha\,\delta_t E_t$ (see the sketch after this list).
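
Hedged sketches of the per-step updates for linear TD(0) and backward-view linear TD($\lambda$); the names are mine, and `x_s`, `x_next` denote the feature vectors of $S_t$ and $S_{t+1}$.

```python
import numpy as np  # only needed because the feature vectors are NumPy arrays

def linear_td0_update(w, x_s, reward, x_next, done, alpha=0.05, gamma=0.99):
    """TD(0): Delta w = alpha * (R + gamma * v_hat(S') - v_hat(S)) * x(S)."""
    td_target = reward + (0.0 if done else gamma * (x_next @ w))
    td_error = td_target - x_s @ w
    return w + alpha * td_error * x_s

def linear_td_lambda_update(w, e, x_s, reward, x_next, done,
                            alpha=0.05, gamma=0.99, lam=0.8):
    """Backward view: E_t = gamma * lambda * E_{t-1} + x(S_t); Delta w = alpha * delta_t * E_t."""
    e = gamma * lam * e + x_s
    td_target = reward + (0.0 if done else gamma * (x_next @ w))
    delta = td_target - x_s @ w
    return w + alpha * delta * e, e
```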

3. Convergence of Prediction Algorithms

| On/Off-Policy | Algorithm     | Table-lookup | Linear | Non-Linear |
| ------------- | ------------- | ------------ | ------ | ---------- |
| On-Policy     | MC            | Y            | Y      | Y          |
| On-Policy     | TD(0)         | Y            | Y      | N          |
| On-Policy     | TD($\lambda$) | Y            | Y      | N          |
| Off-Policy    | MC            | Y            | Y      | Y          |
| Off-Policy    | TD(0)         | Y            | N      | N          |
| Off-Policy    | TD($\lambda$) | Y            | N      | N          |

Action-Value Function Approximation

Now we don't simply approximate a state-value function $\hat{v}(S, \mathbf{w})$, but approximate the action-value function $\hat{q}(S, A, \mathbf{w})$ instead.

The main idea is just to find $\hat{q}(S, A, \mathbf{w}) \approx q_\pi(S, A)$, with joint state-action features $\mathbf{x}(S, A)$ in the linear case. Both MC and TD work in exactly the same way by substituting these items inside the expressions.
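
As a sketch of that substitution, here is a linear action-value estimator and a SARSA-style TD(0) update over features $\mathbf{x}(S, A)$; the function names are assumptions for illustration.

```python
import numpy as np  # feature vectors below are NumPy arrays

def q_hat(w, x_sa):
    """Linear action-value estimate: q_hat(S, A, w) = x(S, A)^T w."""
    return x_sa @ w

def sarsa_td0_update(w, x_sa, reward, x_next_sa, done, alpha=0.05, gamma=0.99):
    """The same TD(0) update as before, with q_hat substituted for v_hat."""
    target = reward + (0.0 if done else gamma * q_hat(w, x_next_sa))
    return w + alpha * (target - q_hat(w, x_sa)) * x_sa
```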

Improvements

Gradient TD

Some more recent improvements aim to resolve the convergence failures of off-policy TD algorithms. This gave birth to the Gradient TD algorithm, which converges in both the linear and non-linear cases. It requires an additional parameter vector to be added and tuned, which represents the gradient of the projected Bellman error. In a similar fashion, Gradient Q-learning has also been invented, but with no guarantee of convergence for non-linear models.
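
As one illustrative member of this family, here is a rough sketch of linear TDC (sometimes called GTD(0)) in the on-policy case; `v` plays the role of the additional parameter vector mentioned above, and importance-sampling corrections for the off-policy setting are omitted for brevity.

```python
import numpy as np  # feature vectors below are NumPy arrays

def tdc_update(w, v, x_s, reward, x_next, alpha=0.01, beta=0.05, gamma=0.99):
    """One linear TDC / Gradient-TD step with secondary weights v.

    delta = R + gamma * x'^T w - x^T w
    w    <- w + alpha * (delta * x - gamma * (x^T v) * x')
    v    <- v + beta  * (delta - x^T v) * x
    """
    delta = reward + gamma * (x_next @ w) - x_s @ w
    w_new = w + alpha * (delta * x_s - gamma * (x_s @ v) * x_next)
    v_new = v + beta * (delta - x_s @ v) * x_s
    return w_new, v_new
```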

Least Squares Prediction and Experience Replay

The least-squares (LS) estimator is known to approximate well in general. So instead of incrementally approximating $v_\pi(S)$ with per-sample SGD updates, it may also be ideal to minimize the sum-of-squares error over the whole stored experience $\mathcal{D}$ instead.

It is found that SGD with experience replay converges to the least-squares solution in this case. By "experience replay" we mean storing the history of transitions in each epoch instead of discarding them after each iteration, and randomly selecting some of these "data" for the stochastic updates in SGD.
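
A minimal sketch of the replay buffer this describes: transitions are stored rather than discarded, and random mini-batches are drawn for the SGD updates.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so they can be re-sampled for stochastic updates."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted automatically

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation of consecutive transitions.
        return random.sample(self.buffer, batch_size)
```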

Deep Q-Networks (DQN)

  • DQN uses experience replay and fixed Q-targets
  • It takes actions based on an $\epsilon$-greedy policy.
  • Store each transition $(s_t, a_t, r_{t+1}, s_{t+1})$ in replay memory $\mathcal{D}$ (experience replay).
  • Sample a random mini-batch of transitions $(s, a, r, s')$ from $\mathcal{D}$.
  • Compute Q-learning targets w.r.t. old, fixed parameters $\mathbf{w}^-$ (fixed Q-target: not the latest $\mathbf{w}$ but a $\mathbf{w}^-$ computed some batches ago), and minimize the MSE between these targets and the Q-network's predictions (a sketch of the target computation follows below).
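
A rough sketch of the fixed Q-target computation for a sampled mini-batch, assuming some action-value function `q_fn(state, params)` that returns a vector of Q-values; `params_frozen` stands for the older parameter snapshot $\mathbf{w}^-$.

```python
import numpy as np

def dqn_td_errors(batch, q_fn, params, params_frozen, gamma=0.99):
    """Compute y = r + gamma * max_a' Q(s', a'; w^-) and the TD error y - Q(s, a; w)."""
    errors = []
    for state, action, reward, next_state, done in batch:
        target = reward if done else reward + gamma * np.max(q_fn(next_state, params_frozen))
        errors.append(target - q_fn(state, params)[action])
    return np.array(errors)  # the DQN loss is the mean of these errors squared
```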

In general, LS-based methods work well in terms of convergence but suffer from higher computational complexity.
