Reinforcement Learning - Theoretical Foundations: Part IV

RL Continued - Value Function Approximation

2021.01.28 · 4 min read · by Zhenlin Wang · updated 2022-01-14

Value Function Approximation

Many families of methods can be used for function approximation. In this note we mainly consider the differentiable ones: linear approximation and neural networks.

1. Stochastic Gradient Descent (SGD)

Let's first review a basic optimization strategy for gradient-based methods: Stochastic Gradient Descent.

Our aim is to minimize the mean squared error (MSE) between our estimator and the true value function. The error is

$$J(\textbf{w}) = \mathbb{E}_{\pi}[(\hat{v}(S, \textbf{w}) - v_{\pi}(S))^2].$$

To attain $\arg\min\limits_{\textbf{w}} J(\textbf{w})$ we need to update the gradient until convergence.

A full gradient update

$$\Delta(\textbf{w}) = \alpha\mathbb{E}_{\pi}[(v_{\pi}(S) - \hat{v}(S, \textbf{w}) )\nabla_{\textbf{w}}\hat{v}(S, \textbf{w})]$$

requires an expectation over the full state distribution under $\pi$, which is typically expensive or unavailable when we only see sampled states. Instead, we sample a single state per step and apply the stochastic update $\Delta(\textbf{w}) = \alpha(v_{\pi}(S) - \hat{v}(S, \textbf{w}))\nabla_{\textbf{w}}\hat{v}(S, \textbf{w})$, which equals the full gradient step in expectation and works better in general.
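As a small sketch of this sampled update (the feature vector and the target value below are invented for illustration, since in practice the true $v_{\pi}$ is unknown; a linear model is assumed so the gradient is just the feature vector):

```python
import numpy as np

def sgd_step(w, x_s, v_target, alpha=0.1):
    """One stochastic step: w <- w + alpha * (target - v_hat(S,w)) * grad_w v_hat(S,w)."""
    v_hat = x_s @ w      # current estimate v_hat(S, w) for a linear model
    grad = x_s           # grad_w v_hat = x(S) in the linear case
    return w + alpha * (v_target - v_hat) * grad

w = np.zeros(2)
x_s = np.array([1.0, 0.5])   # hypothetical feature vector x(S)
for _ in range(200):
    w = sgd_step(w, x_s, v_target=3.0)   # made-up "true" value for this state

print(x_s @ w)   # the estimate approaches the target
```

Repeated sampled steps drive the estimate for the visited state toward its target, without ever forming the full expectation.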

2. Linearization

We begin by considering a linear model, $\hat{v}(S, \textbf{w}) = \textbf{x}(S)^T\textbf{w}$, where $\textbf{x}(S)$ is the feature vector representing the current state. The gradient $\nabla_{\textbf{w}}\hat{v}(S, \textbf{w})$ in the SGD update then simplifies to $\textbf{x}(S)$.

On the other hand, we don't have an oracle for the true $v_{\pi}(S)$ in practice, so we need ways to estimate it. This is where algorithm design comes in.

Algorithm analysis

1. Linear Monte-Carlo policy evaluation: substitute the observed return $G_t$ for the unknown $v_{\pi}(S_t)$, giving $\Delta(\textbf{w}) = \alpha(G_t - \hat{v}(S_t, \textbf{w}))\textbf{x}(S_t)$

2. TD learning: substitute the bootstrapped TD target $R_{t+1} + \gamma\hat{v}(S_{t+1}, \textbf{w})$ for $v_{\pi}(S_t)$ instead

3. Convergence of Prediction Algorithms

| On/Off-Policy | Algorithm | Table-lookup | Linear | Non-Linear |
| --- | --- | --- | --- | --- |
| On-Policy | MC | Y | Y | Y |
| On-Policy | TD(0) | Y | Y | N |
| On-Policy | TD($\lambda$) | Y | Y | N |
| Off-Policy | MC | Y | Y | Y |
| Off-Policy | TD(0) | Y | N | N |
| Off-Policy | TD($\lambda$) | Y | N | N |
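To make the linear MC and TD(0) updates concrete, here is a sketch on an invented two-state episodic chain (one-hot features, $\gamma = 1$, rewards chosen so both state values are 1; everything about the environment is made up for the example):

```python
import numpy as np

gamma, alpha = 1.0, 0.1
x = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}  # one-hot features

# Monte-Carlo: the target is the full return G_t from each state.
w_mc = np.zeros(2)
for _ in range(500):
    # episode s0 -> s1 -> terminal with rewards 0 then 1:
    # the return from each state is 1.0
    for s, g in [(0, 1.0), (1, 1.0)]:
        w_mc += alpha * (g - x[s] @ w_mc) * x[s]

# TD(0): the target is the bootstrapped R + gamma * v_hat(S', w).
w_td = np.zeros(2)
for _ in range(500):
    # transitions: (state, reward, next-state features or None if terminal)
    for s, r, xp in [(0, 0.0, x[1]), (1, 1.0, None)]:
        v_next = 0.0 if xp is None else xp @ w_td
        w_td += alpha * (r + gamma * v_next - x[s] @ w_td) * x[s]

print(w_mc, w_td)   # both weight vectors approach the true values [1, 1]
```

With one-hot features the weights are exactly the per-state value estimates, so the two update rules can be compared directly.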

Action-Value Function Approximation

Now we no longer approximate only the value function $v_{\pi}(s)$, but approximate the action-value function $q_{\pi}(s,a)$ instead.

The main idea is simply to find $\hat{q}(s, a, \textbf{w}) \approx q_{\pi}(s,a)$. Both MC and TD carry over exactly by substituting $\hat{q}$ for $\hat{v}$ inside the update expressions.
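As a sketch of that substitution (the one-hot state-action feature map and the target value are invented for the example):

```python
import numpy as np

n_states, n_actions = 3, 2

def x(s, a):
    """Hypothetical one-hot feature vector over (state, action) pairs."""
    phi = np.zeros(n_states * n_actions)
    phi[s * n_actions + a] = 1.0
    return phi

def q_update(w, s, a, q_target, alpha=0.5):
    """Same SGD rule as before, with q_hat(s,a,w) = x(s,a) . w in place of v_hat."""
    phi = x(s, a)
    return w + alpha * (q_target - phi @ w) * phi

w = np.zeros(n_states * n_actions)
for _ in range(50):
    w = q_update(w, s=1, a=0, q_target=2.0)   # made-up target for one pair

print(x(1, 0) @ w)   # estimate for (s=1, a=0) approaches the target
```

Only the feature map changes: it now indexes state-action pairs rather than states, and the rest of the machinery is untouched.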

Improvements

Gradient TD

Some more recent improvements aim to resolve the convergence failures of off-policy TD algorithms. This gave birth to the Gradient TD algorithm, which converges in both the linear and non-linear cases. It requires an additional parameter vector $\textbf{h}$ to be learned, which represents the gradient of the projected Bellman error. In a similar fashion, a Gradient Q-learning algorithm has also been developed, but with no guarantee of convergence for non-linear models.
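A sketch of one gradient-TD-style update rule with the auxiliary weight vector $\textbf{h}$ (this follows the TDC-style correction; the two-state cycle, step sizes, and features are all invented for illustration):

```python
import numpy as np

gamma, alpha, beta = 0.9, 0.05, 0.1
x = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}  # one-hot features

w = np.zeros(2)   # value weights
h = np.zeros(2)   # auxiliary weights for the gradient correction
for _ in range(20000):
    # deterministic cycle: state 0 -(r=0)-> state 1 -(r=1)-> state 0
    for s, r, sp in [(0, 0.0, 1), (1, 1.0, 0)]:
        phi, phi_p = x[s], x[sp]
        delta = r + gamma * (phi_p @ w) - phi @ w           # TD error
        w += alpha * (delta * phi - gamma * phi_p * (phi @ h))  # corrected TD step
        h += beta * (delta - phi @ h) * phi                 # track expected TD error

print(w)   # approaches the true values of the cycle
```

The extra term $-\gamma\,\textbf{x}(S')(\textbf{x}(S)^T\textbf{h})$ is what distinguishes this from plain TD(0); as the TD errors shrink, $\textbf{h}$ decays toward zero and the correction vanishes.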

Least Squares Prediction and Experience Replay

The least-squares (LS) estimator is known to approximate $\textbf{w}$ well in general. So rather than approximating $\textbf{w}$ directly with incremental updates, it can also be attractive to target the least-squares solution $LS(\textbf{w})$ instead.

It is found that SGD with Experience Replay converges in this case. By "Experience Replay" we mean storing the history of transitions instead of discarding them after each iteration, and then randomly sampling some of this stored "data" for the stochastic updates in SGD.
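A minimal sketch of such a replay buffer (capacity, batch size, and the transition format are arbitrary choices for the example):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores transitions instead of discarding them; samples random minibatches."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest items drop off when full

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        """Uniform random minibatch; breaks the temporal correlation of the stream."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(250):                 # push more transitions than the capacity
    buf.push(t, 0, 1.0, t + 1, False)

batch = buf.sample(32)
print(len(buf), len(batch))          # buffer holds only the most recent 100
```

Sampling uniformly from stored history is what decorrelates consecutive updates, which is the property the convergence result relies on.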

Deep Q-Networks (DQN)

In general, LS-based methods work well in terms of convergence but suffer from high computational cost, which motivates methods like DQN that combine non-linear approximation with experience replay.
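One way to see the computational trade-off is the closed-form least-squares TD (LSTD) solve sketched below: it needs no step sizes, but the direct solve costs $O(d^3)$ in the feature dimension $d$, which becomes prohibitive for large feature vectors. The two-state cycle is invented for illustration:

```python
import numpy as np

gamma = 0.9
x = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}  # one-hot features

# Accumulate A = sum x(S)(x(S) - gamma x(S'))^T and b = sum R x(S)
# over observed transitions, then solve A w = b once.
d = 2
A = np.zeros((d, d))
b = np.zeros(d)
for s, r, sp in [(0, 0.0, 1), (1, 1.0, 0)] * 50:   # cycle 0 -> 1 -> 0 -> ...
    phi, phi_p = x[s], x[sp]
    A += np.outer(phi, phi - gamma * phi_p)
    b += r * phi

w = np.linalg.solve(A, b)   # one O(d^3) solve, no learning rates to tune
print(w)                    # the TD fixed point, computed in one shot
```

On this tiny problem the solve recovers the exact TD fixed point from the accumulated statistics, where incremental TD would need many sweeps to get close.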