We analyze the DQN reinforcement learning algorithm as a stochastic
approximation scheme using the o.d.e. (for 'ordinary differential equation')
approach and point out certain theoretical issues. We then propose a modified
scheme called Full Gradient DQN (FG-DQN, for short) that has a sound
theoretical basis and compare it with the original scheme on sample problems.
We observe better performance with FG-DQN.
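To make the semi-gradient vs. full-gradient distinction concrete, here is a minimal sketch, not taken from the paper: it assumes a linear parameterization Q(s, a; theta) = theta[a] @ phi(s), and the function names, features, and step sizes are illustrative. Standard DQN treats the bootstrapped target as a constant, while the full-gradient variant also differentiates the squared Bellman error through the target term:

```python
import numpy as np

def semi_gradient_step(theta, phi_s, a, r, phi_s2, gamma, alpha):
    """DQN-style update: the target r + gamma * max_a' Q(s', a') is
    treated as a constant, so no gradient flows through it."""
    q_sa = theta[a] @ phi_s
    target = r + gamma * np.max(theta @ phi_s2)
    delta = target - q_sa                        # TD error
    theta = theta.copy()
    theta[a] += alpha * delta * phi_s            # gradient only through Q(s, a)
    return theta

def full_gradient_step(theta, phi_s, a, r, phi_s2, gamma, alpha):
    """Full-gradient update: descend the full gradient of the squared
    Bellman error, so the target term contributes a gradient as well."""
    q_sa = theta[a] @ phi_s
    a2 = int(np.argmax(theta @ phi_s2))          # greedy next action
    delta = r + gamma * theta[a2] @ phi_s2 - q_sa
    theta = theta.copy()
    theta[a] += alpha * delta * phi_s            # same term as above ...
    theta[a2] -= alpha * gamma * delta * phi_s2  # ... plus the target's gradient
    return theta
```

When the greedy next action differs from the current action, the two updates modify different parameter rows, which is the source of the differing asymptotic behavior analyzed via the o.d.e. approach.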