In conclusion, the Q-learning agent converged to a sub-optimal policy, as discussed previously. Furthermore, a portion of the environment remains unexplored by the Q-function, which prevents the agent from finding the new optimal path when the purple portal appears after the 100th episode.
These performance limitations can be attributed to the relatively low number of training steps (400), which limits the opportunity for interaction with the environment, and to the exploration induced by the ε-greedy policy.
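As a reminder, the ε-greedy policy mentioned above can be sketched as follows. This is a minimal illustration, not the article's exact implementation; the state/action sizes and the seed are assumptions.

```python
import numpy as np

def epsilon_greedy(q_values, state, epsilon, rng):
    """With probability epsilon pick a random action (explore),
    otherwise pick the greedy action (exploit)."""
    n_actions = q_values.shape[1]
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))   # explore
    return int(np.argmax(q_values[state]))    # exploit

# Hypothetical table: 10 states, 4 actions.
rng = np.random.default_rng(0)
q = np.zeros((10, 4))
action = epsilon_greedy(q, state=3, epsilon=0.1, rng=rng)
```

With a small ε and few training steps, most actions are greedy, which is why large parts of the environment can remain unvisited.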
Planning, an essential component of model-based reinforcement learning methods, is particularly useful for improving sample efficiency and the estimation of action values. Dyna-Q and Dyna-Q+ are good examples of TD algorithms that incorporate planning steps.
The Dyna-Q algorithm (Dynamic Q-learning) is a combination of model-based RL and TD learning.
Model-based RL algorithms rely on a model of the environment to incorporate planning as their primary way of updating value estimates. In contrast, model-free algorithms rely on direct learning.
“A model of the environment is anything that an agent can use to predict how the environment will respond to its actions” — Reinforcement Learning: An Introduction.
Within the scope of this article, the model can be seen as an approximation of the transition dynamics p(s′, r|s, a). Here, p returns a single next-state and reward pair given the current state-action pair.
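For a deterministic, tabular environment, such a model can be as simple as a lookup table from state-action pairs to observed outcomes. The sketch below is a minimal illustration under that assumption; the helper names and encodings are hypothetical.

```python
# Tabular model: maps (state, action) -> (reward, next_state).
model = {}

def update_model(model, state, action, reward, next_state):
    """Store the observed transition so it can be replayed during planning."""
    model[(state, action)] = (reward, next_state)

def query_model(model, state, action):
    """Predict the environment's response to (state, action)."""
    return model[(state, action)]

# After observing: taking action 1 in state 0 gives reward -1.0 and leads to state 4.
update_model(model, state=0, action=1, reward=-1.0, next_state=4)
prediction = query_model(model, 0, 1)  # (-1.0, 4)
```

Because the environment is assumed deterministic, replaying a stored transition reproduces exactly what was observed.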
In environments where p is stochastic, we distinguish between distribution models and sample models: the former returns a distribution over next states and rewards, while the latter returns a single pair sampled from the estimated distribution.
Models are especially useful for simulating episodes, and therefore for training the agent by replacing real-world interactions with planning steps, i.e. interactions with the simulated environment.
Agents implementing the Dyna-Q algorithm belong to the class of planning agents, agents that combine direct reinforcement learning and model learning. They use direct interactions with the environment to update their value function (as in Q-learning) and also to learn a model of the environment. After each direct interaction, they can also perform planning steps to update their value function using simulated interactions.
A quick chess example
Imagine playing a good game of chess. After each move, your opponent's response allows you to assess the quality of your move. This is similar to receiving a positive or negative reward, which allows you to “update” your strategy. If your move leads to a blunder, you probably wouldn't play it again, given the same configuration of the board. So far, this is comparable to direct reinforcement learning.
Now let's add planning to the mix. Imagine that after each of your moves, while your opponent is thinking, you mentally go back over each of your previous moves to reassess their quality. You might find weaknesses that you neglected at first sight, or discover that specific moves were better than you thought. These thoughts may also allow you to update your strategy. This is exactly what planning is about: updating the value function without interacting with the real environment, but rather with a model of said environment.
Dyna-Q therefore contains some additional steps compared to Q-learning:
- After each direct update of the Q-values, the model stores the state-action pair and the observed reward and next state. This step is called model training.
- After model training, Dyna-Q performs n planning steps:
  - A random state-action pair is selected from the model buffer (i.e. this state-action pair was observed during direct interactions).
  - The model generates the simulated reward and next state.
  - The value function is updated using the simulated observations (s, a, r, s′).
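The steps above can be sketched as a single Dyna-Q update, combining the direct Q-learning update, model training, and n planning steps. This is a minimal tabular sketch; the state/action counts and hyperparameters (ALPHA, GAMMA, N_PLANNING) are illustrative assumptions, not values from the article.

```python
import numpy as np

# Hypothetical sizes and hyperparameters for illustration.
N_STATES, N_ACTIONS = 25, 4
ALPHA, GAMMA, N_PLANNING = 0.1, 0.95, 10

q = np.zeros((N_STATES, N_ACTIONS))
model = {}  # (s, a) -> (r, s'), filled during direct interactions
rng = np.random.default_rng(0)

def dyna_q_update(s, a, r, s_next):
    # 1) Direct RL: standard Q-learning update from the real transition.
    q[s, a] += ALPHA * (r + GAMMA * q[s_next].max() - q[s, a])
    # 2) Model training: store the observed transition.
    model[(s, a)] = (r, s_next)
    # 3) Planning: replay n transitions sampled from previously observed pairs.
    keys = list(model.keys())
    for _ in range(N_PLANNING):
        ps, pa = keys[rng.integers(len(keys))]
        pr, ps_next = model[(ps, pa)]
        q[ps, pa] += ALPHA * (pr + GAMMA * q[ps_next].max() - q[ps, pa])
```

With N_PLANNING = 0 this reduces exactly to Q-learning; increasing it squeezes more value updates out of each real interaction, which is the sample-efficiency gain planning provides.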