Potential-based Reward Shaping in Sokoban. (arXiv:2109.05022v1 [cs.LG])

Learning to solve sparse-reward reinforcement learning problems is difficult
due to the lack of guidance towards the goal, but in some problems prior
knowledge can be used to augment the learning process. Reward shaping is a way
to incorporate such prior knowledge into the original reward function in order
to speed up learning. While previous work has investigated the use of expert
knowledge to generate potential functions, in this work we study whether a
search algorithm (A*) can be used to automatically generate a potential
function for reward shaping in Sokoban, a well-known planning task. The
results show that learning with the shaped reward function is faster than
learning from scratch, and they indicate that distance functions are suitable
potential functions for Sokoban. This work also demonstrates the possibility
of solving multiple instances with the help of reward shaping: the results can
be compressed into a single policy, which can be seen as a first phase towards
training a general policy that is able to solve unseen instances.
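For readers unfamiliar with the technique, below is a minimal sketch of potential-based reward shaping (Ng et al., 1999) with a distance-based potential, illustrated on a toy grid-navigation problem rather than a full Sokoban state (which would also include box positions). The grid layout, discount factor, and the `astar_cost` helper are illustrative assumptions, not the paper's implementation.

```python
import heapq

# Toy setup (assumed for illustration): states are (x, y) positions
# on a 5x5 grid with a fixed goal and a few walls.
GAMMA = 0.99          # discount factor (assumed)
GOAL = (4, 4)
WALLS = {(2, 1), (2, 2), (2, 3)}

def astar_cost(start, goal=GOAL):
    """A* with a Manhattan-distance heuristic; returns the path cost
    from `start` to `goal` (inf if unreachable)."""
    def h(s):
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, state)
    best_g = {}
    while frontier:
        _, g, s = heapq.heappop(frontier)
        if s == goal:
            return g
        if s in best_g and best_g[s] <= g:
            continue
        best_g[s] = g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (s[0] + dx, s[1] + dy)
            if 0 <= n[0] < 5 and 0 <= n[1] < 5 and n not in WALLS:
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, n))
    return float("inf")

def potential(state):
    # Distance-based potential: states closer to the goal get a
    # higher (less negative) potential.
    return -astar_cost(state)

def shaped_reward(reward, state, next_state):
    # Ng et al. (1999): F(s, a, s') = gamma * Phi(s') - Phi(s).
    # Adding F to the original sparse reward preserves the set of
    # optimal policies while providing step-by-step guidance.
    return reward + GAMMA * potential(next_state) - potential(state)

# A step toward the goal yields a positive shaping bonus even when
# the original sparse reward is zero.
print(shaped_reward(0.0, (0, 0), (0, 1)))
```

Because the shaping term is a difference of potentials, it provides dense feedback on every transition without changing which policies are optimal for the original sparse-reward problem.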

Source: https://arxiv.org/abs/2109.05022
