ABSTRACT

Deep reinforcement learning (DRL) can learn and solve complex and challenging tasks that are difficult for traditional robot navigation methods. In this study, we investigate DRL-based robot navigation with obstacle detection in a dynamic environment. The robot has no prior knowledge of the environment in the navigation tasks. In addition, the positions of the randomly moving obstacle and the target change in each navigation task. This makes the navigation task considerably harder, and the classical reward system performs poorly under these conditions. To increase the efficiency of the algorithm, we propose a new adaptive reward system to replace the classical reward system used in the Deep Q-Network (DQN) algorithm of DRL. The results obtained with the classical reward system and the proposed adaptive reward system were compared with other studies in the literature. These comparisons show that the proposed reward system achieves a higher success rate and reaches a greater number of consecutive targets.