eprintid: 15728
rev_number: 2
eprint_status: archive
userid: 1
dir: disk0/00/01/57/28
datestamp: 2023-11-10 03:30:21
lastmod: 2023-11-10 03:30:21
status_changed: 2023-11-10 02:00:14
type: article
metadata_visibility: show
creators_name: Ejaz, M.M.
creators_name: Tang, T.B.
creators_name: Lu, C.-K.
title: A fast learning approach for autonomous navigation using a deep reinforcement learning method
ispublished: pub
keywords: Air navigation; Convolution; Convolutional neural networks; Deep learning; Learning systems; Reinforcement learning; Robots; Virtual reality, Autonomous navigation; Computational power; Fast learning; Learning approach; Learning process; Learning-based methods; Performance; Reinforcement learning method; Reinforcement learnings; Tracked robot, Network architecture
note: cited By 1
abstract: Deep reinforcement learning-based methods demand a large amount of computational power, which slows the learning process. This paper proposes a novel approach to speed up the training process and improve the performance of autonomous navigation for a tracked robot. The proposed model, named 'layer normalization dueling double deep Q-network', was trained in a virtual environment and then deployed on a tracked robot for testing in a real-world scenario. Depth images are used instead of RGB images to preserve the temporal information. Features are extracted using convolutional neural networks, and actions are derived using the dueling double deep Q-network. The input data is normalized before each convolutional layer, which reduces the covariate shift by 69%. This end-to-end network architecture provides stability to the network, relieves the computational burden, and converges in far fewer episodes. Compared with three Q-variant models, the proposed model demonstrates outstanding performance in terms of episodic reward and convergence rate. The proposed model required 12.8% fewer training episodes than the other models. © 2021 The Authors. Electronics Letters published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
date: 2021
publisher: John Wiley and Sons Inc
official_url: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85108902731&doi=10.1049%2fell2.12057&partnerID=40&md5=c8f9304a5a4360dec387fd511a7fafb6
id_number: 10.1049/ell2.12057
full_text_status: none
publication: Electronics Letters
volume: 57
number: 2
pagerange: 50-53
refereed: TRUE
issn: 0013-5194
citation: Ejaz, M.M. and Tang, T.B. and Lu, C.-K. (2021) A fast learning approach for autonomous navigation using a deep reinforcement learning method. Electronics Letters, 57 (2). pp. 50-53. ISSN 0013-5194
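
The abstract describes a dueling double deep Q-network over depth images with layer normalization applied to the input of each convolutional layer. The sketch below is a minimal PyTorch illustration of that kind of architecture, not the authors' implementation: the input resolution (84x84 depth frames), channel widths, hidden sizes, and action count are assumptions made only for the example.

```python
# Illustrative sketch only. Shapes and widths are assumed; the paper's
# actual network details are not given in this record.
import torch
import torch.nn as nn

class LNDuelingDQN(nn.Module):
    def __init__(self, num_actions: int = 5):
        super().__init__()
        # Layer normalization is applied before each convolutional layer.
        self.features = nn.Sequential(
            nn.LayerNorm([1, 84, 84]),
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.LayerNorm([32, 20, 20]),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.LayerNorm([64, 9, 9]),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Dueling heads: state value V(s) and action advantages A(s, a).
        self.value = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (N, 1, 84, 84) batch of single-channel depth images.
        x = self.features(depth)
        v = self.value(x)
        a = self.advantage(x)
        # Standard dueling aggregation: Q = V + (A - mean(A)).
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, next_depth, reward, done, gamma=0.99):
    # Double DQN target: the online network selects the next action,
    # the target network evaluates it.
    with torch.no_grad():
        best_a = online(next_depth).argmax(dim=1, keepdim=True)
        next_q = target(next_depth).gather(1, best_a).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In this sketch the layer normalization of the convolutional inputs plays the role the abstract attributes to it, reducing internal covariate shift, while the dueling and double-Q components are the standard formulations rather than anything specific to the paper.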