Recent Advances and Applications of Optimal Control and Reinforcement Learning in Guidance and Navigation Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: closed (20 August 2021) | Viewed by 12945

Special Issue Editors


Prof. Sungsu Park
Guest Editor
Department of Aerospace Engineering, Sejong University, Seoul 05006, Korea
Interests: optimal control and reinforcement learning with applications to aerospace systems

Prof. Keeyoung Choi
Guest Editor
Department of Aerospace Engineering, Inha University, Incheon 22212, Korea
Interests: helicopter dynamics; UAV control systems; AI-based pilot assistant systems

Special Issue Information

Dear Colleagues,

The optimal control approach has provided fruitful solutions to many engineering problems, including guidance, navigation, and control systems, over the last few decades. It has widened our understanding of complex control systems and contributed to the flourishing of modern control theory. However, it is often regarded as a theoretical design method that produces complex control laws and relies heavily on well-specified mathematical models.

Reinforcement learning, on the other hand, makes model-free predictions and produces actions from data obtained by interacting with complex environments. Recently, dramatic progress in reinforcement learning has opened up tremendous opportunities in various fields of science and engineering. It is also expected to play an important role in more challenging guidance, navigation, and control applications, including autonomous cars, robots, and aerial systems that require agile dynamics.

However, considering its potential, reinforcement learning is still at an early stage and has many challenges to overcome. Chief among them are developing efficient ways of using big data and guaranteeing safe and reliable interaction with complex and uncertain environments. Since efficiency, safety, and reliability are also core topics of optimal control theory, combining existing optimal control and reinforcement learning theories holds great promise for compensating for the weaknesses of each and for gaining wide acceptance in practical applications.

The aim of this Special Issue is to invite papers on recent advances in the theory and applications of optimal control and reinforcement learning in guidance, navigation, and control systems, as well as to showcase novel results from combining optimal control and reinforcement learning for unmanned vehicles, robots, and other dynamic systems. Potential topics of the Special Issue include, but are not limited to, the following:

- emerging technologies in optimal control and reinforcement learning

- novel applications of optimal control and reinforcement learning in guidance, navigation, and control systems

- combinations of optimal control and reinforcement learning technology

- reinforcement learning-based path planning and tracking

- reinforcement learning-based modeling and parameter optimization

- methods for supplementing reinforcement learning with optimal control theory

- structures and training methods for deep reinforcement learning networks

Prof. Sungsu Park
Prof. Keeyoung Choi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimal control
  • reinforcement learning
  • guidance, navigation and control
  • path planning and tracking
  • deep neural network
  • autonomous vehicle

Published Papers (3 papers)


Research

12 pages, 3623 KiB  
Article
Deep Neural Network-Based Guidance Law Using Supervised Learning
by Minjeong Kim, Daseon Hong and Sungsu Park
Appl. Sci. 2020, 10(21), 7865; https://doi.org/10.3390/app10217865 - 06 Nov 2020
Cited by 8 | Viewed by 3595
Abstract
This paper proposes that the deep neural network-based guidance (DNNG) law replace the proportional navigation guidance (PNG) law. This approach is realized by adopting a supervised learning (SL) method using a large amount of simulation data from a missile system with PNG. The proposed DNNG is then compared with PNG, and its performance is evaluated via the hitting rate and an energy function. In addition, a DNN-based guidance law with only the line-of-sight (LOS) rate as an input variable (DNNLG) is introduced and compared with the PNG and DNNG laws. Finally, the behavior of the DNNG and DNNLG laws is examined at initial positions outside the training data.
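
The paper's exact architecture and data-generation setup are not reproduced here, but the core idea of the approach — generating (state, command) pairs from a PNG law and fitting a small network to them by supervised learning — can be sketched as follows. The navigation ratio, sampling ranges, and network size below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: imitating a proportional navigation guidance (PNG) law with
# a small neural network via supervised learning. PNG commands a lateral
# acceleration a = N * Vc * lambda_dot, where N is the navigation ratio, Vc
# the closing speed, and lambda_dot the LOS rate. N = 3, the sampling ranges,
# and the network size are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

N_RATIO = 3.0  # navigation ratio (assumed)

def png_command(v_c, los_rate):
    """Classical PNG lateral-acceleration command."""
    return N_RATIO * v_c * los_rate

# Generate synthetic training pairs: inputs (Vc, LOS rate) -> PNG command.
torch.manual_seed(0)
v_c = torch.empty(10_000, 1).uniform_(200.0, 1000.0)   # closing speed [m/s]
los_rate = torch.empty(10_000, 1).uniform_(-0.2, 0.2)  # LOS rate [rad/s]
x = torch.cat([v_c, los_rate], dim=1)
y = png_command(v_c, los_rate)

# Small MLP standing in for the paper's DNN guidance law.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):               # full-batch gradient descent
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final imitation MSE: {loss.item():.4f}")
```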

13 pages, 6515 KiB  
Article
Study on Reinforcement Learning-Based Missile Guidance Law
by Daseon Hong, Minjeong Kim and Sungsu Park
Appl. Sci. 2020, 10(18), 6567; https://doi.org/10.3390/app10186567 - 20 Sep 2020
Cited by 23 | Viewed by 5701
Abstract
Reinforcement learning is generating considerable interest as a means of building guidance laws and solving optimization problems that were previously difficult to solve. Since reinforcement learning-based guidance laws often show better robustness than previously optimized algorithms, several studies have been carried out on the subject. This paper presents a new approach to training a missile guidance law by reinforcement learning and introduces some of its notable characteristics. The novel missile guidance law shows better robustness to controller-model mismatch than proportional navigation guidance. The neural network in this paper has the same inputs as proportional navigation guidance, which makes the comparison fair and distinguishes this work from other research. The proposed guidance law is compared to proportional navigation guidance, which is widely known as a quasi-optimal missile guidance law. Our work aims to find effective training methods for missile guidance through reinforcement learning and to assess how much better the new method is. Additionally, with the derived policy, we examine which law performs better and under which circumstances. A novel training methodology is proposed first, followed by the performance comparison results.
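
The paper trains on the same inputs as PNG; as a rough illustration of that idea, the sketch below trains a small policy on (closing speed, LOS rate) with REINFORCE in a toy planar pursuit environment. The dynamics, reward, network, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch: reinforcement learning of a planar pursuit policy whose
# observations match PNG's inputs (closing speed, LOS rate). Trained with
# REINFORCE; reward is negative miss distance. All quantities are assumed.
import math
import torch
import torch.nn as nn
from torch.distributions import Categorical

DT, STEPS, SPEED, A_MAX = 0.1, 150, 250.0, 30.0  # step [s], speed [m/s], accel [m/s^2]

policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout():
    """One pursuit episode; returns summed log-probs and the episode reward."""
    pos = torch.tensor([0.0, 0.0])
    psi = 0.0                                    # missile heading [rad]
    tgt = torch.tensor([4000.0, 600.0])          # stationary target (assumed)
    prev_los = math.atan2(float(tgt[1] - pos[1]), float(tgt[0] - pos[0]))
    logps, miss = [], float("inf")
    for _ in range(STEPS):
        rel = tgt - pos
        miss = min(miss, float(rel.norm()))
        los = math.atan2(float(rel[1]), float(rel[0]))
        los_rate = (los - prev_los) / DT
        prev_los = los
        vel = SPEED * torch.tensor([math.cos(psi), math.sin(psi)])
        v_c = float(rel @ vel) / float(rel.norm())       # closing speed
        obs = torch.tensor([v_c / SPEED, los_rate])      # crude normalization
        dist = Categorical(logits=policy(obs))
        act = dist.sample()
        logps.append(dist.log_prob(act))
        a = A_MAX if act.item() == 1 else -A_MAX         # bang-bang lateral accel
        psi += (a / SPEED) * DT                          # turn-rate kinematics
        pos = pos + vel * DT
    return torch.stack(logps).sum(), -miss               # reward = -miss distance

for it in range(50):                                     # training iterations
    logps, rewards = zip(*[rollout() for _ in range(16)])
    rewards = torch.tensor(rewards)
    adv = rewards - rewards.mean()                       # mean baseline
    loss = -(torch.stack(logps) * adv).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```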

13 pages, 1181 KiB  
Article
Optimal Control Approach to Lambert’s Problem and Gibbs’ Method
by Minjeong Kim and Sungsu Park
Appl. Sci. 2020, 10(7), 2419; https://doi.org/10.3390/app10072419 - 01 Apr 2020
Cited by 2 | Viewed by 2963
Abstract
This paper presents an optimal control approach to solving both Lambert's problem and Gibbs' method, which are commonly used for preliminary orbit determination. Lambert's problem is reinterpreted with Hamilton's principle and converted into an optimal control problem. Various extended Lambert's problems are formulated by modifying the weighting and constraint settings within the optimal control framework. Furthermore, Gibbs' method is also converted into an extended Lambert's problem with two position vectors and one orbit energy with the help of the proposed orbital energy computation algorithm. The proposed extended Lambert's problem and Gibbs' method are numerically solved with the Lobatto pseudospectral method, and their accuracy is verified by numerical simulations.
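
The paper solves these problems with a Lobatto pseudospectral method; as a lighter-weight illustration of the two-point boundary-value structure of Lambert's problem (reach position r2 from r1 in a fixed time of flight), the sketch below finds the departure velocity by single shooting on two-body dynamics. The boundary conditions and canonical units (mu = 1) are illustrative assumptions.

```python
# Minimal sketch: Lambert's problem solved by single shooting. Guess the
# departure velocity, integrate two-body dynamics for the time of flight,
# and drive the terminal position error to zero with a root-finder.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

MU = 1.0  # gravitational parameter in canonical units (assumed)

def two_body(t, s):
    """Two-body dynamics: state s = [position (3), velocity (3)]."""
    r, v = s[:3], s[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def shoot(v1, r1, r2, tf):
    """Residual: terminal position error for a guessed departure velocity."""
    sol = solve_ivp(two_body, (0.0, tf), np.concatenate([r1, v1]),
                    rtol=1e-10, atol=1e-10)
    return sol.y[:3, -1] - r2

r1 = np.array([1.0, 0.0, 0.0])        # departure position (assumed)
r2 = np.array([0.0, 1.1, 0.0])        # arrival position (assumed)
tf = 1.5                              # time of flight (assumed)

v_guess = np.array([0.0, 1.0, 0.0])   # near-circular prograde initial guess
v1 = fsolve(shoot, v_guess, args=(r1, r2, tf), xtol=1e-12)
print("departure velocity:", v1)
print("terminal position error:", np.linalg.norm(shoot(v1, r1, r2, tf)))
```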
