World war 2 air combat maneuvers
1/8/2024

With the development of information technology, the degree of intelligence in air combat is increasing, and the demand for automated intelligent decision-making systems is growing more urgent. To improve the efficiency with which reinforcement learning algorithms explore the strategy space, this paper proposes a heuristic Q-Network method that integrates expert experience, using that experience as a heuristic signal to guide the search process. Based on the characteristics of over-the-horizon air combat, the paper constructs an over-the-horizon air combat training environment, which includes aircraft modeling, air combat scene design, enemy aircraft strategy design, and reward and punishment signal design. Aiming at the over-the-horizon maneuver decision problem, the heuristic Q-Network method is adopted to train the neural network model in this environment, combining heuristic exploration with random exploration.

With the development of artificial intelligence and integrated sensor technologies, unmanned aerial vehicles (UAVs) are increasingly applied in air combat. A bottleneck that constrains the capability of UAVs against manned vehicles is autonomous maneuver decision, a very challenging problem in short-range air combat, where enemies undergo highly dynamic and uncertain maneuvers. In this paper, an autonomous maneuver decision model is proposed for UAV short-range air combat based on reinforcement learning; it mainly includes an aircraft motion model, a one-to-one short-range air combat evaluation model, and a maneuver decision model based on a deep Q-network (DQN). However, such a model involves a high-dimensional state and action space, which imposes a huge computational load for DQN training with traditional methods. A phased training method called "basic-confrontation", based on the idea that human beings gradually learn from simple to complex, is therefore proposed to reduce the training time while obtaining suboptimal but efficient results. Finally, one-to-one short-range air combats are simulated under different target maneuver policies. Simulation results show that the proposed maneuver decision model and training method can help the UAV achieve autonomous decision-making in air combat and obtain an effective decision policy to defeat the opponent.

The primary goal of this project was to explore the applicability of artificial neural network (NN) models in the domain of air combat maneuvering (ACM). The work investigated several models: (a) NN models that select ACM on the basis of training with the production rules of a model, Air Combat Expert Simulation (ACES); (b) NN models that mimic the action selections of the Automated Maneuvering Logic (AML) system; (c) NN models that predict the outcome of engagements flown in the Simulator for Air-to-Air Combat (SAAC), given summary measures of various parameters recorded during the engagements; and (d) NN models that predict future aircraft control inputs in SAAC engagements, given the values of flight parameters at particular points in time. These models incorporate knowledge about air combat maneuvers and their components, as well as rudimentary knowledge about maneuver planning and situational awareness. For most of the models, validation tests were conducted using data different from that used in training. The authors provide details on each of these efforts, along with a review of the ACES model, a presentation of the basics of NNs, and an overview of a software system developed for implementing and testing the NN models.

Keywords: air combat, flight simulation, performance measurement, air combat maneuvering, flight simulators, flight training, neural networks.
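The idea of mixing heuristic (expert-guided) exploration with random exploration can be sketched as an extended epsilon-greedy rule. The sketch below is a minimal illustration under assumed names; the parameters `epsilon`, `eta`, and `expert_action` are illustrative choices, not taken from the paper itself.

```python
import random

def select_action(q_values, expert_action, epsilon=0.2, eta=0.5):
    """Epsilon-greedy action selection with a heuristic (expert) signal.

    With probability 1 - epsilon, exploit the learned Q-values. When
    exploring, follow the expert-suggested action with probability eta
    (heuristic exploration) and pick a uniformly random action otherwise
    (random exploration). Names and defaults here are hypothetical.
    """
    n_actions = len(q_values)
    if random.random() < epsilon:             # explore
        if random.random() < eta:             # heuristic exploration
            return expert_action
        return random.randrange(n_actions)    # random exploration
    # exploit: greedy action w.r.t. the current Q estimates
    return max(range(n_actions), key=lambda a: q_values[a])
```

Setting `eta` high early in training biases exploration toward expert behavior, and annealing it toward zero recovers standard epsilon-greedy exploration.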
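For readers unfamiliar with DQN training, the network is regressed toward a one-step temporal-difference target. The helper below is a generic textbook sketch of that target, not code from the paper; `next_q_values` stands for the (target) network's estimates for the next state.

```python
def dqn_td_target(reward, next_q_values, done, gamma=0.99):
    """One-step TD target for DQN training (generic sketch):

        y = r                               if the episode terminated
        y = r + gamma * max_a Q'(s', a)     otherwise

    where Q' denotes the target network's estimate for the next state.
    """
    if done:
        return reward
    return reward + gamma * max(next_q_values)
```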