Published on Fri Apr 09 2021

Jamming-Resilient Path Planning for Multiple UAVs via Deep Reinforcement Learning

Xueyuan Wang, M. Cenk Gursoy, Tugba Erpek, Yalin E. Sagduyu
Abstract

Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks. In this paper, we aim to find collision-free paths for multiple cellular-connected UAVs while satisfying connectivity requirements with ground base stations (GBSs) in the presence of a dynamic jammer. We first formulate the problem as a sequential decision-making problem in a discrete domain with connectivity, collision avoidance, and kinematic constraints. We then propose an offline temporal difference (TD) learning algorithm with online signal-to-interference-plus-noise ratio (SINR) mapping to solve the problem. More specifically, a value network is constructed and trained offline by the TD method to encode the interactions among the UAVs and between the UAVs and the environment, and an online SINR-mapping deep neural network (DNN) is designed and trained by supervised learning to encode the influence of and changes due to the jammer. Numerical results show that, without any information on the jammer, the proposed algorithm achieves performance close to that of the ideal scenario with a perfect SINR map. Real-time navigation for multiple UAVs can be performed efficiently with high success rates while collisions are avoided.
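The abstract describes a two-network design: a value network trained offline with temporal-difference (TD) targets, and an SINR-mapping DNN trained by supervised learning and queried online to capture the jammer's effect. The sketch below is a minimal illustration of that structure in PyTorch; the state/feature dimensions, network widths, reward shaping, and discount factor are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of the two-network structure:
# an offline TD-trained value network plus a supervised SINR-mapping DNN.
import torch
import torch.nn as nn

STATE_DIM = 16    # assumed joint UAV state features (positions, velocities, goals)
SINR_IN = 3       # assumed input to the SINR map: a 3D UAV position
GAMMA = 0.95      # assumed discount factor

# Value network: trained offline with temporal-difference (TD) targets.
value_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# SINR-mapping DNN: trained by supervised regression on measured SINR samples,
# used online to reflect the unknown jammer's effect on connectivity.
sinr_net = nn.Sequential(
    nn.Linear(SINR_IN, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

v_opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
s_opt = torch.optim.Adam(sinr_net.parameters(), lr=1e-3)
mse = nn.MSELoss()

def td_update(state, reward, next_state, done):
    """One TD(0) step on the value network for a transition (s, r, s')."""
    with torch.no_grad():
        target = reward + GAMMA * value_net(next_state) * (1.0 - done)
    loss = mse(value_net(state), target)
    v_opt.zero_grad()
    loss.backward()
    v_opt.step()
    return loss.item()

def sinr_update(position, measured_sinr):
    """Supervised regression step mapping a 3D position to measured SINR."""
    loss = mse(sinr_net(position), measured_sinr)
    s_opt.zero_grad()
    loss.backward()
    s_opt.step()
    return loss.item()

# Example usage with dummy tensors (batch size 1):
s, s_next = torch.randn(1, STATE_DIM), torch.randn(1, STATE_DIM)
td_update(s, torch.tensor([[1.0]]), s_next, torch.tensor([[0.0]]))
sinr_update(torch.randn(1, SINR_IN), torch.tensor([[5.0]]))
```

Separating the two networks mirrors the abstract's split between offline training (UAV interactions, environment) and online adaptation (jammer-induced SINR changes).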

Sat Apr 03 2021
Machine Learning
Learning-Based UAV Trajectory Optimization with Collision Avoidance and Connectivity Constraints
Tue Jan 16 2018
Artificial Intelligence
Cellular-Connected UAVs over 5G: Deep Reinforcement Learning for Interference Management
An interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. Each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing interference. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium (SPNE) upon convergence.
Thu Aug 05 2021
Machine Learning
RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning
Many of the devices used in Internet-of-Things (IoT) applications are energy-limited. We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
Tue Mar 17 2020
Machine Learning
Simultaneous Navigation and Radio Mapping for Cellular-Connected UAV with Deep Reinforcement Learning
Cellular-connected unmanned aerial vehicle (UAV) is a promising technology to unlock the full potential of UAVs in the future. How to achieve ubiquitous three-dimensional (3D) communication coverage for UAVs in the sky is a new challenge.
Wed Jun 12 2019
Artificial Intelligence
Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks
Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. UAVs may serve as relays with the advantages of low price, easy deployment, line-of-sight links, and flexible mobility.
Thu May 09 2019
Machine Learning
Path Design for Cellular-Connected UAV with Reinforcement Learning
The proposed algorithms only require the raw measured or simulation-generated signal strength as the input. Numerical results show that the proposed path designs can successfully avoid the coverage holes of cellular networks even in complex urban environments.