The Current Trends of Deep Learning in Autonomous Vehicles: A Review
by Raymond Ning Huang 1, Jing Ren 2,*, Hossam A. Gabbar 3
1 Department of Mechanical Engineering, University of Toronto, Toronto, M5S 1A4, Canada
2 Department of Electrical and Computer Engineering, Ontario Tech University, Oshawa, L1H 7K4, Canada
3 Department of Energy and Nuclear Engineering, Ontario Tech University, Oshawa, L1H 7K4, Canada
* Author to whom correspondence should be addressed.
Journal of Engineering Research and Sciences, Volume 1, Issue 10, pp. 56-68, 2022; DOI: 10.55708/js0110008
Keywords: Deep learning, Autonomous Vehicles, Control
Received: 15 August 2022, Revised: 07 October 2022, Accepted: 08 October 2022, Published Online: 31 October 2022
Autonomous vehicles are the future of road traffic. In addition to improving safety and efficiency by reducing errors relative to conventional vehicles, autonomous vehicles can also be deployed in applications that may be inconvenient or dangerous for a human driver. To realize this vision, seven essential technologies need to be developed and refined: path planning, computer vision, sensor fusion, data security, fault diagnosis, control, and communication and networking. The contributions and novelty of this paper are: (1) a comprehensive review of recent advances in using deep learning for autonomous vehicle research, (2) insights into several important aspects of this emerging area, and (3) five identified directions for future research. To the best of our knowledge, no previous work provides a similar review for autonomous vehicle design.
1. Introduction
Autonomous vehicles are the future of road traffic. As intelligent agents that combine sensing technologies, including GPS, inertial navigation systems (INS), lidar, and cameras, with advanced control systems, autonomous vehicles have important applications ahead. In addition to improving safety and efficiency by reducing errors relative to conventional vehicles, autonomous vehicles can also be deployed in applications that may prove difficult or unsafe for a human driver.
However, before they can be safely introduced commercially, several essential technologies necessary for the design and operation of autonomous vehicles must be developed and refined. The function of an autonomous vehicle can be simplified to planning and following a safe and efficient path from a given starting point to an endpoint under control constraints. To achieve this goal, it is important to understand the human visual system well and substitute for it an effective visual sensor system with robust data analytics. Beyond visual data, other sensory data such as location and inertial measurements can be integrated to improve safety and performance in control and path planning. In effect, more data can improve system safety and performance. A common strategy for increasing the amount of data AVs have access to is vehicle-to-vehicle (V2V) communication, which gives vehicles access to data from more sensors and to more accurate state information about surrounding vehicles. Beyond cellular V2V communication, it is also important to consider bandwidth efficiency and the use of unmanned aerial vehicles (UAVs) and other technologies to facilitate communication. However, as communication increases and deep learning is used more widely in AVs, it is essential to ensure that user data is secure and that vehicles are resistant to malicious attacks. Because autonomous vehicles require accurate sensory and system information, it is especially important to detect and diagnose faults. In this paper, we consider seven key technologies needed for autonomous vehicles: path planning, computer vision, sensor fusion, data security, fault diagnosis, control, and communication and networking.
While conventional methods have been tried for these technologies, deep learning is arguably the most promising approach, owing to its ability to accurately approximate complex nonlinear relationships using multi-layer transformations. The difference is especially pronounced in imaging tasks such as object recognition and image classification. The key types of deep neural networks are the Convolutional Neural Network (CNN), Deep Autoencoder (DAE), Deep Belief Network (DBN), and Deep Reinforcement Learning (DRL). DBNs and DAEs can perform unsupervised pre-training of the weights, which eases the subsequent supervised training of the deep networks. However, a fundamental problem with DBNs and DAEs is that there are too many weights to train when the inputs are raw signals or their time-frequency representations.
In contrast, CNNs avoid these issues by using strategies such as local receptive fields and weight sharing to reduce computational complexity during training. However, CNNs have a notable disadvantage: a tendency to become trapped in local minima during training. The last key approach, Deep Reinforcement Learning (DRL), can achieve exceptional results by using a Q-learning or policy gradient algorithm to maximize the rewards earned by the chosen actions.
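As a concrete illustration of the policy-gradient family mentioned above, the sketch below shows a minimal REINFORCE update: the log-probability of each action is increased in proportion to the return it earned. The policy network, episode data, and hyperparameters are illustrative placeholders, not taken from any cited work.

```python
import torch
import torch.nn as nn

# Minimal REINFORCE (policy gradient) sketch. The episode data below are
# random placeholders standing in for a real environment rollout.
obs_dim, n_actions, gamma = 4, 2, 0.99
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
optim = torch.optim.Adam(policy.parameters(), lr=1e-2)

# One fake episode: observations, sampled actions, and rewards.
obs = torch.randn(20, obs_dim)
dist = torch.distributions.Categorical(logits=policy(obs))
actions = dist.sample()
rewards = torch.randn(20)

# Discounted return-to-go for each step of the episode.
returns = torch.zeros(20)
running = 0.0
for t in reversed(range(20)):
    running = rewards[t] + gamma * running
    returns[t] = running

# Policy-gradient loss: negative log-likelihood weighted by returns.
loss = -(dist.log_prob(actions) * returns).mean()
optim.zero_grad()
loss.backward()
optim.step()
```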
Our paper is an extension of "Applying Deep Learning to Autonomous Vehicles: A Survey" [1]. We review recent headway in deep learning algorithms across these seven essential fields. Some relevant reviews have been published in recent years [2], where some new functions are discussed. The main contributions of our paper are:
- New sections on data security and on communication and networking.
- A review of the most recent papers on path planning, computer vision, sensor fusion, fault diagnosis, and control.
- Five identified directions for future research.
2. Deep Learning for Path Planning
In an ordinary environment, path planning aims to guide a vehicle along a collision-free path in which both static and dynamic obstacles are avoided. Planners can be model-based or model-free, can operate locally or globally, and typically optimize criteria such as shortest time or shortest distance. In the following section, we review recent literature on path planning for autonomous vehicles and the three primary deep learning tools applied: DRL, CNN, and Long Short-Term Memory (LSTM) networks.
CNN is commonly used for classification tasks due to its ability to extract features from images. While image classification problems differ from path planning tasks, CNN can be used to generate control signals from sensor data from sources such as cameras and lidars. In [3], a high-level control framework was created for the steering of autonomous vehicles. The authors mapped input camera data directly to a steering angle to implicitly solve the path planning task. More commonly, CNN is applied to extract features for a path planning subtask. In one study, an unmanned aerial vehicle (UAV) was tasked with generating a path through 26 gates in an indoor 3-D environment; here, a CNN was employed to detect the center of a gate, enabling guidance of the vehicle without collision in real time [4]. CNN can also be applied to tasks beyond simple 2-D and 3-D single-vehicle path planning. In [5], the authors applied 2-D CNNs to the path planning task of guiding multiple UAVs in a 3-D environment.
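To make the end-to-end idea concrete, the following is a minimal PyTorch sketch of a CNN that maps a camera frame directly to a steering angle, in the spirit of [3]. The layer sizes, input resolution, and training target are illustrative assumptions, not the architecture of the cited work.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Maps a camera frame directly to a steering angle (regression)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single continuous output: steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = SteeringCNN()
frame = torch.randn(1, 3, 120, 160)   # one RGB camera frame (illustrative size)
angle = model(frame)                  # predicted steering angle, shape (1, 1)
loss = nn.functional.mse_loss(angle, torch.zeros(1, 1))  # regression vs. label
```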
LSTM is another type of network used to process image data. As a recurrent neural network, it is often applied to sequential image data. In [6], the authors use a model in which an LSTM network extracts hidden features to aid in accurately planning sequential moves of AVs. LSTM has also been successfully applied to path planning in environments with vulnerable road users, which include pedestrians as well as smaller vehicles such as bicycles and motorcycles. In another study, the authors use a model-free planning approach with a deep stacked LSTM network to assess pedestrians' intentions and plan a vehicle's motion accordingly [7].
The last primary deep learning tool is deep reinforcement learning, which does not rely on labelled datasets and can achieve extraordinary results. DRL has recently been used in path planning applications for various types of unmanned vehicles in diverse environments. The first application involves navigation for road vehicles in mixed traffic of autonomous and traditional vehicles; here, DRL is used to generate a shareable driving policy that, unlike traditional control methods, does not need to account for factors such as system dynamics [8]. DRL has frequently been applied to path planning in environments with cluttered obstacles and rough terrain. For robots employed in urban search and rescue missions, the authors in [9] proposed a path planning method using DRL that takes depth images, elevation maps, and orientation as input and generates navigation actions. In [10], the authors apply DRL to path planning in large, complex environments where simultaneous localization and mapping (SLAM) and other conventional methods are less effective and lose accuracy due to computational constraints; DRL is instead used to map sensor input data directly to control directives. While conventional control strategies for aerial vehicles are relatively mature, they still rely on human intervention, and deep learning methods are promising for UAV control. One such method uses a deterministic policy gradient approach for path planning of multiple aerial vehicles; it achieves real-time performance in a dynamic environment because its only required parameters are the locations of threat areas, targets, and other UAVs [11]. With applications in search and rescue missions as well as aerial inspection of oil and gas fields, the authors in [12] propose a UAV path planning strategy inspired by the Nokia Snake game; this control policy can plan more complex paths than comparable previous DRL methods. In [13], path planning for drones is performed with a DQN method, which combines deep learning with temporal-difference learning; the authors used a drone equipped with ultrasonic sensors in a 3D simulation environment. In [14], a similar DQN-based approach was used for local path planning of unmanned surface vehicles equipped with 360-degree lidar in a 3D simulation environment.
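The sketch below illustrates the temporal-difference target at the heart of DQN methods such as those in [13] and [14]: a bootstrapped target combines the observed reward with the target network's estimate of the best next-state value. The networks, replay minibatch, and dimensions are illustrative placeholders.

```python
import torch
import torch.nn as nn

# DQN temporal-difference target sketch. Networks and the "replay batch"
# below are illustrative placeholders, not any cited work's architecture.
obs_dim, n_actions, gamma = 8, 5, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy

# A fake minibatch sampled from a replay buffer.
s = torch.randn(32, obs_dim)
a = torch.randint(n_actions, (32,))
r = torch.randn(32)
s_next = torch.randn(32, obs_dim)
done = torch.zeros(32)  # 1.0 where the episode terminated

with torch.no_grad():
    # Bootstrapped target: reward plus discounted best value at next state.
    y = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) of taken actions
loss = nn.functional.smooth_l1_loss(q_sa, y)          # TD error to minimize
loss.backward()
```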
In recent years, mobile edge computing, where computations are performed locally at an edge node of a network, has been developed. In [15], a UAV-mounted mobile edge computing system is developed to dynamically accept and compute tasks from mobile terminal users. With terminal users starting at random locations and following random travel paths, the authors apply DDQN to optimize path plans with time-varying targets. In [16], the authors apply a multi-agent system to the mobile edge computing path planning task. Using the energy consumption, distance, and computational intensity of each task as parameters, the proposed algorithm is a multi-agent DDPG with centralized training and decentralized execution. An alternative approach appears in [17], where a multi-agent deep Q-learning algorithm is used: each UAV is trained with an independent DQN and executes its actions autonomously, receiving only state information from the other UAVs. In [18], the authors use a mobile edge computing network to tackle traffic congestion. The study argues that congestion and fuel consumption in an environment with autonomous vehicles can be reduced by having cars travel in platoons, which smooths speed changes and improves aerodynamics; a path is planned using Q-learning to optimize speed and fuel consumption.
3. Deep Learning for Computer Vision
Machine learning has long been crucial for environmental perception and computer vision and for their applications, such as autonomous vehicles. Deep learning methods have improved the understanding of sensor data and, to some extent, can accomplish perception, localization, and mapping. Sensors divide into two categories: passive sensors such as cameras and active sensors such as lidar, radar, and sonar. Between these two categories, research on environmental perception generally focuses on cameras due to their lower cost and wide availability. Cameras further divide into two common types. The first is the monocular camera, which extracts precise information from images in the form of pixel intensities; the arrangement of these intensities can be used to identify properties such as texture and shape. However, monocular cameras estimate the size and position of an object poorly because a single image contains limited depth information. Thus, a stereo camera system is often used to improve depth estimation. Beyond monocular and stereo cameras, there are other, more specialized cameras, such as time-of-flight cameras, which can accurately estimate depth from the delay between transmitting and receiving infrared range pulses [19]. An integral part of object detection is distance estimation. While conventional methods apply to only a single type of camera, the authors in [20] designed a depth perception model that generalizes to a variety of cameras with varying geometries.
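As a brief illustration of why a stereo pair improves depth estimation, the snippet below evaluates the standard triangulation relation Z = fB/d, where f is the focal length, B the baseline between the two cameras, and d the disparity of a matched feature. All numerical values are illustrative.

```python
# Stereo depth from disparity: Z = f * B / d.
# f is in pixels, B (baseline) in metres, d (disparity) in pixels.
# The numbers below are illustrative, not from any cited sensor.
f_px = 700.0         # focal length in pixels
baseline_m = 0.12    # distance between the two camera centres
disparity_px = 14.0  # horizontal pixel shift of a feature between views

depth_m = f_px * baseline_m / disparity_px  # => 6.0 metres
print(f"Estimated depth: {depth_m:.1f} m")
```

Note that depth resolution degrades as disparity shrinks, which is why monocular depth (disparity unavailable) and far-field stereo depth are both hard.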
Weather and light conditions can degrade the accuracy of camera sensors, especially at night or in snowy and rainy weather, where calculations such as depth perception become more complex. In [21], the authors designed a method for detecting vehicles at night with a monocular camera, using support-vector machines to detect headlights and taillights. While this is an improvement, cameras are still not reliable as the only sensor, and lidar still provides the most accurate measurements at night. Object detection and identification are necessary for the safe operation of autonomous vehicles, and it is essential to balance the cost of lidar against the reliability of a camera.
To avoid accidents, it is crucial for autonomous ground vehicles to detect the domain of the path they take and any objects along it; detecting the road is thus the first step in detecting vehicles, pedestrians, and other obstacles on it. In [22], the authors propose a road detection model using a CNN with residual learning and pyramid pooling on monocular vision data. In [23], a CNN-based method is proposed to detect various types of speed bumps and to handle their dynamic appearance. Potholes present another problem, which the authors in [24] tackle with a strategy for automatically detecting potholes using information from stereo cameras.
In vehicle operation, accidents occur most commonly at road intersections. It is therefore critical to obtain information about the positions of nearby road agents at intersections from sensors or connected technologies. In [25], the authors studied the effectiveness of various deep neural networks at learning road vehicle information from aerial photographs. This has applications in a smart city, where connected technologies enable the sharing of aerial photography data between road agents. Later in this review, we discuss using wireless connections between vehicles to share various data that may benefit road intersection safety.
Beyond detecting obstacles and nearby road agents at intersections, it is also crucial to detect and interpret traffic control signals. Traffic Light Recognition (TLR) techniques comprise two steps: detecting traffic signals and estimating the state of each signal. Key challenges include suppressing false positives and managing computational complexity under dynamic lighting conditions. One proposed TLR method uses a multi-channel high dynamic range (HDR) camera to capture images at multiple exposure values, with deep neural networks estimating the traffic light signals; the network detects traffic lights from bright-frame data and classifies the signal using dark-frame data [26]. In another study, a TLR method using video input in six color spaces was proposed. The authors applied three different region-based deep learning network models, with the best result attained by pairing the RGB color space with an R-CNN model [27].
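To make the two-step TLR pipeline concrete, here is a minimal sketch: a placeholder detector proposes candidate boxes, and an untrained classifier assigns a signal state to each crop. Both stages stand in for the trained networks of the cited works; the box coordinates and network sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

STATES = ["red", "yellow", "green"]

class SignalClassifier(nn.Module):
    """Second TLR stage: classify the signal state of a detected light crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(STATES)),
        )
    def forward(self, crop):
        return self.net(crop)

def detect_lights(frame):
    # First-stage placeholder: a real system would run a trained detector
    # (e.g. an R-CNN variant, as in [27]) and return bounding boxes.
    return [(40, 10, 64, 58)]  # (x1, y1, x2, y2), illustrative

classifier = SignalClassifier()
frame = torch.randn(3, 480, 640)                # one camera frame
for x1, y1, x2, y2 in detect_lights(frame):
    crop = frame[:, y1:y2, x1:x2].unsqueeze(0)  # cut out the candidate
    logits = classifier(crop)
    print("Predicted state:", STATES[int(logits.argmax())])
```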
In object detection and identification, key targets for autonomous vehicle systems to reliably detect are passengers and pedestrians. In one approach, the authors of [28] combined RGB-D stereo vision and thermal cameras, comparing Histogram of Oriented Gradients (HOG) and Convolutional Channel Features (CCF) methods; the results indicate that CCF is superior to HOG for pedestrian detection. In [29], a vision-based system using monocular camera data was developed to predict pedestrian movement and to detect other objects on the road. This study used a part affinity fields model to estimate the pose of pedestrians, combined with AI to aid risk assessment. In emergency situations, it can be important for autonomous vehicles to detect nearby vehicle passengers to make more informed decisions. This issue is addressed in [30], which proposes a CNN-based method for detecting nearby cars, and the passengers within them, using monocular cameras.
With respect to traffic and road rules, CV-based deep learning techniques can also be applied to detecting road violations. In [31], YOLOv3, a real-time object-detection algorithm, is used for detection and tracking and is integrated with a license plate recognition system built on LPRNet and MTCNN, allowing the authors to track traffic violations by vehicles and pedestrians.
With eco-friendliness and fuel costs gaining importance, research has increasingly focused on optimizing driving patterns to reduce fuel consumption. Applying CNN and computer vision, the authors of [32] propose an object detection method that increases the fuel efficiency of hybrid vehicles. In a test, they achieved 96.5% of the globally optimal fuel economy computed with dynamic programming, increasing fuel efficiency by up to 8.8% over an existing method. In [33], the authors propose a different approach: a DQN-based car-following strategy and a learning-based energy management strategy that achieve low fuel consumption while maintaining a safe real-time following distance. In [34], the authors tackled fuel efficiency for fuel cell vehicles using a spatiotemporal-vision-based deep neural network, which improved the accuracy of predicted speed, especially in dense traffic.
4. Deep Learning for Sensor Fusion
Autonomous vehicles are typically equipped with multiple sensors, such as the Global Positioning System (GPS), Inertial Measurement Unit (IMU), cameras, radar, ultrasound, and light detection and ranging (lidar). While each sensor provides key data, each also has limitations. By combining the strengths of all the sensors, however, they can together provide autonomous vehicles with superior information for decisions in control, path planning, and fault management.
Traditionally, sensor data was fused using the Kalman filter algorithm. However, deep learning has become an increasingly popular method of combining sensor data due to its effectiveness and relative simplicity. One key application of sensor fusion is overcoming the shortfalls of cameras. While cameras capture important data such as object size and shape, they cannot accurately measure quantities such as distance and velocity, which sensors like radar and lidar can compensate for. In [35], the authors explore the application of a CNN to fuse raw camera pixels and lidar depth values into a feature vector. This study employed a novel temporal-history-based attention mechanism that proved resilient to errors in sensor signals. To address the lack of clarity in camera data in severe weather and at night, the authors in [36] fused camera and radar data with a model based on RetinaNet. In [37], the authors propose a vehicle detection model based on fusing lidar and camera data: candidate vehicle locations are obtained from the lidar point cloud, and a CNN refines and detects vehicle locations. Beyond vehicle detection, lidar-camera fusion was applied in [38] for high-accuracy road detection, even in difficult conditions such as extreme weather. In sensor fusion and deep learning for segmentation tasks, the quality of training data plays a vital role; in [39], a new collaborative method for collecting multi-sensor training data and automatically generating accurate labels is proposed.
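A minimal sketch of mid-level fusion in the spirit of [35] and [37] is shown below: separate branches extract a feature vector per sensor, and a shared head operates on their concatenation. All layer sizes, the depth-map input format, and the two-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CameraLidarFusion(nn.Module):
    """Mid-level fusion sketch: one branch per sensor, then a shared head
    over the concatenated feature vectors. All sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.cam_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 16-d camera features
        )
        self.lidar_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),  # depth map as 1 channel
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 16-d lidar features
        )
        self.head = nn.Linear(16 + 16, 2)              # e.g. vehicle / no vehicle

    def forward(self, rgb, depth):
        fused = torch.cat([self.cam_branch(rgb), self.lidar_branch(depth)], dim=1)
        return self.head(fused)

model = CameraLidarFusion()
rgb = torch.randn(1, 3, 128, 128)    # camera frame
depth = torch.randn(1, 1, 128, 128)  # lidar depth projected to the image plane
logits = model(rgb, depth)
```

Fusing at the feature level, rather than at the raw-pixel or decision level, lets each branch specialize in its sensor's statistics while the head learns cross-sensor correlations.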
In [40], the authors propose a deep convolutional network for vehicle detection with three modalities: color images, lidar reflectance maps, and lidar depth maps. This model produces more accurate predictions because joint learning yields richer information for safe vehicle operation than learning environmental data and driving policy independently. In [41], the authors propose a dual-modal DNN for improved detection in severe conditions such as rain, snow, and night-time, where features can be blurry; the network fuses color and infrared images and achieves improved performance on low-observable targets. In [42], the authors proposed an Integrated Multimodality Fusion Deep Neural Network, which processes each modality independently before fusing them in further networks; this creates modularity and increased flexibility, and thus greater ability to generalize.
In [43], a cooperative perception system was proposed to expand the scope of vehicle perception and eliminate blind spots by integrating data from multiple vehicles using both graphic and semantic alignment. In [44], the authors introduce a cooperative, visual-free sensor fusion technique combining vehicle detectors, remote microwave sensors, and toll collection data to predict fine-grained traffic flow. In [45], the authors presented a model integrating various smartphone sensors, including GPS, gyroscope, accelerometer, and magnetometer, to detect turns and other vehicle maneuvers in real time so that they can be communicated to enhance safety. In [46], a smartphone-sensor-based method is proposed for human activity recognition. In [47], the authors propose Gated Recurrent Fusion Units (GRFU), which have gating mechanisms similar to those in LSTM, to create a new joint learning mechanism with an improved error rate. In [48], a novel end-to-end driving DNN is proposed that incorporates scene understanding, capturing spatial, functional, and semantic relationships, and uses both lidar and camera data.
5. Deep Learning for Data Security
With the rapid development of deep learning, increasing amounts of user data are required to train models, and in some systems the privacy of that data is a concern. It is also essential for systems to accurately detect attacks intended to corrupt the model. Below, we discuss current data security methods for centralized and federated learning models. Model training requires a large number of data samples; beyond the information needed for training, these samples inadvertently carry auxiliary information that malicious actors can use to infer details about individuals, such as location and trajectory. In [49], the authors use a GAN to generate privacy-preserving data that retain their usefulness for training.
More recently, federated learning has been introduced, in which a model is downloaded and trained locally on private data before the updated parameters are uploaded for model aggregation. It became popular because it offloads some computation to individual devices and eases privacy concerns, since users do not have to share their private data [50], [51]. In federated learning, two problems being tackled are detecting bad actors who maliciously upload faulty data to distort the model's accuracy and preserving privacy before model aggregation. In [52], the authors use a blockchain method for UAVs, which replaces the central curator in combining the learned model parameters; they also use a local differential privacy algorithm to mask personal data. In [53], the authors propose a different method: a privacy-preserving model aggregation scheme named FedLoc that uses homomorphic encryption and a bounded Laplace mechanism. In [54], the authors introduced CLONE, a collaborative edge learning framework using federated learning techniques, applicable to multitask tracking or EV battery fault detection. In [55], the authors use federated learning to address sharing data for model training when the data are collected by independent companies or vehicles. A blockchain structure can also enhance privacy: in [56], a hybrid blockchain architecture is proposed for federated learning in vehicular applications, and in [57], the authors also use blockchain to facilitate communication in a mobile edge computing application, with the innovation of applying a hybrid-model intrusion detection system to the data; their framework shows a reduced false-alarm rate and a high accuracy of 99%.
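The sketch below shows the basic download-train-upload-average loop (federated averaging) described above. The clients, data, and model are placeholders, and the privacy mechanisms of the cited works (blockchain, differential privacy, homomorphic encryption) are deliberately omitted.

```python
import copy
import torch
import torch.nn as nn

# Federated-averaging sketch: each client trains a copy of the global
# model on private data; the server averages the returned weights.
# Clients, data, and the model are illustrative placeholders.
def local_update(global_model, data, target, epochs=1):
    model = copy.deepcopy(global_model)  # "download" the model
    optim = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        optim.zero_grad()
        nn.functional.mse_loss(model(data), target).backward()
        optim.step()
    return model.state_dict()            # "upload" only the weights

global_model = nn.Linear(4, 1)
clients = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]

# One communication round: aggregate by element-wise weight averaging.
states = [local_update(global_model, x, y) for x, y in clients]
avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
global_model.load_state_dict(avg)
```

Note that only weights, never the raw private data, cross the network, which is the source of the privacy benefit discussed above.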
In addition to data privacy, it is essential to detect and prevent malicious attacks, which can corrupt a model during training or operation by providing malicious input. One example is the Sybil attack, which uses adversarial GPS trajectories against crowdsourced navigation systems; in [58], a Bayesian deep learning method was used to identify Sybil attacks. In [59], the authors discuss the development of poisoning and evasion attacks and review recent countermeasures. Methods against poisoning attacks include ensemble learning to increase resistance to variance and comparing classifiers across training datasets. Adversarial training is the main method against evasion attacks: it introduces both legitimate and adversarial samples so the model learns to handle adversarial inputs. In [60], the authors proposed a multi-strength adversarial training technique that combines adversarial training examples of different adversarial strengths.
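The sketch below illustrates the generic adversarial-training loop: perturb each input in the direction that increases the loss (here with a single fast-gradient-sign step) and train on clean and perturbed samples together. This shows the basic idea only, not the specific multi-strength scheme of [60]; the model, data, and epsilon value are illustrative.

```python
import torch
import torch.nn as nn

# Generic adversarial-training sketch (fast gradient sign method).
# Model, data, and epsilon are illustrative placeholders.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.05                       # adversarial strength

x = torch.randn(16, 10)
y = torch.randint(2, (16,))

# Craft adversarial examples from the current model's input gradients.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

# Train on clean and adversarial samples together.
optim.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
loss.backward()
optim.step()
```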
6. Deep Learning for Fault Diagnosis
As the field of autonomous vehicles develops, vehicles require more sensors and actuators and become increasingly reliant on them. This proliferation of sensors and actuators, on whose accuracy the vehicle heavily relies, also increases the likelihood of faults. To ensure safe operation, fault detection methods need to become more reliable. Conventional fault detection methods fall into three categories: model-based, signal-based, and knowledge-based. In recent years, DNN-based fault detection has become popular because it can achieve faster and more accurate results. Moreover, deep learning can map complex patterns and signals to accurately assess the health of components, making this a promising research field. CNN, DAE, and DBN have all been applied to fault diagnosis tasks.
Planetary gearboxes are commonly used in mechanical systems such as the transmission systems of internal-combustion-engine (ICE) and some electric vehicles. Using a deep residual network with vibration signals as input, the authors in [61] created a model that integrates the network with domain knowledge to identify faults and the condition of planetary gearboxes. An alternative approach to detecting planetary gearbox faults is taken in [62], where a model combines transfer learning with a deep autoencoder using wavelet activation functions; the resulting model is effective under variable conditions such as changing speed and location.
In-vehicle gateways are modules that connect to and receive data from various sensors in a vehicle, and several studies have used this information for fault diagnosis. In [63], the authors combined an LSTM network with an in-vehicle gateway to diagnose faults by comparing fault data with previous sensor data. In [64], an IoT gateway combined with deep learning is used to diagnose sensor faults; this self-diagnosis information can be used for self-repair. The inputs of the deep learning network are sensor signals, and the outputs are the condition of parts, which are reported to the driver as diagnostic results. One innovation of the work is that the gateway collects data from different protocols, such as CAN, FlexRay, and MOST.
Deep learning can be used to detect faults in many vehicle components. In [65], electrical signals are analyzed to detect faults in a spacecraft's electronic load system: a deep autoencoder-based clustering system and a CNN-based classification method process high-dimensional signal data to detect and classify faults. In [66], a combined CNN and LSTM model detects pre-ignition from engine control signals using in-vehicle data. In [67], the training data are generated from a UAV system model; one-dimensional signals are extended to time-frequency representations using the wavelet transform, and deep learning is then performed on the image data to identify different sensor or actuator faults.
Electric vehicles have microgrids that encompass energy storage systems, the electric motor, the motor drive, and protective components. Because the vehicle relies completely on its microgrid, detecting faults in it is crucial to vehicle safety. In [68], a CNN-based method was studied to detect false battery data in battery energy storage systems, with application to those in electric vehicles. In [69], the authors used a CNN-based model to solve the fault classification problem for microgrids; the method uses voltage and other measurements from inverters, converters, and capacitors to create a fault detection approach that reinforces traditional methods. Reliable fault detection is especially important in unmanned autonomous vehicles due to the higher cost of failure. In [70], the authors presented a strategy for diagnosing faults in the actuators of multi-rotor UAVs based on a hybrid LSTM-CNN model. A common constraint in UAVs, which face size, weight, and power consumption limits, is the difficulty of running complicated fault detection methods in real time; to tackle this, [71] proposes an LSTM-based fault detection model acceleration engine. In [72], the authors use LSTM to estimate the wheel angle and an improved sequential probability ratio test to detect faults in vehicle wheel angle signals.
In the future, connected and automated vehicles communicating in real time are expected to improve road safety. In [73], the problem of anomalous sensor readings is tackled with a CNN-based sensor anomaly detection and identification method. In [74], the authors address detecting malicious actors among connected and automated vehicles during cruise control; using a multi-agent DRL method, they can cooperatively and accurately detect attackers.
7. Deep Learning for Control Algorithms
Autonomous vehicle control models consist of two parts: perception and planning, and the control paradigm. Traditionally, control methods relied on mathematical models, including optimal control, robust control, PID control, and adaptive control. While conventional models are more easily interpreted and have a theoretical foundation, they perform worse on more complex data or larger datasets. In comparison, deep learning control approaches are model-free and data-driven, which makes them applicable to both discrete and continuous systems. Because of these differences, deep learning cannot be directly applied within conventional models; instead, contemporary deep learning solutions for autonomous driving employ end-to-end controllers or improve and provide state estimation.
Recently, deep learning has been the method of choice for the state estimation of different controllers [75] and for enhancing state estimation quality [76]. In control systems, the dynamic model that identifies uncertainties and hidden states forms the foundation, yet conventional system identification does not easily identify model parameters. Helicopters require complex control systems due to complex interactions between external forces and internal controls. In [77], the authors propose a deep convolutional neural network-based dynamic identifier for a controller; the scheme can accurately identify the helicopter's dynamic behavior and maintain stability even in untrained maneuvers. In [75], the authors present a flight control method for autonomous helicopters in which a deep learning network serves as the identifier for an adaptive control scheme; the complete model includes a first-principles dynamic model and a CNN-based model for hidden states and uncertainties, with all parameters and weights trained on real flight data. In [76], a deep learning network with the dropout technique improves the performance of attitude state estimation by the Kalman filter. This network is trained to model the measurement noise, which in turn is used to filter out noise and enhance the quality of the Kalman filter; deep learning thus compensates for delays and measurement noise. The information extracted by a modular deep recurrent neural network is combined with sensory readings before being fed into the Kalman filter for state prediction and update, and the network can detect hidden states that are normally difficult to measure with sensors. In [78], a CNN- and LSTM-based observer is presented: the LSTM processes videos and adds a temporal dimension to the cost map, a particle filter performs state estimation, and the cost map generated by the deep learning network is combined with readings from the IMU and wheel speed encoders to predict and update the states for model predictive control.
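As an illustration of the idea in [76], the sketch below runs a scalar Kalman filter whose measurement-noise variance R is supplied by a learned model rather than hand-tuned. The "network" here is a placeholder function, and the dynamics are simplified to identity; none of the numbers come from the cited work.

```python
import numpy as np

def predicted_noise_variance(measurement):
    # Stand-in for a trained deep network that maps sensor context to an
    # estimate of the current measurement-noise variance.
    return 0.5 + 0.1 * abs(measurement)

x, P = 0.0, 1.0   # state estimate and its variance
q = 0.01          # process-noise variance

for z in [0.9, 1.1, 0.95, 1.3]:  # incoming sensor measurements
    # Predict step (identity dynamics for simplicity).
    P = P + q
    # Update step with the learned, measurement-dependent noise.
    R = predicted_noise_variance(z)
    K = P / (P + R)              # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    print(f"z={z:.2f}  R={R:.2f}  x={x:.3f}")
```

A larger predicted R shrinks the Kalman gain, so measurements the network deems noisy are weighted less, which is the mechanism by which the learned noise model improves the filter.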
Deep learning can also replace conventional discrete controllers such as PID controllers, using a deep learning model to generate output control actions that can be either discrete or continuous [79]. Another approach combines learning with a conventional controller to form a hierarchical control system with better performance. Oceans and other large bodies of water are complex environments, so existing autonomous underwater vehicles relying on conventional controllers follow imposed paths and pre-planned tasks. In [80], the authors present a model based on deep interactive reinforcement learning for path following; it uses a dual-reward scheme by which the network learns from human and environmental feedback simultaneously. In [81], the authors investigated a low-level DRL-based control strategy for underwater vehicles, introducing a deep reinforcement learning network whose sole input is sensory signals, with no prior knowledge of the vehicle dynamics. In [82], a deep learning tracking control algorithm improves the accuracy and adaptability of driving trajectory tracking. In [83], a deep learning network analyzes the environment to predict lateral and longitudinal control: two separate models handle vehicle speed and steering, with road images as input and speed and steering as outputs. In [84], the authors propose a CNN as an end-to-end controller for driving, while two other CNN-based deep networks generate a feature map and an error map to help the controller better understand the scene; an attention model identifies the regions that most affect the output. In [85], the authors explored reinforcement learning for high-level decision-making in the context of a robotic game; in this hierarchy, high-level deep learning is combined with low-level controllers to deliver better control performance, and the multi-level controller design accommodates challenges in the game such as action delay. In [86], DRL was applied to intelligent control with a self-organizing control system based on DDPG; in simulations, the reference-signal self-organizing control system stabilized an inverted pendulum using a rotor. Autonomous vehicle control systems often struggle with hard-to-predict actions such as cut-in maneuvers; in [87], a control strategy is developed using a two-part training procedure of experience screening followed by policy learning to improve performance in uncertain scenarios.
Deep learning can also be applied to larger-scale control problems such as wide-area traffic control and power grids. Inefficient traffic control produces more stop-and-go traffic, increasing wait times and fuel consumption. In [88], RL is applied to adaptive traffic signal control using a new decentralized multi-agent RL algorithm that distributes global control to local agents, improving observability and reducing the learning difficulty of each agent. In [89], the authors present a more centralized method of traffic control: using information about vehicles near a particular intersection, including speed and location, as input, they train a model that controls the duration of traffic signal phases to reduce vehicle wait times and trip lengths.
In the electrical grid, conventional control systems are not well optimized and do not adapt well to changes. Despite the difficulty and complexity, AI has been applied in power grid control, which will be an important research front for future autonomous vehicles. A novel two-timescale voltage control system is presented in [90]: using a feed-forward DQN with physics-driven optimization, the two-timescale approach minimizes voltage deviations and optimizes power flow. The authors of [91] developed DRL-based autonomous voltage control for power grid operations, proposing both DQN and DDPG to create control strategies that better adapt to unknown system changes.
8. Deep Learning for Communication and Networking
Autonomous vehicles can play an important role in communication and networking. Communication systems often use their allocated bandwidth inefficiently; in [92], a deep learning-based channel and carrier frequency offset equalization technique is proposed to improve bandwidth efficiency. In emergencies, base stations and power sources are commonly destroyed, restricting access to communication networks just when they are needed. As UAVs have evolved, they have been repurposed to serve as base stations in emergency communication networks. A fundamental problem here is resource optimization, since UAVs are limited in both coverage area and energy consumption. In [93], the authors propose a novel DRL method to optimize energy consumption, while in [94], the authors approach the problem with a DRL method based on Q-learning and CNN to optimize macro base station power allocation and UAV service selection. Recently, vehicular ad hoc networks have been used by autonomous vehicles to improve safety and comfort [95]. However, vehicles often have a restricted communication range; to address this, communication between vehicles and other types of devices is used. In [96], a deep learning-based algorithm is proposed for transmission mode selection and resource allocation in cellular vehicle-to-everything communication. In [97], the authors propose using UAVs as relays in these networks: using DISCOUNT, a DRL framework, an organized and intelligent group of UAVs is optimized to increase connectivity and minimize energy consumption. A common issue in cellular-connected UAVs is interference between relays; in [98], the authors propose a deep learning algorithm based on an echo state network architecture to create an interference-aware path planning strategy.
Beyond UAVs, which have range and power constraints, satellites are a promising way to improve vehicle-to-vehicle communication on the ground, especially in sparsely populated areas. However, satellites have limited computing and communication resources; to address this, the authors in [99] used deep learning with the Lagrange multiplier method to improve joint task offloading and resource allocation. Maritime communications are often bottlenecked by the immense data volumes required; in [100], the authors propose a transmission scheduling strategy based on a deep Q-network that optimizes network routing.
9. Discussion
Autonomous vehicles will significantly impact the future of the automobile industry. Fully autonomous vehicles can improve safety and travel comfort, as smooth and consistent driving will reduce congestion. Their benefits and advantages include:
- More independent mobility. Better access for people who cannot drive, including the elderly and young people.
- Facilitating car sharing and ride sharing. An increase in car-sharing opportunities will reduce the need to own a car and associated costs.
- More efficient vehicle traffic. Reduces congestion and roadway costs due to more consistent behavior on the road.
- Reduced driver stress and increased productivity. While traveling, motorists can rest, play, and work.
- Greater safety. Some analysts estimate that autonomous vehicles will eliminate 95% of all human error; autonomous vehicles thus reduce crash risks and high-risk driving, since they are not affected by human emotion or bias.
Because deep learning can learn or discover very complex, high-dimensional, nonlinear patterns and relationships from large numbers of training samples, it has been successfully applied in the seven research areas in autonomous vehicles reviewed here. However, new techniques are still needed to overcome limitations shared by most deep learning algorithms, including a tendency to get trapped in local minima, slow convergence during training (especially for deep reinforcement learning), and the need for large training sets, with overfitting on small ones.
- Future directions. Serious challenges remain before fully autonomous vehicles are ready for public use. In the near future, autonomous vehicles may be limited to specific scenarios, such as restricted situations and clear weather.
- Employ deep learning to develop new sensor fusion techniques for autonomous cars under diverse road conditions. Current techniques mainly focus on a narrow range of good road conditions. They cannot handle more complicated conditions such as changing road surfaces, unlit roads at night, unmarked or even unpaved roads, unexpected events such as animals suddenly crossing, or combinations of these. Such conditions require novel sensor fusion algorithms and probably the development of new perception devices. Deep learning is better suited to these very complex scenarios than conventional methods, and novel deep learning-based fusion techniques for complicated road conditions should be investigated.
- Develop new sensor devices and novel algorithms for challenging bad-weather conditions. Current sensing techniques work relatively well in clear weather. However, like our eyes, vehicle sensors work poorly in bad weather such as rain, fog, snow, and ice, which not only reduces visibility but also causes dangerous road conditions. Many autonomous cars employ laser-based lidar, but snow and ice absorb laser light rather than reflecting it, leaving these vehicles blind in inclement weather and making it difficult for lidar to accurately identify obstacles. These conditions make autonomous cars harder to navigate and create safety risks for drivers and pedestrians alike. They also make it harder for conventional processing algorithms to obtain accurate perception from low-quality sensor data. We need to develop novel intelligent sensors and corresponding processing techniques for bad weather, and deep learning techniques have the potential to handle these more challenging scenarios.
- Investigate novel deep reinforcement learning techniques to achieve multiple objectives in autonomous vehicle design. The design of fully autonomous vehicles often involves multiple, even conflicting, objectives or criteria. For example, vehicle-to-vehicle (V2V) communication with surrounding vehicles makes many tasks, such as merging, easier, but securing the communication system can be extremely difficult; greater connectivity also increases cybersecurity risk by creating more avenues for intrusion and disruption. We therefore need new techniques and strategies to balance the benefits of V2V communication against cybersecurity risks, and we believe DRL is a suitable technique for making optimal decisions in these complicated situations.
- Early fault diagnosis and prognosis for autonomous vehicles. Since the significant increase in sensors and components in autonomous vehicles raises the rate of faults, early fault diagnosis and prognosis become both more important and more challenging for vehicle safety. We need to investigate new real-time early fault diagnosis methods for complicated scenarios, considering not only the components and sensors of the vehicle itself but also faults or disruption in V2V communications and the reliability of global information from networking infrastructure, such as real-time road conditions. Early fault diagnosis under relatively normal driving conditions is especially important: it can uncover potential issues at an earlier stage, provide early warnings, and prompt timely checkups and maintenance to prevent breakdowns on remote roads or even catastrophic accidents. Since many faults at an early stage involve only small or subtle changes, accurately detecting these anomalies is difficult, especially during normal driving and without the specialized, expensive instruments available at dealerships. Nevertheless, this is a very important research topic for vehicle safety and timely maintenance.
- Develop novel low-cost techniques to make autonomous vehicles more approachable and affordable. Autonomous cars are currently very costly, which puts them out of reach for most people. Even though current techniques can achieve the required performance under good road and weather conditions, we still need effective and efficient algorithms for more complicated scenarios, which demand more advanced sensors and more computational power. At the same time, autonomous driving is a time-critical mission requiring rapid response times. We believe deep learning will play an important role in developing intelligent technologies for environment perception, planning, and navigation on challenging roads while keeping costs low.
10. Conclusion
In this article, we reviewed recent developments in deep learning applications for autonomous vehicles across seven active research areas: control, computer vision, sensor fusion, path planning, fault diagnosis, communication and networking, and data security. Several types of deep neural networks that have been successfully applied to various design and operation aspects of autonomous vehicles were reviewed and compared. Deep learning has taken a significant role in the development of technologies for autonomous vehicles, and it will continue to play an important role in their future development and refinement.
Table 1: Path Planning Solutions
Research paper | Application(s) | Deep learning method | Network | Comments |
[3] | Unmanned Vehicles | ReLU | CNN | Camera map steering angle |
[4] | Unmanned Aerial Vehicles | – | CNN | Real-Time |
[5] | Unmanned Vehicles | Imitation Learning | CNN | 3D |
[6] | Unmanned Vehicles | LSTM | LSTM | Feature Extraction |
[7] | Unmanned Vehicles | LSTM | LSTM | Pedestrian Detection |
[8] | Unmanned Vehicles | DRL | DDQN | Mixed environment with manual vehicles |
[9] | Unmanned Vehicles | DRL | DDPG | Unknown Environment |
[10] | Unmanned Vehicles | DRL | DQN | 3D, Autonomous, Real-Time |
[11] | Unmanned Vehicles | DRL | MADDPG | Multi-Agent, Dynamic |
[12] | Unmanned Aerial Vehicles | DRL | A3C | – |
[13] | Unmanned Aerial Vehicles | DRL | DQN | Ultrasonic Sensor, 3D , Obstacle Avoidance |
[14] | Unmanned Surface Vehicles | DRL | DQN | High Degrees of Freedom |
[15] | Unmanned Aerial Vehicles | DRL | DQN | Mobile-Edge Computing |
[16] | Multi-UAV (mobile communications system) | DRL | DDPG | Multi-Agent, 3D, Real-Time, Mobile-Edge Computing |
[17] | Multi-UAV (wireless communications system) | DRL | DQN | Mobile Edge Computing |
[18] | Multi-Vehicle optimization | DRL | DQN | Mobile-Edge computing |
Table 2: Fault Diagnosis Solution Comparisons
Research paper | Application(s) | Deep learning method | Network | Comments |
[61] | Fault Detection Gearbox | CNN | DRN | Vibratory Signals |
[62] | Fault Detection Gearbox | Autoencoder | DAE | Vibratory Signals |
[63] | Part-Diagnosis UAV | CNN | CNN, LSTM | In-Vehicle |
[64] | Fault Detection UAV | – | NN | Vehicle Data, Sensors |
[65] | Spacecraft Electronic Systems | CNN | CNN | High Dimensional Electric Data |
[66] | Fault Detection Parts and Pre-Ignition | CNN | CNN, LSTM | Vehicle Data |
[67] | Fault Detection and Identification UAV | LQR | DNN | CIFTA Graphs |
[68] | False Battery Data Detection | CNN | CNN | Battery Health Sensor, Charging Sensor |
[69] | Fault Detection UEV Microgrid | CNN | DNN, CNN | Converter, Inverter Data |
[70] | Fault Detection UAV | CNN | CNN, LSTM | Actuator, Flight Data |
[71] | Fault Detection UAV | – | LSTM | Real-Time |
[72] | Fault Detection Signal | – | LSTM | Wheel Angle Data |
[73] | Fault Detection UAV Sensor | CNN | CNN | Connected Vehicles, Real-Time |
[74] | Attack Detection UAV | MDP, Gradient Descent | DRL | Connected Vehicles, Multi-Agent |
Acknowledgment
We acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
References
- J. Ren, H. Gaber and S. S. Al Jabar, "Applying Deep Learning to Autonomous Vehicles: A Survey," 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD), 2021, pp. 247-252, doi: 10.1109/ICAIBD51990.2021.9458968.
- C. Hodges, S. An, H. Rahmani, and M. Bennamoun, "Deep Learning for Driverless Vehicles," in Handbook of Deep Learning Applications, V. Balas, S. Roy, D. Sharma, and P. Samui, Eds., Smart Innovation, Systems and Technologies, vol. 136, Springer, 2019, pp. 83-99.
- V. Rausch et al., "Learning a Deep Neural Net Policy for End-to-End Control of Autonomous Vehicles," American Control Conference, Seattle, May 2017.
- H. Jung et al., "Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using Deep Learning," IEEE Robotics and Automation Letters, 2018.
- K. Wu et al., "TDPP-Net: Achieving Three-Dimensional Path Planning via a Deep Neural Network Architecture," Neurocomputing, vol. 357, pp. 151-162, 2019.
- Z. Bai et al., "Deep Learning Based Motion Planning for Autonomous Vehicle Using Spatiotemporal LSTM Network," Chinese Automation Congress, 2018.
- K. Saleh, M. Hossny, and S. Nahavandi, "Intent Prediction of Pedestrians via Motion Trajectories Using Stacked Recurrent Neural Networks," IEEE Transactions on Intelligent Vehicles, vol. 3, no. 4, 2018.
- K. Makantasis et al., "Deep Reinforcement-Learning-Based Driving Policy for Autonomous Road Vehicles," IET Intelligent Transport Systems, 2020.
- K. Zhang et al., "Robot Navigation of Environments with Unknown Rough Terrain Using Deep Reinforcement Learning," Proc. IEEE Int. Symp. Safety, Security, and Rescue Robotics (SSRR), 2018.
- C. Wang et al., "Autonomous Navigation of UAVs in Large-Scale Complex Environments: A Deep Reinforcement Learning Approach," IEEE Transactions on Vehicular Technology, 2019.
- H. Qie et al., "Joint Optimization of Multi-UAV Target Assignment and Path Planning Based on Multi-Agent Reinforcement Learning," IEEE Access, vol. 7, 2019.
- C. Wu et al., "UAV Autonomous Target Search Based on Deep Reinforcement Learning in Complex Disaster Scene," IEEE Access, vol. 7, 2019.
- G-T. Tu and J. -G. Juang, “Path Planning and Obstacle Avoidance Based on Reinforcement Learning for UAV Application,” 2021 International Conference on System Science and Engineering (ICSSE), 2021, pp. 352-355, doi: 10.1109/ICSSE52999.2021.9537945.
- H. Zhai, W. Wang, W. Zhang and Q. Li, “Path Planning Algorithms for USVs via Deep Reinforcement Learning,” 2021 China Automation Congress (CAC), 2021, pp. 4281-4286, doi: 10.1109/CAC53003.2021.9728038.
- Liu, L. Shi, L. Sun, J. Li, M. Ding and F. Shu, “Path Planning for UAV-Mounted Mobile Edge Computing With Deep Reinforcement Learning,” in IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5723-5728, May 2020, doi: 10.1109/TVT.2020.2982508.
- Wang, K. Wang, C. Pan, W. Xu, N. Aslam and L. Hanzo, “Multi-Agent Deep Reinforcement Learning-Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing,” in IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 1, pp. 73-84, March 2021, doi: 10.1109/TCCN.2020.3027695.
- Tang, J. Song, J. Ou, J. Luo, X. Zhang and K. -K. Wong, “Minimum Throughput Maximization for Multi-UAV Enabled WPCN: A Deep Reinforcement Learning Method,” in IEEE Access, vol. 8, pp. 9124-9132, 2020, doi: 10.1109/ACCESS.2020.2964042.
- Chen, J. Jiang, N. Lv and S. Li, “An Intelligent Path Planning Scheme of Autonomous Vehicles Platoon Using Deep Reinforcement Learning on Network Edge,” in IEEE Access, vol. 8, pp. 99059-99069, 2020, doi: 10.1109/ACCESS.2020.2998015.
- Arnold, O.Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, and A. Mouzakitis. “A Survey on 3D Object Detection Methods for Autonomous Driving Applications,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 10, Oct. 2019.
- Ravi Kumar et al., “SVDistNet: Self-Supervised Near-Field Distance Estimation on Surround View Fisheye Cameras,” in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 10252-10261, Aug. 2022, doi: 10.1109/TITS.2021.3088950.
- Kosaka and G. Ohashi, "Vision-based night-time vehicle detection using CenSurE and SVM," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 5, pp. 2599-2608, 2015.
- A. Yudin, A. Skrynnik, A. Krishtopik, I. Belkin, and A. I. Panov, "Object Detection with Deep Neural Networks for Reinforcement Learning in the Task of Autonomous Vehicles Path Planning at the Intersection," Optical Memory and Neural Networks (Information Optics), vol. 28, no. 4, pp. 283-295, 2019.
- Wang and L. Zhou, "Traffic Light Recognition With High Dynamic Range Imaging and Deep Learning," IEEE Trans. Intell. Transp. Syst., vol. 20, no. 4, Apr. 2019.
- H. Kim, J. H. Park, and H. Y. Jung, “An Efficient Color Space for Deep-Learning Based Traffic Light Recognition,” Journal of Advanced Transportation, Dec. 2018.
- Z. Chen and X. Huang, “Pedestrian Detection for Autonomous Vehicle Using Multi-Spectral Cameras,” IEEE Transactions on Intelligent Vehicles, vol. 4, no. 2, Jun. 2019.
- A. Amanatiadis, E. Karakasis, L. Bampis, S. Ploumpis, and A. Gasteratos, “ViPED: On-road vehicle passenger detection for autonomous vehicles,” Robotics and Autonomous Systems, Dec. 2018.
- Y. Li et al., “A Deep Learning-Based Hybrid Framework for Object Detection and Recognition in Autonomous Driving,” IEEE Access, vol. 8, pp. 194228-194239, 2020, doi: 10.1109/ACCESS.2020.3033289.
- X. Liu and Z. Deng, “Segmentation of Drivable Road Using Deep Fully Convolutional Residual Network with Pyramid Pooling,” Cognitive Computation, Nov. 2017.
- D. K. Dewangan and S. P. Sahu, “Deep Learning-Based Speed Bump Detection Model for Intelligent Vehicle System Using Raspberry Pi,” IEEE Sensors Journal, vol. 21, no. 3, pp. 3570-3578, Feb. 2021, doi: 10.1109/JSEN.2020.3027097.
- A. Dhiman and R. Klette, “Pothole Detection Using Computer Vision and Learning,” IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 8, pp. 3536-3550, Aug. 2020, doi: 10.1109/TITS.2019.2931297.
- R. Xu, Y. Chen, X. Chen and S. Chen, “Deep learning based vehicle violation detection system,” 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), 2021, pp. 796-799, doi: 10.1109/ICSP51882.2021.9408935.
- Y. Wang, H. Tan, Y. Wu and J. Peng, “Hybrid Electric Vehicle Energy Management With Computer Vision and Deep Reinforcement Learning,” IEEE Transactions on Industrial Informatics, vol. 17, no. 6, pp. 3857-3868, June 2021, doi: 10.1109/TII.2020.3015748.
- X. Tang, J. Chen, K. Yang, M. Toyoda, T. Liu and X. Hu, “Visual Detection and Deep Reinforcement Learning-Based Car Following and Energy Management for Hybrid Electric Vehicles,” IEEE Transactions on Transportation Electrification, vol. 8, no. 2, pp. 2501-2515, June 2022, doi: 10.1109/TTE.2022.3141780.
- Y. Zhang et al., “Improved Short-Term Speed Prediction Using Spatiotemporal-Vision-Based Deep Neural Network for Intelligent Fuel Cell Vehicles,” IEEE Transactions on Industrial Informatics, vol. 17, no. 9, pp. 6004-6013, Sept. 2021, doi: 10.1109/TII.2020.3033980.
- H. Unlu et al., “Sliding-Window Temporal Attention Based Deep Learning System for Robust Sensor Modality Fusion for UGV Navigation,” IEEE Robotics and Automation Letters, vol. 4, no. 4, 2019.
- F. Nobis et al., “A Deep Learning-Based Radar and Camera Sensor Fusion Architecture for Object Detection,” Sensor Data Fusion: Trends, Solutions, Applications, 2019.
- X. Du et al., “Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach through Deep Learning Framework,” International Conference on Intelligent Robots and Systems, 2017.
- L. Caltagirone et al., “Lidar-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks,” Robotics and Autonomous Systems, vol. 111, 2019.
- H. Liu and D. M. Blough, “MultiVTrain: Collaborative Multi-View Active Learning for Segmentation in Connected Vehicles,” 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS), 2021, pp. 428-436, doi: 10.1109/MASS52906.2021.00060.
- A. Asvadi et al., “Multimodal Vehicle Detection: Fusing 3D-LIDAR and Color Camera Data,” Pattern Recognition Letters, vol. 115, 2018.
- K. Geng et al., “Low-Observable Targets Detection for Autonomous Vehicles Based on Dual-Modal Sensor Fusion with Deep Learning Approach,” Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 2019.
- J. Nie, J. Yan, H. Yin, L. Ren and Q. Meng, “A Multimodality Fusion Deep Neural Network and Safety Test Strategy for Intelligent Vehicles,” IEEE Transactions on Intelligent Vehicles, vol. 6, no. 2, pp. 310-322, June 2021, doi: 10.1109/TIV.2020.3027319.
- Z. Xiao et al., “Multimedia Fusion at Semantic Level in Vehicle Cooperative Perception,” IEEE International Conference on Multimedia & Expo Workshops, 2018.
- Wang, W. Hao and Y. Jin, “Fine-Grained Traffic Flow Prediction of Various Vehicle Types via Fusion of Multisource Data and Deep Learning Approaches,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 11, pp. 6921-6930, Nov. 2021, doi: 10.1109/TITS.2020.2997412.
- P. Li, M. Abdel-Aty, Q. Cai and Z. Islam, “A Deep Learning Approach to Detect Real-Time Vehicle Maneuvers Based on Smartphone Sensors,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 4, pp. 3148-3157, April 2022, doi: 10.1109/TITS.2020.3032055.
- Z. Chen, C. Jiang, S. Xiang, J. Ding, M. Wu and X. Li, “Smartphone Sensor-Based Human Activity Recognition Using Feature Fusion and Maximum Full a Posteriori,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 7, pp. 3992-4001, July 2020, doi: 10.1109/TIM.2019.2945467.
- A. Narayanan, A. Siravuru and B. Dariush, “Gated Recurrent Fusion to Learn Driving Behavior from Temporal Multimodal Data,” IEEE Robotics and Automation Letters, vol. 5, no. 2, 2020.
- Z. Huang, C. Lv, Y. Xing and J. Wu, “Multi-Modal Sensor Fusion-Based Deep Neural Network for End-to-End Autonomous Driving With Scene Understanding,” IEEE Sensors Journal, vol. 21, no. 10, pp. 11781-11790, May 2021, doi: 10.1109/JSEN.2020.3003121.
- Z. Xiong, Z. Cai, Q. Han, A. Alrawais and W. Li, “ADGAN: Protect Your Location Privacy in Camera Data of Auto-Driving Vehicles,” IEEE Transactions on Industrial Informatics, vol. 17, no. 9, pp. 6200-6210, Sept. 2021, doi: 10.1109/TII.2020.3032352.
- Yang, Y. He and J. Qiao, “Federated Learning-Based Privacy-Preserving and Security: Survey,” 2021 Computing, Communications and IoT Applications (ComComAp), 2021, pp. 312-317, doi: 10.1109/ComComAp53641.2021.9653016.
- M. A. Ferrag, O. Friha, L. Maglaras, H. Janicke and L. Shu, “Federated Deep Learning for Cyber Security in the Internet of Things: Concepts, Applications, and Experimental Analysis,” IEEE Access, vol. 9, pp. 138509-138542, 2021, doi: 10.1109/ACCESS.2021.3118642.
- Y. Wang, Z. Su, N. Zhang and A. Benslimane, “Learning in the Air: Secure Federated Learning for UAV-Assisted Crowdsensing,” IEEE Transactions on Network Science and Engineering, vol. 8, no. 2, pp. 1055-1069, April-June 2021, doi: 10.1109/TNSE.2020.3014385.
- Q. Kong et al., “Privacy-Preserving Aggregation for Federated Learning-Based Navigation in Vehicular Fog,” IEEE Transactions on Industrial Informatics, vol. 17, no. 12, pp. 8453-8463, Dec. 2021, doi: 10.1109/TII.2021.3075683.
- S. Lu, Y. Yao and W. Shi, “CLONE: Collaborative Learning on the Edges,” IEEE Internet of Things Journal, vol. 8, no. 13, pp. 10222-10236, July 2021, doi: 10.1109/JIOT.2020.3030278.
- W. Y. B. Lim et al., “Towards Federated Learning in UAV-Enabled Internet of Vehicles: A Multi-Dimensional Contract-Matching Approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 8, pp. 5140-5154, Aug. 2021, doi: 10.1109/TITS.2021.3056341.
- Y. Lu, X. Huang, K. Zhang, S. Maharjan and Y. Zhang, “Blockchain Empowered Asynchronous Federated Learning for Secure Data Sharing in Internet of Vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 4, pp. 4298-4311, April 2020, doi: 10.1109/TVT.2020.2973651.
- P. Kumar, R. Kumar, G. P. Gupta and R. Tripathi, “BDEdge: Blockchain and Deep-Learning for Secure Edge-Envisioned Green CAVs,” IEEE Transactions on Green Communications and Networking, vol. 6, no. 3, pp. 1330-1339, Sept. 2022, doi: 10.1109/TGCN.2022.3165692.
- J. Q. Yu, “Sybil Attack Identification for Crowdsourced Navigation: A Self-Supervised Deep Learning Approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 7, pp. 4622-4634, July 2021, doi: 10.1109/TITS.2020.3036085.
- W. Jiang, H. Li, S. Liu, X. Luo and R. Lu, “Poisoning and Evasion Attacks Against Deep Learning Algorithms in Autonomous Vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 4, pp. 4439-4449, April 2020, doi: 10.1109/TVT.2020.2977378.
- C. Song et al., “MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks,” 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2018, pp. 476-481, doi: 10.1109/ISVLSI.2018.00092.
- M. Zhao et al., “Deep Residual Networks with Dynamically Weighted Wavelet Coefficients for Fault Diagnosis of Planetary Gearboxes,” IEEE Transactions on Industrial Electronics, vol. 65, no. 5, 2018.
- He et al., “Improved Deep Transfer Auto-encoder for Fault Diagnosis of Gearbox under Variable Working Conditions with Small Training Samples,” IEEE Access, vol. 7, 2019.
- Kim et al., “A Deep Learning Part-Diagnosis Platform (DLPP) Based on an In-Vehicle On-Board Gateway for an Autonomous Vehicle,” KSII Transactions on Internet and Information Systems, vol. 13, no. 8, 2019.
- Jeong et al., “An Integrated Self-Diagnosis System for an Autonomous Vehicle Based on an IoT Gateway and Deep Learning,” Applied Sciences, 2018.
- Liu et al., “MRD-Nets: Multi-Scale Residual Networks with Dilated Convolutions for Classification and Clustering Analysis of Spacecraft Electrical Signal,” IEEE Access, vol. 7, 2019.
- Wolf et al., “Pre-ignition Detection Using Deep Neural Networks: A Step Towards Data-Driven Automotive Diagnostics,” International Conference on Intelligent Transportation Systems, 2018.
- Olyaei et al., “Fault Detection and Identification on UAV System with CITFA Algorithm Based on Deep Learning,” Iranian Conference on Electrical Engineering, 2018.
- -J. Lee, K.-T. Kim, J.-H. Park, G. Bere, J. J. Ochoa and T. Kim, “Convolutional Neural Network-Based False Battery Data Detection and Classification for Battery Energy Storage Systems,” IEEE Transactions on Energy Conversion, vol. 36, no. 4, pp. 3108-3117, Dec. 2021, doi: 10.1109/TEC.2021.3061493.
- Moinul et al., “Deep Learning Based Micro-Grid Fault Detection and Classification in Future Smart Vehicle,” IEEE Transportation and Electrification Conference and Expo, 2018.
- Fu et al., “A Hybrid CNN-LSTM Model Based Actuator Fault Diagnosis for Six-Rotor UAVs,” Chinese Control and Decision Conference, 2019.
- Wang, X. Peng, M. Jiang and D. Liu, “Real-Time Fault Detection for UAV Based on Model Acceleration Engine,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 12, pp. 9505-9516, Dec. 2020, doi: 10.1109/TIM.2020.3001659.
- Zou, W. Zhao, C. Wang and F. Chen, “Fault Detection Strategy of Vehicle Wheel Angle Signal via Long Short-Term Memory Network and Improved Sequential Probability Ratio Test,” IEEE Sensors Journal, vol. 21, no. 15, pp. 17290-17299, Aug. 2021, doi: 10.1109/JSEN.2021.3079118.
- F. van Wyk, Y. Wang, A. Khojandi and N. Masoud, “Real-Time Sensor Anomaly Detection and Identification in Automated Vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 3, pp. 1264-1276, March 2020, doi: 10.1109/TITS.2019.2906038.
- G. Raja, K. Kottursamy, K. Dev, R. Narayanan, A. Raja and K. B. V. Karthik, “Blockchain-Integrated Multiagent Deep Reinforcement Learning for Securing Cooperative Adaptive Cruise Control,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9630-9639, July 2022, doi: 10.1109/TITS.2022.3168486.
- B. G. Maciel-Pearson et al., “Multi-Task Regression-Based Learning for Autonomous Unmanned Aerial Vehicle Flight Control within Unstructured Outdoor Environment,” IEEE Robotics and Automation Letters, vol. 4, no. 4, 2019.
- M. K. Al-Sharman et al., “Deep-Learning-Based Neural Network Training for State Estimation Enhancement: Application to Attitude Estimation,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 1, 2020.
- Kang et al., “Deep Convolutional Identifier for Dynamic Modeling and Adaptive Control of Unmanned Helicopter,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 2, 2019.
- Li et al., “Compensating Delays and Noises in Motion Control of Autonomous Electric Vehicles by Using Deep Learning and Unscented Kalman Predictor,” IEEE Transactions on Systems, Man and Cybernetics, 2018.
- P. Drews et al., “Vision-Based High-Speed Driving with a Deep Dynamic Observer,” IEEE Robotics and Automation Letters, vol. 4, no. 2, 2019.
- Zhang et al., “Deep Interactive Reinforcement Learning for Path Following of Autonomous Underwater Vehicle,” IEEE Access, 2020.
- I. Carlucho et al., “Adaptive Low-Level Control of Autonomous Underwater Vehicles Using Deep Reinforcement Learning,” Robotics and Autonomous Systems, 2018.
- Shang and W. Qiao, “Intelligent driving trajectory tracking control algorithm based on deep learning,” 2021 IEEE 4th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), 2021, pp. 618-621, doi: 10.1109/AUTEEE52864.2021.9668724.
- Sharma, G. Tewolde and J. Kwon, “Lateral and Longitudinal Motion Control of Autonomous Vehicles Using Deep Learning,” IEEE International Conference on Electro Information Technology, 2019.
- Yang et al., “Scene Understanding in Deep Learning-based End-to-End Controllers for Autonomous Vehicles,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 49, no. 1, 2019.
- Zhu et al., “Hierarchical Decision and Control for Continuous Multitarget Problem: Policy Evaluation with Action Delay,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 2, 2019.
- Chen, W. Zhao, L. Li, C. Wang and F. Chen, “ES-DQN: A Learning Method for Vehicle Intelligent Speed Control Strategy Under Uncertain Cut-In Scenario,” IEEE Transactions on Vehicular Technology, vol. 71, no. 3, pp. 2472-2484, March 2022, doi: 10.1109/TVT.2022.3143840.
- Iwasaki and A. Okuyama, “Development of a Reference Signal Self-Organizing Control System Based on Deep Reinforcement Learning,” 2021 IEEE International Conference on Mechatronics (ICM), 2021, pp. 1-5, doi: 10.1109/ICM46511.2021.9385676.
- T. Chu, J. Wang, L. Codecà and Z. Li, “Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control,” IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 3, pp. 1086-1095, March 2020, doi: 10.1109/TITS.2019.2901791.
- A. Boukerche, D. Zhong and P. Sun, “FECO: An Efficient Deep Reinforcement Learning-Based Fuel-Economic Traffic Signal Control Scheme,” IEEE Transactions on Sustainable Computing, vol. 7, no. 1, pp. 144-156, Jan.-March 2022, doi: 10.1109/TSUSC.2021.3138926.
- Q. Yang, G. Wang, A. Sadeghi, G. B. Giannakis and J. Sun, “Two-Timescale Voltage Control in Distribution Grids Using Deep Reinforcement Learning,” IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2313-2323, May 2020, doi: 10.1109/TSG.2019.2951769.
- J. Duan et al., “Deep-Reinforcement-Learning-Based Autonomous Voltage Control for Power Grid Operations,” IEEE Transactions on Power Systems, vol. 35, no. 1, pp. 814-817, Jan. 2020, doi: 10.1109/TPWRS.2019.2941134.
- Kumari, K. K. Srinivas and P. Kumar, “Channel and Carrier Frequency Offset Equalization for OFDM Based UAV Communications Using Deep Learning,” IEEE Communications Letters, vol. 25, no. 3, pp. 850-853, March 2021, doi: 10.1109/LCOMM.2020.3036493.
- C. H. Liu, Z. Chen, J. Tang, J. Xu and C. Piao, “Energy-Efficient UAV Control for Effective and Fair Communication Coverage: A Deep Reinforcement Learning Approach,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 9, pp. 2059-2070, Sept. 2018, doi: 10.1109/JSAC.2018.2864373.
- Wang, D. Deng, L. Xu and W. Wang, “Resource Scheduling Based on Deep Reinforcement Learning in UAV Assisted Emergency Communication Networks,” IEEE Transactions on Communications, vol. 70, no. 6, pp. 3834-3848, June 2022, doi: 10.1109/TCOMM.2022.3170458.
- M. Fadda, M. Murroni and V. Popescu, “Interference Issues for VANET Communications in the TVWS in Urban Environments,” IEEE Transactions on Vehicular Technology, vol. 65, no. 7, pp. 4952-4958, July 2016, doi: 10.1109/TVT.2015.2453633.
- X. Zhang, M. Peng, S. Yan and Y. Sun, “Deep-Reinforcement-Learning-Based Mode Selection and Resource Allocation for Cellular V2X Communications,” IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6380-6391, July 2020, doi: 10.1109/JIOT.2019.2962715.
- O. S. Oubbati, M. Atiquzzaman, A. Baz, H. Alhakami and J. Ben-Othman, “Dispatch of UAVs for Urban Vehicular Networks: A Deep Reinforcement Learning Approach,” IEEE Transactions on Vehicular Technology, vol. 70, no. 12, pp. 13174-13189, Dec. 2021, doi: 10.1109/TVT.2021.3119070.
- U. Challita, W. Saad and C. Bettstetter, “Interference Management for Cellular-Connected UAVs: A Deep Reinforcement Learning Approach,” IEEE Transactions on Wireless Communications, vol. 18, no. 4, pp. 2125-2140, April 2019, doi: 10.1109/TWC.2019.2900035.
- G. Cui, Y. Long, L. Xu and W. Wang, “Joint Offloading and Resource Allocation for Satellite Assisted Vehicle-to-Vehicle Communication,” IEEE Systems Journal, vol. 15, no. 3, pp. 3958-3969, Sept. 2021, doi: 10.1109/JSYST.2020.3017710.
- T. Yang, J. Li, H. Feng, N. Cheng and W. Guan, “A Novel Transmission Scheduling Based on Deep Reinforcement Learning in Software-Defined Maritime Communication Networks,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 4, pp. 1155-1166, Dec. 2019, doi: 10.1109/TCCN.2019.2939813.