Novel deep learning approaches are achieving state-of-the-art accuracy in radar target recognition, enabling applications beyond the scope of human-level performance. This book provides an introduction to the unique aspects of machine learning for radar signal processing that any scientist or engineer seeking to apply these technologies should understand. The book begins with three introductory chapters on radar systems and phenomenology, machine learning principles, and optimization for training common deep neural network (DNN) architectures. It then summarizes radar-specific issues relating to the different domain representations in which radar data may be presented to DNNs, and to synthetic data generation for training dataset augmentation. Further chapters focus on specific radar applications: DNN design for micro-Doppler analysis, SAR-based automatic target recognition, radar remote sensing, and emerging fields such as data fusion and image reconstruction. Edited by an acknowledged expert, with contributions from an international team of authors, this book provides a solid introduction to the fundamentals of radar and machine learning before exploring a range of technologies, applications, and challenges in this developing field. It is a valuable resource both for radar engineers seeking to learn more about deep learning and for computer scientists exploring novel applications of machine learning. In an era when the applications of RF sensing are multiplying by the day, this book serves as an accessible primer on the nuances of deep learning for radar applications.
Inspec keywords: sensor fusion; remote sensing by radar; radar target recognition; Doppler radar; radar computing; passive radar; classification; radar imaging; data structures; radar applications; learning (artificial intelligence); synthetic aperture radar; convolutional neural nets
Other keywords: radar remote sensing; deep convolutional neural networks; radar applications; radar signals; ISAR-based automatic target recognition; radar micro-Doppler signatures classification; SAR-based automatic target recognition; radar systems; SAR data augmentation; passive synthetic aperture radar imaging; deep representations fusion; machine learning; radar phenomenology; theoretical foundations; multistatic radar networks; daily living activities classification; deep learning; deep neural network design; DNN training; RF data; radar data representation
Subjects: Optical, image and video signal processing; Electrical engineering computing; Radar and radionavigation; General electrical engineering topics; Sensor fusion; Geophysical techniques and equipment; General and management topics; Knowledge engineering techniques; Signal processing and detection; Neural computing techniques
Radar, short for "radio detection and ranging," was first conceived in the 1930s as a means of providing early warning of approaching aircraft. It operates by transmitting an electromagnetic (EM) wave and processing the received echoes to measure the distance, velocity, and scattering properties of objects and their surroundings. As radar transceivers have become smaller, lighter, and lower in cost due to advances in solid-state microelectronics, radar has emerged as a key enabler in newer fields, such as automotive sensing and self-driving vehicle technology, gesture recognition for human-computer interfaces, and biomedical applications of remote health monitoring and vital sign detection. The proliferation of radar is indeed remarkable and presents a vast arena where deep learning, cybernetics, and artificial intelligence can enable disruptive technological advancements.
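As a concrete illustration of the two basic measurements described above, the sketch below computes range from the round-trip echo delay and radial velocity from the Doppler shift. The function names and numerical values are our own, purely illustrative choices, not drawn from any particular radar system:

```python
# Illustrative sketch of the two basic radar measurements: range from the
# round-trip delay of the echo, and radial velocity from the Doppler shift.
C = 3.0e8  # speed of light (m/s)

def target_range(round_trip_delay_s):
    """Range R = c * tau / 2 (the echo travels out to the target and back)."""
    return C * round_trip_delay_s / 2.0

def radial_velocity(doppler_shift_hz, carrier_freq_hz):
    """Radial velocity v = c * f_d / (2 * f_c), i.e., lambda * f_d / 2."""
    return C * doppler_shift_hz / (2.0 * carrier_freq_hz)

# An echo arriving 10 microseconds after transmission ...
print(target_range(10e-6))         # about 1500 m
# A 10 GHz (X-band) radar observing a 2 kHz Doppler shift ...
print(radial_velocity(2e3, 10e9))  # about 30 m/s
```

The factor of two in both formulas reflects the two-way propagation path, a detail that distinguishes radar from one-way communication links.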
This chapter provides an overview of the basic principles of ML, outlining the fundamental concepts that need to be applied correctly for a broad range of radar applications. We expect the reader to have background knowledge of basic linear algebra and probability theory, which form the foundations of ML. In Section 2.1, we describe the concept of learning from data and introduce the main categories of ML, namely, supervised and unsupervised learning. We also present different tasks that ML can tackle under each category and provide relevant radar-based examples. In Section 2.2, we briefly describe the various components of an ML algorithm. We present several fundamental techniques of supervised and unsupervised learning in Section 2.3. In Section 2.4, we define various performance assessment metrics and describe the design and evaluation of a learning algorithm. More recent learning approaches, such as variants of deep neural networks (DNNs), and more specific ML tools related to the various radar applications will follow in subsequent chapters of this book.
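The distinction between the two main ML categories named above can be made concrete with a toy example. The sketch below, which is our own illustration rather than the chapter's material, fits a nearest-centroid classifier from labelled 2-D feature vectors (supervised) and then recovers the same grouping with a simple two-means clustering that never sees the labels (unsupervised):

```python
import numpy as np

# Two well-separated "classes" of 2-D feature vectors, e.g. toy features
# of two target types. Labels y are used only by the supervised method.
rng = np.random.default_rng(1)
a = rng.normal([0, 0], 0.3, size=(20, 2))
b = rng.normal([3, 3], 0.3, size=(20, 2))
X = np.vstack([a, b])
y = np.array([0] * 20 + [1] * 20)

# Supervised learning: nearest-centroid classifier fitted from labelled data.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
def classify(x):
    return int(np.argmin(np.linalg.norm(centroids - np.asarray(x), axis=1)))
print(classify([0.1, -0.2]), classify([2.9, 3.1]))  # 0 1

# Unsupervised learning: two-means clustering on the same data, no labels.
c = X[[0, -1]].copy()                          # initial centre guesses
for _ in range(10):
    assign = np.argmin(((X[:, None] - c) ** 2).sum(-1), axis=1)
    c = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
print(assign[:3], assign[-3:])  # the two clusters match the hidden labels
```

With well-separated data like this, both approaches agree; the practical difference is that the supervised method required a labelled training set, which for radar often means costly measurement campaigns.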
In this chapter, the authors derive the theoretical foundations of deep neural network architectures. In contrast to shallow neural topologies, deep neural networks comprise more than one hidden layer of neurons. Although the concept and theory have been around for many decades, efficient deep learning methods were developed only in recent years, making the approach computationally tractable. This chapter therefore begins with a short historical and biological introduction to the topic. The authors then address the mathematical model of the perceptron, which still forms the basis of multilayer architectures, and introduce the backpropagation algorithm as the state-of-the-art method for training neural networks. They briefly cover some popular optimization strategies that have been successfully applied and are particularly relevant for radar applications, which sometimes differ markedly from the optical domain (e.g., scarce training datasets). These include stochastic gradient descent, the cross-entropy (CE) loss, regularization, and the optimization of other hyperparameters. The authors then cover convolutional neural networks (CNNs), some specific processing elements and learning methods for them, and several well-known and successfully applied architectures. The focus in this chapter lies on supervised learning for classification problems. To round out the chapter, an unsupervised method for representation learning, the autoencoder (AE), is illustrated. The structure and theoretical derivations in this chapter follow standard textbooks [1-3] and online sources [4,5].
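As a minimal illustration of the perceptron mentioned above, the sketch below trains Rosenblatt's classic model, a weighted sum followed by a hard threshold, with the error-correction rule on a linearly separable toy problem. The code and toy task are our own sketch, not the chapter's implementation:

```python
import numpy as np

def step(z):
    """Hard threshold activation of the classic perceptron."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Error-correction rule: w <- w + lr * (target - output) * x."""
    w = np.zeros(X.shape[1] + 1)   # last entry is the bias weight
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = step(w[:-1] @ xi + w[-1])
            w[:-1] += lr * (target - out) * xi
            w[-1] += lr * (target - out)
    return w

# Learn logical OR, a linearly separable problem a single perceptron can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
preds = [step(w[:-1] @ xi + w[-1]) for xi in X]
print(preds)  # [0, 1, 1, 1]
```

A single perceptron can only realize linear decision boundaries (famously failing on XOR), which is precisely the limitation that the multilayer architectures and backpropagation training covered in the chapter overcome.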
In this chapter, we address the problem of data representation and its impact on human motion classification using radar. In examining motion classifier performance, it has become apparent that some human motion articulations are more distinguishable in some data domain representations than in others. The potential effect of data representation on motion classification performance calls for devising proper strategies and new approaches for how best to manipulate or preprocess the data in order to achieve the most desirable results. We discuss domain integration and suitability using a single range-Doppler radar sensor. No assumptions are made regarding the received radar backscattering data adhering to any specific model or signal structure.
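One of the domain representations available from a range-Doppler sensor can be sketched as follows. The example below, our own simplified illustration using a dechirped FMCW-style signal model rather than anything from the chapter, forms a range-Doppler map from a pulse-by-sample data matrix with two FFTs:

```python
import numpy as np

n_pulses, n_samples = 64, 128            # slow time (pulses) x fast time

rng = np.random.default_rng(0)
data = 0.01 * (rng.standard_normal((n_pulses, n_samples))
               + 1j * rng.standard_normal((n_pulses, n_samples)))

# Inject one point target: in dechirped FMCW data, a target at a given range
# appears as a fast-time sinusoid, and its motion adds a pulse-to-pulse
# (slow-time) phase rotation, i.e., a Doppler shift.
range_bin, doppler_bin = 30, 10
pp, nn = np.meshgrid(np.arange(n_pulses), np.arange(n_samples), indexing="ij")
data += np.exp(2j * np.pi * (range_bin * nn / n_samples
                             + doppler_bin * pp / n_pulses))

# Fast-time FFT resolves range; slow-time FFT across pulses resolves Doppler.
range_profiles = np.fft.fft(data, axis=1)
rd_map = np.fft.fftshift(np.fft.fft(range_profiles, axis=0), axes=0)

peak = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print(peak)  # (doppler_bin + n_pulses // 2, range_bin) after the fftshift
```

The same raw data could equally be rendered as range profiles, a Doppler spectrum, or a time-frequency spectrogram, which is exactly the choice of representation whose effect on classification this chapter examines.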
A variety of approaches for addressing the challenges involved in training DNNs for the classification of radar micro-Doppler signatures have been presented in this chapter. The performance metrics shown throughout the chapter reveal the impact of DNN training on the accuracy and target generalization performance of the network. Although high accuracies have been attained for the classification of as many as 12 different activity classes, open areas of research remain with regard to the exploitation of radar datasets of opportunity and the generation of kinematically accurate, yet diverse, synthetic data. In this regard, adversarial learning provides opportunities, but ultimately both the training strategies and the network architecture for training data synthesis must be designed uniquely for radar datasets. Training approaches must not only consider accurate modeling of target and clutter signatures but must also exploit constraints imposed by the physics of electromagnetic sensing to reduce complexity and increase performance. Advances in this area have the potential to greatly expand the application of RF sensors toward human motion recognition in both civilian and military applications.
Automatic target recognition (ATR) for SAR remains a work in progress. The difficulties posed by this imaging modality, including poor resolution, limited data, and the challenge of accurately creating simulated data, mean that the problem may defy a complete solution for some time to come. In particular, the gap between synthetic and measured SAR data poses many challenges for deep learning classification systems, particularly given the small size of available datasets. Although the approaches in this chapter do not fully solve these challenges, they nevertheless represent an encouraging direction in which deep neural networks can help close the measured/synthetic gap. The work presented here, spanning mathematical despeckling methods, autoencoders, layer retraining, GANs, and Siamese networks, represents a wide variety of possible approaches, and many more deep networks are available in the literature. Despite limited success to this point, these approaches provide fertile ground on which to expand this work.
An electromagnetic signal transmitted by a radar is reflected from a target and returns to the radar carrying information about the target's characteristics. Doppler information is commonly used to detect moving objects while suppressing clutter. In particular, the micro-Doppler signatures from nonrigid body motions contain diverse information regarding target movement [1-3]. Accordingly, the use of micro-Doppler signatures has a variety of applications in defense, security, surveillance, and biomedicine, including airborne target classification, human detection, human activity classification, and drone detection.
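Micro-Doppler signatures are conventionally extracted with a short-time Fourier transform (STFT) of the slow-time radar return, yielding a spectrogram in which the Doppler frequency of each moving part varies over time. The sketch below is our own toy illustration, with a made-up signal model of a "body" return plus a sinusoidally modulated "limb" return, not an example from the chapter:

```python
import numpy as np

fs = 1000.0                      # slow-time (pulse repetition) rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of slow-time samples

# Toy return: torso at a constant +100 Hz Doppler, plus a weaker limb whose
# Doppler oscillates sinusoidally -- the classic micro-Doppler modulation.
body = np.exp(2j * np.pi * 100 * t)
limb = 0.5 * np.exp(2j * np.pi * (100 * t + 30 * np.sin(2 * np.pi * 2 * t)))
x = body + limb

def stft(x, win_len=128, hop=32):
    """Windowed FFTs over sliding segments -> (frame, Doppler bin) magnitudes."""
    win = np.hanning(win_len)
    frames = [np.fft.fftshift(np.fft.fft(win * x[i:i + win_len]))
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.array(frames))

spec = stft(x)
print(spec.shape)  # (number of frames, Doppler bins per frame)
```

Rendered as an image, such a spectrogram shows the constant torso line with oscillating limb sidelobes around it, and it is this image-like representation that is typically fed to the classifiers discussed in this book.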
The automatic recognition of targets with radar has been an ongoing research field for many years. Since 2014, a new methodology based on deep neural networks has become increasingly established within this field. This chapter gives a short overview, with some examples, of this short history of target recognition using deep learning (DL), along with comparative results from a basic implementation of a convolutional neural network (CNN). This network is applied to the commonly used Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset and to an inverse synthetic aperture radar (ISAR) dataset measured with the Tracking and Imaging Radar (TIRA) of Fraunhofer FHR.
This chapter presents a data-driven, model-based approach to passive synthetic aperture radar (SAR) imaging within a deep learning (DL) framework. This approach enables simultaneous estimation of unknown model parameters, effective incorporation of prior knowledge, lower computational cost, and performance superior to other state-of-the-art methods.
This chapter has thoroughly explored the necessity and importance of careful planning in the implementation of data fusion methods and architectures within multistatic radar networks. The identification of opportunities for data fusion in a processing system, depicted in this chapter, showcases a multitude of points at which data fusion can occur. These have been progressively integrated into research in recent years and are here collectively organised, presented, and discussed. The works discussed range from the classification of human micro-Doppler signatures to the characterisation of payloads carried by a micro-drone.
Although the origins of radar can be traced back to the military, since its inception, civilian applications have flourished, especially those related to remote sensing. Applications such as object (e.g., ship or vehicle) detection directly translate from their military counterparts of airborne and ground-based automatic target recognition (ATR). However, most applications involving the remote sensing of the environment fundamentally reverse the way radar backscattering is perceived. In detection and recognition, scattering from any surface other than the object is regarded as "clutter": undesirable reflections that ought to be removed or suppressed in the data so that the true nature of the object of interest can be ascertained. In environmental remote sensing, by contrast, it is the surface or volume scattering that we seek to understand and exploit. In many cases, remote sensing aims at extracting geophysical properties, which can be related back to the way materials interact with electromagnetic waves. Examples include soil moisture or water concentration, terrain elevation, biomass, mass movement rates, hydrometeor type, plant health, drought tolerance, crop yield, ice layers, and snow thickness. Because deep learning (DL) was originally developed for real-valued data and optical images, the potential performance, architectures, and optimization of deep neural networks (DNNs) operating on radar remote sensing data must be reassessed. The relationship between geophysical properties, electromagnetic scattering, and the RF data representations used to reveal these relationships creates vast, complex, multidimensional, and time-varying datasets over which DL can be leveraged. Thus, the rich and unique qualities of remote sensing data present new challenges for DL, which has driven much research in this area.