Uncertainty Quantification of Electromagnetic Devices, Circuits, and Systems describes the advances made over the last decade in uncertainty quantification (UQ) and stochastic analysis. The primary goal of the book is to educate and inform electronics engineers about the most recent numerical techniques, mathematical theories, and computational methods for performing UQ on electromagnetic devices, circuits, and systems. Importantly, the book offers an in-depth exploration of the recent explosion in surrogate modelling (metamodeling) techniques for numerically efficient UQ; metamodeling has become the most attractive and popular approach for this purpose. The book begins by introducing the concept of uncertainty quantification in electromagnetic device, circuit, and system simulation. Further chapters cover the theory and applications of polynomial chaos-based uncertainty quantification in electrical engineering; dimension-reduction strategies to address the curse of dimensionality in polynomial chaos; a predictor-corrector algorithm for fast polynomial chaos-based statistical modeling of carbon nanotube interconnects; machine learning approaches to uncertainty quantification; artificial neural network-based yield optimization with uncertainties in EM structural parameters; order-reduction clustering methods for uncertainty quantification of electromagnetic composite structures; and mixed epistemic-aleatory uncertainty using a new polynomial chaos formulation combined with machine learning. A final chapter provides concluding remarks and explores potential future directions for research in the field. The book will be a welcome resource for advanced students and researchers in electromagnetics and applied mathematical modelling who are working on electronic circuit and device design.
Inspec keywords: stochastic processes; chaos; polynomials; electromagnetic devices; learning (artificial intelligence)
Other keywords: numerical analysis; Monte Carlo methods; electromagnetic devices; polynomials; learning (artificial intelligence); engineering computing; chaos; optimisation; product design; stochastic processes
Subjects: Electromagnetic device applications; Interpolation and function approximation (numerical analysis)
Numerical simulations, theoretical analyses, and practical experiments are the three different approaches to observe, study, and examine the behaviour of any system. However, with the continuous improvement in computing hardware, falling hardware costs, and increasing memory and data-processing capabilities, it is numerical simulations that have emerged as the most suitable approach for system identification and behavioural analysis in the early product design cycle. In this chapter, we will explore the evolution of numerical simulation techniques for electromagnetic devices, circuits, and systems from the classical deterministic approach to the emergent stochastic approaches. In the course of this exploration, we will identify the different sources of parametric uncertainty that can arise in a numerical simulation, the different forms these uncertainties can assume, and the various state-of-the-art mathematical and algorithmic techniques to probe these uncertainties in order to gain deeper insights into the behaviour of the device, circuit, or system under test.
We start by considering the generic stochastic function y = f(ξ). At this stage, y is understood to be an arbitrary output variable of interest that depends on the uncertain parameters ξ, which could be, for example, circuit element values, geometrical dimensions affected by manufacturing tolerances, or material parameters that are known only to limited precision. We aim at quantifying the stochasticity of y in terms of, for example, its mean, standard deviation, and probability distribution.
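As a concrete point of reference, the statistics of y = f(ξ) can be estimated by brute-force Monte Carlo sampling. The sketch below is purely illustrative: the function f and the Gaussian tolerance on ξ are hypothetical stand-ins, where a real application would use an actual device, circuit, or system simulation.

```python
import numpy as np

# Hypothetical stochastic function y = f(xi); in practice this would be
# an expensive circuit/EM simulation evaluated at the sampled parameters.
def f(xi):
    return xi[:, 0] ** 2 + 0.5 * np.sin(xi[:, 1])

rng = np.random.default_rng(0)
n_samples = 100_000
# Assume xi ~ N(0, 0.1^2), e.g., manufacturing tolerances around a nominal value.
xi = rng.normal(loc=0.0, scale=0.1, size=(n_samples, 2))

y = f(xi)
print(f"mean(y) ≈ {y.mean():.4f}")
print(f"std(y)  ≈ {y.std():.4f}")
```

Monte Carlo converges slowly (error ∝ 1/√N), which is precisely what motivates the surrogate-based methods covered in the later chapters.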
In this chapter, we provide a collection of diverse applications of the polynomial chaos expansion (PCE)-based methods. The examples are drawn from various relevant fields of electrical and electronic engineering and are grouped by area, based on the class of equations that govern the system. Since some of the test cases are taken from the available literature and were simulated on different machines, we summarize the main features of the various computers used for the simulations presented in the following.
In Chapters 2 and 3, a detailed description of the spectral metamodeling approaches behind stochastic electronic design automation (EDA) tools, especially the generalized polynomial chaos (PC) approach, has been provided. The main advantage of the PC metamodel lies in its ability to quickly converge to the true stochastic responses of interest with increasing order of its basis functions. However, this advantage is balanced by a key drawback: the near-exponential scaling of the time cost required to evaluate the PC coefficients (i.e., to train the metamodel) as the number of uncertain parameters increases. This disadvantage is often referred to as the curse of dimensionality. In this chapter, we will review mathematical and algorithmic techniques that aim to curb this curse of dimensionality by effectively reducing the number of uncertain parameters (i.e., the dimensionality of the parametric space) that we deal with in our problems.
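The scaling behind the curse of dimensionality can be made concrete: a total-degree-p PC expansion in n uncertain parameters contains (n + p choose p) basis terms, each contributing to the training cost. The short sketch below tabulates this count (the third-order case is an illustrative choice):

```python
from math import comb

# Number of basis terms in a total-degree-p polynomial chaos expansion
# of n uncertain parameters: C(n + p, p). This count grows near-exponentially
# with n for fixed p, which is the "curse of dimensionality".
def pc_terms(n_params: int, order: int) -> int:
    return comb(n_params + order, order)

for n in (2, 5, 10, 20, 50):
    print(f"{n:3d} parameters -> {pc_terms(n, 3):6d} third-order PC terms")
```

Going from 2 to 50 parameters inflates a third-order expansion from 10 to over 23,000 terms, which illustrates why dimension reduction is essential.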
This chapter introduces a new algorithm to address the curse of dimensionality facing the training of polynomial chaos (PC) metamodels, specifically for on-chip high-speed multi-walled carbon nanotube (MWCNT) interconnect structures. This algorithm is referred to as a predictor-corrector algorithm. In this chapter, the mathematics underpinning this predictor-corrector algorithm and the benefits emerging from it are described in detail. Moreover, analysis leading to the identification of an upper bound on the numerical efficiency achievable through this algorithm is presented. The chapter concludes with multiple numerical examples of MWCNT networks solved using the predictor-corrector algorithm as well as conventional full-blown PC metamodels.
Handling non-Gaussian correlated uncertainty has been a long-standing challenge in electronic design automation. This chapter briefly summarizes our efforts to address this challenge, including numerical methods (e.g., Gram-Schmidt and Cholesky factorization methods) to construct orthogonal and normalized polynomial basis functions, optimization-based numerical quadrature methods for designing stochastic collocation algorithms, theoretical bounds on error and simulation complexity, and new data-efficient techniques that use the built surrogate models in uncertainty-aware design optimization. Specifically, we propose to solve chance-constrained optimization in order to avoid over-conservative design performance. This has been implemented via two methods: moment bounding and polynomial Kinship bounding.
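The Gram-Schmidt idea mentioned above can be sketched numerically: orthonormalize monomial basis functions under the empirical inner product ⟨u, v⟩ = E[u(ξ)v(ξ)] estimated from samples of a non-Gaussian, correlated distribution. The distribution and degree-1 basis below are hypothetical choices for illustration only; the chapter's actual construction is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical correlated, non-Gaussian samples: exponentiated correlated normals
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
xi = np.exp(0.2 * z)  # lognormal-like, correlated parameters

# Monomial basis up to total degree 1: {1, xi1, xi2}
raw = np.stack([np.ones(len(xi)), xi[:, 0], xi[:, 1]], axis=1)

# Gram-Schmidt under the empirical inner product <u, v> = mean(u * v)
basis = []
for j in range(raw.shape[1]):
    v = raw[:, j].copy()
    for b in basis:                              # subtract projections onto earlier basis
        v -= np.mean(v * b) * b
    basis.append(v / np.sqrt(np.mean(v * v)))    # normalize so that E[b^2] = 1

B = np.stack(basis, axis=1)
gram = (B.T @ B) / len(xi)                       # empirical Gram matrix, ~identity
print(np.round(gram, 3))
```

The resulting Gram matrix is the identity to machine precision, confirming the basis is orthonormal with respect to the sampled (correlated, non-Gaussian) measure.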
This chapter deals with the application of advanced machine learning (ML) techniques to the uncertainty quantification (UQ) of stochastic nonlinear systems. In particular, ML regressions will be adopted as black-box techniques with the aim of constructing a surrogate model able to mimic the actual stochastic behaviour of the system being modeled. Several regression techniques will be presented throughout the chapter, starting from the classical ones (e.g., least squares, Ridge, and least absolute shrinkage and selection operator (LASSO) regressions) and moving to the more recent ML-based regressions, such as support vector machine, least squares-support vector machine, and Gaussian process regression. All the above techniques will be presented along with their advantages and limitations with the help of several illustrative examples. Moreover, their applicability to UQ will be investigated by considering realistic application examples from the electronics field.
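The surrogate-based UQ workflow described above can be sketched with the simplest of the listed techniques, least squares regression: fit a cheap polynomial surrogate to a handful of expensive model evaluations, then run large-sample Monte Carlo on the surrogate. The "expensive" model and the uniform input distribution below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical expensive model (stand-in for a circuit/EM simulation)
def expensive_model(xi):
    return 1.0 + 2.0 * xi + 0.5 * xi ** 2

rng = np.random.default_rng(2)
xi_train = rng.uniform(-1, 1, size=30)      # 30 "expensive" evaluations
y_train = expensive_model(xi_train)

# Quadratic surrogate fitted by ordinary least squares
A = np.stack([np.ones_like(xi_train), xi_train, xi_train ** 2], axis=1)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Cheap Monte Carlo on the surrogate (infeasible sample count on the true model)
xi_mc = rng.uniform(-1, 1, size=1_000_000)
y_mc = coef[0] + coef[1] * xi_mc + coef[2] * xi_mc ** 2
print(f"surrogate mean ≈ {y_mc.mean():.3f}, std ≈ {y_mc.std():.3f}")
```

Ridge, LASSO, SVM, and Gaussian process regressions slot into the same workflow by replacing the least squares fit, trading off regularization, sparsity, and built-in uncertainty estimates.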
This chapter provides an overview of recent advances in artificial neural network-based yield optimization with uncertainties in electromagnetic (EM) structure parameters. We first give an introduction to EM-based yield estimation and optimization. Then, we review the formulation for performing yield-driven EM optimization with the traditional Monte Carlo (MC) approach. Following that, we review the space mapping (SM)-based yield optimization, where the computationally expensive EM simulations are replaced by the evaluations of SM surrogates. Next, we describe the polynomial chaos (PC)-based approach to EM-based yield estimation and optimization. The PC approach can capture the stochastic properties of EM responses accurately with far fewer EM samples than the MC approach. A waveguide bandpass filter example is provided to demonstrate the advantages of the PC-based approach. Lastly, we discuss the pure artificial neural network-based yield optimization method and its advances, namely, adaptively weighted yield optimization incorporating a neuro-transfer function surrogate. This method is illustrated by a four-pole waveguide filter example.
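The baseline MC yield estimation reviewed in this chapter reduces to a simple recipe: sample the structure parameters around the nominal design according to their tolerances, evaluate the response for each sample, and count the fraction that meets the specification. The response function and spec below are hypothetical placeholders for an EM simulation.

```python
import numpy as np

# Hypothetical response function standing in for an expensive EM simulation
# (e.g., a filter passband metric as a function of two geometry deviations).
def response(params):
    return 1.0 - 0.5 * np.abs(params).sum(axis=1)

rng = np.random.default_rng(3)
nominal = np.zeros(2)                # nominal design point
n_mc = 200_000
# Gaussian manufacturing tolerances around the nominal parameters
samples = nominal + rng.normal(scale=0.1, size=(n_mc, 2))

spec = 0.85                          # a unit passes if response >= spec
yield_est = np.mean(response(samples) >= spec)
print(f"estimated yield ≈ {yield_est:.3f}")
```

The cost problem is evident: each of the 200,000 samples would require a full EM simulation, which is exactly what the SM-, PC-, and neural network-based surrogates in this chapter are designed to avoid.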
In this chapter, the numerical simulations rely on the homogenization principle and effective medium theory (EMT) to provide an accurate and efficient assessment of material shielding effectiveness.
A combination of Bayesian optimisation based on Gaussian process regression and the PC expansion modeling approach is presented in this chapter. This methodology overcomes the computational burden of state-of-the-art methodologies while retaining accuracy in estimating epistemic and aleatory variations. The proposed approach aims at minimizing the number of (expensive) simulations needed to characterize the system under study, and is especially useful for RF and microwave systems, where cumbersome full-wave simulations are usually required.
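The first ingredient of this methodology, Bayesian optimisation with a Gaussian process, can be sketched in a few lines: fit a GP to the evaluations so far, then pick the next sample where an acquisition function is optimal. Everything below is an assumption-laden toy: the objective stands in for an expensive full-wave simulation, the RBF kernel length scale is arbitrary, and a simple lower-confidence-bound acquisition replaces whatever the chapter actually uses.

```python
import numpy as np

# Hypothetical cheap objective standing in for an expensive full-wave simulation
def objective(x):
    return np.sin(3 * x) + 0.5 * x ** 2

def rbf_kernel(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(4)
x_obs = rng.uniform(-2, 2, size=4)              # initial "expensive" evaluations
grid = np.linspace(-2, 2, 401)                  # candidate next samples

for _ in range(10):                             # BO loop: 10 further evaluations
    y_obs = objective(x_obs)
    K = rbf_kernel(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))
    Ks = rbf_kernel(grid, x_obs)
    alpha = np.linalg.solve(K, y_obs)
    mu = Ks @ alpha                             # GP posterior mean on the grid
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    sigma = np.sqrt(np.maximum(var, 1e-12))     # GP posterior std deviation
    # Lower-confidence-bound acquisition: sample where mean is low or uncertain
    lcb = mu - 2.0 * sigma
    x_obs = np.append(x_obs, grid[np.argmin(lcb)])

best = x_obs[np.argmin(objective(x_obs))]
print(f"best x ≈ {best:.3f}, f(best) ≈ {objective(best):.3f}")
```

With only 14 objective evaluations the loop homes in on the minimum, which is the budget-saving behaviour that makes the combination attractive when each evaluation is a full-wave simulation.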
In this concluding chapter, we will summarize the contents of this book and draw conclusions regarding the current state of the art in uncertainty quantification. In addition, we will discuss the most important unanswered questions and key challenges still dominating this field and try to predict the emerging techniques best suited to answering these challenges.