New Publications are available for Information theory
http://dl-live.theiet.org
New Publications are available now online for this publication.
Please follow the links to view the publication.
The attribute recognition model of highway conditions evaluation
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1400
An attribute recognition model of highway conditions evaluation is established to address the problems and deficiencies of the current evaluation system. The entropy method determines the weights of the indicators objectively, avoiding subjectivity and improving the credibility of the evaluation model. The model is applied to the practical evaluation of highway conditions, and the results show that the method is reasonable, scientific, simple and applicable, so attribute recognition has good application prospects in highway conditions evaluation.
Joint estimation of frequency and 2D-DOA by two L-shape arrays based on PM
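For readers unfamiliar with the entropy weight method mentioned in the abstract above, a minimal sketch follows. This is a generic illustration, not the paper's code; the four road sections and three condition indicators are hypothetical data invented for the example.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: objective indicator weights from a decision
    matrix (rows = alternatives, columns = indicators, entries > 0)."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)                       # normalising constant
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]        # share of each alternative
        e = -k * sum(p * math.log(p) for p in probs if p > 0)
        divergence.append(1.0 - e)              # low entropy -> informative
    s = sum(divergence)
    return [d / s for d in divergence]

# Hypothetical data: 4 road sections scored on 3 condition indicators.
data = [[0.82, 120.0, 7.1],
        [0.64, 150.0, 6.2],
        [0.90, 110.0, 7.8],
        [0.55, 180.0, 5.9]]
weights = entropy_weights(data)                 # sums to 1
```

Indicators whose values vary more across the alternatives receive larger weights, which is what removes the subjective element from the weighting.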
http://dl-live.theiet.org/content/conferences/10.1049/cp.2009.0172
A joint estimation of the frequency and 2D-DOA of a signal by two L-shape arrays (TLSA) based on the Propagator Method (PM) is proposed. Owing to the geometry of the TLSA, the 2D-DOA can be estimated effectively, and through analysis of the PM the frequency estimate can also be obtained. The pairing of frequency and 2D-DOA is realised automatically. Simulation results indicate that the proposed method achieves good performance in both estimation accuracy and automatic pairing, and the performance remains excellent even at low SNR. (4 pages)
Pareto ant colony optimization based on information entropy in multiobjective portfolio problem
http://dl-live.theiet.org/content/conferences/10.1049/cp_20070811
Finding the "best" project portfolio out of a given set of investment proposals is a common and often critical management issue. Decision-makers must regularly consider multiple objectives and often have little a priori preference information available to them. Meta-heuristics provide a useful compromise between the amount of computation time required and the quality of the approximated solution space. This paper introduces Pareto ant colony optimization based on information entropy as an especially effective meta-heuristic for solving the multiobjective portfolio problem, and compares its performance to P-ACO by means of computational experiments on problem instances.
Keyword-detection approach to automatic image annotation
http://dl-live.theiet.org/content/conferences/10.1049/ic.2005.0705
In this paper we consider the problem of automatically annotating images with keywords. We first discuss performance measures for the problem at some length and propose a new information-theoretic measure, de-symmetrised mutual information (DTMI). We then describe a straightforward solution to the annotation problem: we first train a set of classifiers to detect the presence of each individual keyword in the set of training images, using the PicSOM image analysis framework, and then convert the classifier outputs back into keyword annotations for the test set. We compare the performance of the proposed method experimentally to that of other methods presented in the literature, using data from the Corel database. The result of the comparison is favourable to the proposed method.
An adaptive network for encoding data using piecewise linear functions
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991108
An objective function that encourages an encoder to have the minimum overall Euclidean reconstruction error is shown to lead to encoders that can be implemented using functions that depend only in a piecewise linear fashion on the input vector. From the neural network viewpoint, the optimal form of the probability that each neuron is the next one to fire is a piecewise linear function of the input vector.
Time delay estimation with hidden Markov models
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991154
Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a nonlinear non-stationary environment, these techniques are not sufficient. We show how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms the maximum likelihood approach.
Natural gradient matrix momentum
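The abstract above trains HMMs by maximising mutual information. As a much simpler illustration of the underlying idea, selecting a delay by maximising a histogram estimate of mutual information between lagged series can be sketched as follows (generic code, not the paper's method; the signals here are synthetic):

```python
import math, random

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of the mutual information I(X; Y) in nats."""
    n = len(xs)
    def bin_of(v, lo, hi):
        b = int((v - lo) / (hi - lo + 1e-12) * bins)
        return min(max(b, 0), bins - 1)
    xlo, xhi = min(xs), max(xs)
    ylo, yhi = min(ys), max(ys)
    joint, px, py = {}, [0] * bins, [0] * bins
    for x, y in zip(xs, ys):
        bx, by = bin_of(x, xlo, xhi), bin_of(y, ylo, yhi)
        joint[(bx, by)] = joint.get((bx, by), 0) + 1
        px[bx] += 1
        py[by] += 1
    # I(X;Y) = sum p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
    return sum((c / n) * math.log(c * n / (px[bx] * py[by]))
               for (bx, by), c in joint.items())

def best_lag(x, y, max_lag):
    """Return the lag d in [0, max_lag] maximising I(x(t); y(t+d))."""
    return max(range(max_lag + 1),
               key=lambda d: mutual_information(x[:len(x) - d], y[d:]))

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]
y = [0.0] * 5 + x[:-5]          # y is x delayed by 5 samples
lag = best_lag(x, y, 10)        # recovers the delay: 5
```

Unlike cross-correlation, a mutual-information score of this kind also responds to nonlinear dependence between the two series, which is the property the HMM-based method above exploits.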
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991082
Natural gradient learning is an efficient and principled method for improving online learning. In practical applications, however, there is an increased cost in estimating and inverting the Fisher information matrix. We propose to use the matrix momentum algorithm in order to carry out efficient inversion, and study the efficacy of a single-step estimation of the Fisher information matrix. We analyse the proposed algorithms in a two-layer neural network, using a statistical mechanics framework which allows one to describe the learning dynamics analytically, and compare performance with true natural gradient learning and standard gradient descent.
New information theoretical approach to the storage capacity of neural networks with binary weights
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991147
A new information-theoretic approach to the storage capacities of the perceptron with binary weights w_i ∈ {0, 1} or {-1, +1} is presented. Our main idea is the introduction of the minimum distance “d” between input patterns, which dominates the capacity of each neural network. This approach by means of the new parameter “d” is completely different from the usual replica method in statistical physics, yet it succeeds in obtaining almost the same storage capacities as the replica method. Moreover, this information-theoretic approach has the advantage of providing an easier and more intuitive understanding of the capacity and of the distinguishable minimum distance which characterizes the neural network.
Minimum entropy data partitioning
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991217
Problems in data analysis often require the unsupervised partitioning of a data set into clusters. Many methods exist for such partitioning but most have the weakness of being model-based (most assuming hyper-ellipsoidal clusters) or computationally infeasible in anything more than a 3D data space. We re-consider the notion of cluster analysis in information-theoretic terms and show that minimisation of partition entropy can be used to estimate the number and structure of probable data generators. The resultant analyser may be regarded as a radial-basis function classifier.
An information-geometrical method for improving the performance of support vector machine classifiers
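The partition-entropy criterion in the abstract above can be illustrated with a short sketch (generic code, not the paper's analyser): a crisp assignment of points to clusters has lower mean membership entropy than an ambiguous one, so minimising it favours well-separated partitions.

```python
import math

def partition_entropy(posteriors):
    """Mean Shannon entropy (nats) of cluster-membership posteriors.
    posteriors: one row per data point, each a distribution over clusters."""
    total = 0.0
    for row in posteriors:
        total -= sum(p * math.log(p) for p in row if p > 0)
    return total / len(posteriors)

# A crisp partition (each point clearly owned by one generator) scores
# lower than an ambiguous one, so minimising this entropy is a sensible
# unsupervised partitioning objective.
crisp = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
fuzzy = [[0.5, 0.5], [0.6, 0.4], [0.4, 0.6], [0.5, 0.5]]
```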
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991089
The performance of a support vector machine (SVM) largely depends on the kernel, yet there is no theory concerning how to choose a good kernel in a data-dependent way. As a first step towards this important problem, we propose an information-geometrical method of modifying a kernel function to improve the performance of an SVM classifier. The idea is to enlarge the spatial resolution around the separating boundary surface by a conformal mapping. We give examples of modifying Gaussian radial basis function kernels. The stability of such processes is also known. Simulation results on both artificial and real data turn out to support our idea.
Maximizing information about a noisy signal with a single non-linear neuron
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991172
For noise-free information maximization, the output signal entropy must be maximized. This is not true for a noisy input: rather, the difference between this entropy and the residual output uncertainty must be maximized. A definition of information density is introduced, which provides a discrete local measure of bandwidth efficiency. Novel training rules are proposed which enforce uniformity of this density. This entails a different transfer function from that which follows from the maximization of output entropy alone, and it is shown to provide higher information transmission on real and synthetic data.
Learning error-correcting output codes from data
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991200
A polychotomizer which assigns the input to one of K ≥ 3 classes is constructed using a set of dichotomizers which assign the input to one of two classes. Defining the classes in terms of the dichotomizers gives the binary decomposition matrix of size K×L, where each of the K classes is written as an error-correcting output code (ECOC), i.e., an array of the responses of binary decisions made by L dichotomizers. We use linear dichotomizers and, by combining them suitably, build nonlinear polychotomizers, thereby reducing complex decisions to a group of simpler decisions. We propose a method to learn the error-correcting codes from data based on soft weight sharing, which forces parameters to take one of a set of values (here two: -1/+1). Simulation results on eight datasets indicate that, compared with a linear one-per-class polychotomizer and ECOC proper, these methods generate more accurate classifiers using fewer dichotomizers than pairwise classifiers.
Optimal hyperplane classifier based on entropy number bound
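A minimal sketch of ECOC decoding may help here. This is a generic illustration, not the paper's soft-weight-sharing learner; the 7-column exhaustive code for K = 4 classes is a standard construction with minimum Hamming distance 4, so one flipped dichotomizer output is still corrected.

```python
def ecoc_decode(code_matrix, outputs):
    """Assign the input to the class whose codeword is closest (L1
    distance) to the L dichotomizer outputs, each in {-1, +1}."""
    best, best_d = 0, float("inf")
    for k, word in enumerate(code_matrix):
        d = sum(abs(w - o) for w, o in zip(word, outputs))
        if d < best_d:
            best, best_d = k, d
    return best

# Exhaustive ECOC for K = 4 classes, L = 7 dichotomizers.
M = [[+1, +1, +1, +1, +1, +1, +1],
     [-1, -1, -1, -1, +1, +1, +1],
     [-1, -1, +1, +1, -1, -1, +1],
     [-1, +1, -1, +1, -1, +1, -1]]

# Class 2's codeword with its last output flipped still decodes to 2.
label = ecoc_decode(M, [-1, -1, +1, +1, -1, -1, -1])
```

The learned -1/+1 codes described in the abstract plug into the same decoding step; only the contents of the code matrix change.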
http://dl-live.theiet.org/content/conferences/10.1049/cp_19991145
The entropy number bound is a capacity measure for learning machines proposed by Williamson et al. (1998). Based on this capacity measure and the structural risk minimization principle, we implement an optimal hyperplane classifier. In an online character recognition experiment using the tangent distance, our method performed better than the conventional optimal hyperplane classifier based on the VC dimension.
The use of advanced information processing methods in EEG analysis
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980791
In the field of medical signal and image processing the usual uncertainties in our knowledge are often compounded by a poor understanding of the physical mechanisms by which the data is generated and a subjective evaluation of the data by a human observer. In the case of the EEG, the effects of scalp, fluid and bone on the tiny electrical currents generated in the cortex may be modelled only poorly, and the large size of scalp electrodes and the effects of muscle and instrument noise all contribute to the difficulty of EEG analysis. The belief that the EEG contains some objective information regarding brain state is tantalising. We do not discuss in detail the problems of noise and artifact removal, nor the problems of modelling the passage of the EEG from cortex to scalp, but instead concentrate on the EEG as it is and suggest that, even if such problems are not solved, the EEG contains significant objective information regarding cortical functioning. We focus upon two main areas of information processing, namely information extraction from an EEG record (feature extraction and representation) and pattern recognition and inference. (3 pages)
IIR stable, causal and perfect reconstruction uniform DFT filter banks with a real or complex prototype filter
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980296
A new family of stable, causal and perfect reconstruction IIR maximally-decimated parallel uniform DFT filter banks (DFT FB) is presented. Two possible realisations exhibiting a simple and a massively parallel and modular processing structure suitable for a VLSI implementation are shown. In addition, some multipliers in the filters (both the analysis and synthesis) could be made a power or sum of powers of 2, in particular in feedback loops, resulting in a good sensitivity behaviour. Some design examples are provided. (6 pages)
Embodied cognition: dynamic and information theoretic implications of embodiment
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980266
Embodiment has been discussed as being essential for understanding the mind. The goal of this presentation is to discuss the implications of embodiment in a concrete way, by providing case studies on the one hand and by introducing the necessary theoretical background on the other. In particular we will demonstrate that the function of a neural network, natural or artificial, can only be understood if it is known how the neural network is embedded in the physical agent (we use the term “agent” whenever we do not want to make a distinction between humans, animals, and robots). This includes the nature of the sensors and where they are positioned on the agent. In other words, the kinds of neural signals that the agent's neural network receives depend on its morphology. Equally important, we will demonstrate that embodiment can help us solve two of the very hard problems of cognitive science, namely (a) that agents in the real world are exposed to a continuously changing stream of sensory stimulation, and (b) the problem of object constancy (also called the scaling problem), i.e. that the sensory stimulation from one and the same object varies greatly depending on distance, viewing angle, lighting conditions, etc. (3 pages)
Multiple model estimation using the bootstrap filter
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980421
The use of multiple models, each matched to a different hypothetical target motion, has been shown to be a highly effective approach to tracking a manoeuvring target. This article proposes a new extension to the bootstrap filter, a sample based algorithm for recursive Bayesian estimation, for application to the multiple model problem. It is shown that, by using a more general estimator than the Kalman filter, the true model conditioned densities can be propagated and the number of estimators, and therefore the computational load, in the multiple model system can be kept constant, equal to the number of models. A further distinct advantage of this approach is that the multiple model bootstrap filter is directly applicable to nonlinear and non-Gaussian multiple model systems. Simulation results comparing this technique with the IMM algorithm using standard manoeuvring target scenarios are presented using both Cartesian and polar co-ordinates. In the Cartesian case the target model is linear and comparable performance to IMM is achieved. In the polar case the target model is now nonlinear. Good tracking is observed with the multiple model bootstrap filter whereas the IMM implemented using EKFs displays poor adaption to manoeuvres. (3 pages)
Multidimensional filter banks and wavelets - a system theoretic perspective
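The bootstrap filter itself can be sketched compactly. The following is a generic single-model version on a scalar random walk with illustrative noise levels, not the paper's multiple-model tracker: propagate samples through the dynamics, weight by the measurement likelihood, resample.

```python
import math, random

def bootstrap_step(particles, z, q=0.5, r=1.0):
    """One bootstrap (SIR) filter step: propagate through the dynamic
    model, weight by the measurement likelihood, then resample."""
    # 1. propagate: here a scalar random walk with process noise std q
    particles = [x + random.gauss(0.0, q) for x in particles]
    # 2. weight: Gaussian measurement likelihood p(z | x), noise std r
    w = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    total = sum(w)
    # 3. resample with replacement in proportion to the weights
    cdf, acc = [], 0.0
    for wi in w:
        acc += wi / total
        cdf.append(acc)
    resampled = []
    for _ in particles:
        u = random.random()
        for x, c in zip(particles, cdf):
            if u <= c:
                resampled.append(x)
                break
        else:                      # guard against float round-off
            resampled.append(particles[-1])
    return resampled

random.seed(1)
particles = [random.gauss(0.0, 5.0) for _ in range(500)]
for z in [1.0, 1.2, 0.9, 1.1, 1.0]:   # noisy measurements near 1
    particles = bootstrap_step(particles, z)
estimate = sum(particles) / len(particles)   # posterior mean, near 1
```

Because each step only needs samples from the dynamics and pointwise likelihood evaluations, nothing here requires linearity or Gaussianity, which is the advantage the abstract highlights over EKF-based IMM.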
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980162
We review the current status of multidimensional filter banks and wavelet design from the perspective of signal and system theory. The study of wavelets and perfect reconstruction filter banks is known to have roots in traditional filter design techniques. On the other hand, the field of multidimensional systems and signal processing has developed a set of tools intrinsic to itself, and has attained a certain level of maturity over the last two decades. We have noted a degree of synergy between the two fields of wavelets and multidimensional systems. This arises from the fact that many ideas crucial to wavelet design are inherently system theoretic in nature. While there are many examples of this synergy manifested in previous publications, we provide a flavour of the techniques germane to this development by considering a few specific problems in detail. The construction of orthogonal wavelets can essentially be viewed as a circuit and system theoretic problem of designing energy dissipative (passive) filters, the multidimensional version of which has very close ties with the classic problem of lumped-distributed passive network synthesis. Groebner basis techniques and matrix completion problems over rings of polynomials or rings of stable rational functions, i.e., Quillen-Suslin type problems, are still other examples, which feature in our discussion in an important manner. A number of open problems are also cited. (51 pages)
Implementation of digital 3-D IIR filters for stream-processing applications
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980163
Presents an implementation structure for recursive three-dimensional digital filters, based on the filter design presented in Runze and Steffen (1996), which yields either recursive or non-recursive filters. The recursive design results have two interesting properties: the recurrence direction is oriented parallel to one coordinate axis (e.g. time axis in data-stream processing), and the transfer function itself is composed of separable systems. These properties can be exploited to build up the whole three-dimensional system, using only spatial shifts and time-directed one-dimensional recursive filters. (5 pages)
Adaptive Kalman filters for manoeuvring target tracking
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980422
Two different adaptive Kalman filter designs for tracking targets expected to perform varying turn manoeuvres are presented. In the first, the process noise covariance level of a second-order Kalman filter is adjusted at each time step according to the estimated turn rate. The turn rate is estimated as the magnitude of the calculated acceleration divided by the estimated speed of the target; at each scan the previous and current velocity estimates are used to calculate the acceleration. The second filter uses a scale factor, representing the target unpredictability, which is estimated from the available data after a measurement is taken; the estimated scale factor is then used in the filter at the next scan. The performance of the proposed algorithms is compared with that of an IMM algorithm employing three models with different levels of process noise covariance, and with that of a second-order Kalman filter. Two different assumptions were made when selecting the process noise values for the IMM and Kalman filter algorithms: in the first case it was assumed that there was no prior information about the target motion, whereas in the second case it was assumed that the largest turn rate the target of interest could perform was known. The IMM algorithm utilizing three models gives slightly better estimates during the non-manoeuvring periods, but the proposed algorithms are superior to the IMM algorithm in terms of estimation errors during manoeuvring periods. (7 pages)
2-D polyphase component evaluation via complex integration
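The turn-rate computation used by the first design above, acceleration magnitude from successive velocity estimates divided by speed, can be sketched as follows. This is a generic illustration with hypothetical tuning constants, not the paper's filter.

```python
import math

def turn_rate(v_prev, v_curr, dt):
    """Turn-rate estimate (rad/s): magnitude of the acceleration computed
    from successive 2D velocity estimates, divided by the current speed."""
    ax = (v_curr[0] - v_prev[0]) / dt
    ay = (v_curr[1] - v_prev[1]) / dt
    return math.hypot(ax, ay) / math.hypot(v_curr[0], v_curr[1])

def adapted_q(omega, q_base=1.0, gain=50.0):
    """Process-noise level scaled up with the estimated turn rate
    (q_base and gain are illustrative, not the paper's values)."""
    return q_base * (1.0 + gain * abs(omega))

straight = turn_rate((100.0, 0.0), (100.0, 0.0), 1.0)    # no manoeuvre: 0.0
turning = turn_rate((100.0, 0.0), (98.48, 17.36), 1.0)   # ~0.174 rad/s (10 deg/s)
```

During straight flight the inflated process noise vanishes and the filter behaves like a plain second-order Kalman filter; during a turn the enlarged covariance lets the state estimate follow the manoeuvre.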
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980165
This paper shows an alternative approach to the evaluation of the 2-D complex integral used in the calculation of the z-transform of the sub-sampled sequence of a 2-D signal, and in particular, the polyphase components of a 2-D digital filter, H(z, w). (6 pages)
Writer identification based on handwriting
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980678
This paper describes a text-independent writer identification method. The difficulties with writer identification are discussed. These include the sensitivity of the identification algorithm to variations in the size of the training samples, in the words, line and character spacing, point sizes, and scanner resolutions. The work described demonstrates that texture analysis is a useful tool for writer identification based on handwriting. We use multichannel spatial filtering techniques to extract texture features from a nonuniformly skewed and nonskewed handwriting image. There are many available filters in the multichannel technique. We use Gabor filters, since they have proven to be successful in extracting features for similar applications. We also use grey-scale co-occurrence matrices (GSCM) for feature extraction (for comparison purposes). Two classification techniques are adopted here, namely the weighted Euclidean distance (WED) and the k-NN classifiers. Our algorithm achieves a classification accuracy of 95.3% using 300 test images from 20 writers. (6 pages)
DTS-proven technology for low flying aircraft
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980156
This paper gives an insight into the principles of operation, and a summary of the status, of the Digital Terrain System (DTS) currently selected for all UK front-line military fast-jets. The DTS concerned (TERPROM(R)) has been developed over a number of years through a combination of theoretical studies, simulations and flight trials experience. The system provides a number of functions specifically designed to aid the operation of aircraft when flying at very low level. These functions are terrain referenced navigation (TRN), ground proximity warning (GPW), obstacle warning, terrain following (TF) and passive ranging. The benefits provided to a low flying aircraft by this system include reduced pilot workload, reduced observability, increased survivability and improved weapon aiming. (11 pages)
Navigation systems integration
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980390
Principles of integrated navigation and optimal estimation theory are discussed. The complexity of navigation system requirements is described, and a comprehensive system design approach is presented. The paper concludes by looking at specific issues which arise in testing integrated navigation systems. The paper is biased towards military vehicle navigation; however, the potential for application in commercial vehicles will be evident. (16 pages)
Fusion of correlated decisions for writer identification
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980677
Writer identification is carried out using the words of a short sentence. Each word is processed separately and used to decide on the presence of the specific writer. A likelihood ratio decision rule is employed for the word-level decision. The individual decisions obtained from the words of the sentence are fused to improve the identification performance. The Bahadur-Lazarsfeld expansion is employed in order to deal with the correlation between individual decisions. The decision rule in the fusion process is a likelihood ratio test. An excellent identification performance is achieved with the proposed procedure. (7 pages)
The Bayesian approach to signal modelling
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980444
In this paper, an introduction to Bayesian methods in signal processing is given. The paper starts by considering the important issues of model selection and parameter estimation and derives analytic expressions for the model probabilities of two simple models. The idea of marginal estimation of certain model parameters is then introduced and expressions are derived for the marginal probability densities for frequencies in white Gaussian noise, and a Bayesian approach to general change point analysis is given. Numerical integration methods are introduced based on Markov chain Monte Carlo techniques and the Gibbs sampler in particular. (5 pages)
Fuzzy modelling techniques applied to an air/fuel ratio control system
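As a minimal illustration of the Gibbs sampler mentioned in the abstract above (generic code under vague conjugate priors, not the paper's derivations), here is a sampler for a Normal model with unknown mean and variance that alternates between the two full conditionals:

```python
import math, random

def gibbs_normal(data, iters=2000, burn=500):
    """Gibbs sampler for a Normal(mu, sigma^2) model with vague priors:
    alternately draw mu | sigma^2 and the precision 1/sigma^2 | mu."""
    n = len(data)
    xbar = sum(data) / n
    sigma2, draws = 1.0, []
    for t in range(iters):
        # mu | sigma^2, data  ~  N(xbar, sigma^2 / n)
        mu = random.gauss(xbar, math.sqrt(sigma2 / n))
        # 1/sigma^2 | mu, data  ~  Gamma(shape = n/2, scale = 2/SS)
        ss = sum((x - mu) ** 2 for x in data)
        sigma2 = 1.0 / random.gammavariate(n / 2.0, 2.0 / ss)
        if t >= burn:
            draws.append(mu)
    return sum(draws) / len(draws)    # posterior mean of mu

random.seed(3)
data = [random.gauss(2.0, 1.0) for _ in range(200)]
posterior_mean = gibbs_normal(data)   # close to the sample mean
```

Each conditional is a standard distribution, so no tuning is needed; this is the property that makes the Gibbs sampler attractive for the marginal-estimation problems the paper discusses.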
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980212
Fuzzy modelling techniques have been applied to the simulated control of the air/fuel ratio. The method enables the identification of structured nonlinear models, due to the existence of the G and H matrices, that could be readily adapted online. The model used within the control scheme was selected using the Young information criterion, which is used to assess the appropriateness of models (normally linear models) for control. The performance of the model, as part of the combined feedforward/feedback controller, appears satisfactory. When the nonlinear and time-varying nature of the automotive engine system is considered, the fuzzy modelling methods are considered to offer some potential for engine control applications. (7 pages)
On implementation and design of filter banks for subband adaptive systems
http://dl-live.theiet.org/content/conferences/10.1049/ic_19980295
This paper introduces a polyphase implementation and design of an oversampled K-channel generalized DFT (GDFT) filter bank, which can be employed for subband adaptive filtering and is therefore required to have a low aliasing level in the subband signals. A polyphase structure is derived which can be factorized into a real-valued polyphase network and a GDFT modulation. For the latter, an FFT realization may be used, yielding a highly efficient polyphase implementation for arbitrary integer decimation ratios N ≤ K. We also present an analysis underlining the efficiency of complex-valued subband processing. The design of the filter bank is based entirely on the prototype filter and is solved using a fast-converging iterative least squares method, for which we give examples. The design specifications closely correspond with performance limits of subband adaptive filtering, which are underpinned by simulation results. (8 pages)
Control robustness of freefield active noise control systems
http://dl-live.theiet.org/content/conferences/10.1049/cp_19980408
A unique theory for generating electronically controlled acoustic shadows for environmental noise reduction was reported by Wright and Vuksanovic (1996). The theory has been extended to complex high frequency sound from large noncompact sources. These studies show that deep acoustic shadows are theoretically possible. This paper considers the implementation of the theory in a practical multichannel freefield control system and its performance. Provided certain stability conditions are met, it is found that deep shadows >60 dB are generated at the microphones, limited only by the ambient noise of the laboratory. This sets the stage for practical systems to be built.
Nonlinear dynamics and noise cancellation
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971369
The use of linear and nonlinear prediction in forming the inverse of a linear system is explored. The topic is introduced from the perspective of communications channel equalisers, where the signal of interest is stochastic, and from the perspective of nonlinear noise cancelling, where the signal of interest may be deterministic and chaotic. In both applications a nonlinear inverse to a linear system can produce better results than a linear inverse. The nonlinear architectures considered are linear-in-the-parameter radial basis function (RBF) and Volterra series (VS) networks. The application of nonlinear filtering techniques to the cancellation of noise in a linear duct is also considered. It is demonstrated that the required inverse is provided by the parallel connection of a linear and nonlinear network of different memory lengths. (6 pages)
State space reconstruction using interspike intervals
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971373
Essentially all the applications of nonlinear dynamical systems theory to signal processing rely on the ability to reconstruct a dynamical system from a time series of measurements made on it. We consider a class of systems in which information about a dynamical system is encoded in a sequence of time intervals, rather than a sequence of values of some measurement. We also consider cases where the observations are measurements of some variable, as in Takens' (1981) theorem, but the measurements are not taken at a uniform rate; instead the times between measurements are a function of the system's state. (We call this situation `state dependent sampling'.) The basic idea is that there is a real-valued function, τ, on the state space of the system, which at each point in the state space gives the time interval after which the next measurement is taken. For state dependent sampling we thus consider two values at each sampling time: the first is the value of the measurement (which we record); the second is the value of τ, which tells us how long to wait before making the next observation. In the interspike intervals scenario, we only consider τ, which we record as well as using it to determine the next sampling time. Sauer's `integrate and fire' model is a special case of this, in the sense that `integrate and fire' is essentially a way of defining τ. One of our motivations for considering the state dependent sampling case is the hope that nonuniform sampling may make the analysis of time series using embedding techniques easier. (7 pages)
Outdoor active noise control
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971328
A unique theory for generating electronically controlled acoustic shadows for environmental noise reduction was reported previously by the authors (1996). The theory (1997) has been extended to complex high frequency sound from large non-compact sources. These studies show that deep shadows >120 dB are theoretically possible. The implementation of the theory in a multichannel free field control system and its operation has also been reported (1997). Provided certain stability conditions are met, it is found that deep shadows >60 dB are generated at the microphones, limited only by the ambient noise of the laboratory. This sets the stage for practical systems to be built. (3 pages)
Foetal ECG separation
http://dl-live.theiet.org/content/conferences/10.1049/ic_19970066
The extraction of the foetal electrocardiogram (FECG) from skin electrode signals recorded from a pregnant woman is a signal processing problem which admits a model-based approach. Taking on board the bioelectrical phenomena which govern the cardiac activity and the propagation of heartbeat signals across the body, the FECG reconstruction may be modelled in the context of blind signal separation (BSS). Experimental results show the applicability of such BSS techniques to this biomedical problem. (6 pages)
Noise reduction: multiple solutions
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971370
It is an interesting property of chaotic systems that, given a knowledge of the underlying dynamics, even a series of quite crude or noisy observations is sufficient to allow the state-space trajectory of the system to be reconstructed to a very high level of accuracy, far higher than, say, a simple moving time-average. The success of chaotic noise reduction is due to the stretching properties of chaotic dynamics. Each observation may define rather a large cloud of compatible points in the state-space. However, as time progresses this cloud evolves with the dynamics. For chaotic dynamics this implies an exponential stretching: unstable directions in the cloud are exponentially extended; stable directions are exponentially contracted. Combining the information from past and future observations can thus substantially reduce uncertainty in the present. The noise reduction is especially powerful for deterministic dynamics. Dynamically stable directions then continue to contract indefinitely, until they have negligible width. The whole cloud from a past observation thus eventually evolves into an arbitrarily thin surface, known as the unstable manifold, corresponding to just the expanding directions of its past dynamics. Noise reduction is compared to Kalman filtering. (6 pages)
A new adaptive multi-channel technique for vibration control: frequency domain adaptive control
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971332
Introduces a frequency domain adaptive control (FDAC) method for vibration and sound control which does not merely control a few harmonics: control of broadband signals is the aim, and frequency domain estimation is used as a convenient means to gather information on the plant in order to compute suitable controllers. The approach taken differs from the work of Elliott and Rafaely (1997) in both estimation and control design. Here the main contribution is the analysis of the benefits and difficulties of the new scheme. (3 pages)
A subband adaptive filter
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971307
A new real-valued oversampled filter bank is proposed which reduces the in-band aliasing. The filter bank consists of at least three channels, which are subsampled by different subsampling ratios. We investigate the applicability of this filter bank to adaptive subband filtering and compare the setup with existing subband and fullband techniques. (6 pages)
Time series of EIT chest images and singular value decomposition
http://dl-live.theiet.org/content/conferences/10.1049/ic_19970067
The data being analysed were obtained from an electrical impedance tomograph comprising 32 independently programmable current sources and 32 voltage measurement channels attached respectively to separate electrodes around the chest of a male volunteer. Each image was obtained by applying, in sequence, the first 20 spatial trigonometric current patterns. Two sequences of images were made, the first during normal breathing. The second set of images was made while the subject held his breath, thus removing the respiratory element from the data and allowing the cardiac-synchronous component to become more obvious. The reconstruction problem is nonlinear and highly ill-posed. Time series and the SVD of the matrix of temporal means are presented. (4 pages)
Multi-sensor data fusion for situational assessment - a critical element of systems integration, some theory and application to collision avoidance
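As a toy illustration of the role the SVD plays in the EIT abstract above (synthetic data, not the paper's measurements): stacking the frames of an image sequence as columns of a matrix and extracting its dominant singular vector recovers the dominant temporal pattern, which is how a cardiac-synchronous component can be pulled out of a breath-hold sequence.

```python
import math
import random

random.seed(0)

# Synthetic "EIT" sequence: each frame is a fixed spatial pattern modulated
# by a periodic (cardiac-like) waveform, plus noise. Rows = pixels, cols = time.
n_pix, n_t = 40, 60
pattern = [random.uniform(0.5, 1.0) for _ in range(n_pix)]
cardiac = [math.sin(2 * math.pi * t / 12.0) for t in range(n_t)]
A = [[pattern[p] * cardiac[t] + random.gauss(0, 0.05) for t in range(n_t)]
     for p in range(n_pix)]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

At = [list(col) for col in zip(*A)]

# Power iteration on A^T A converges to the dominant right singular vector,
# i.e. the dominant temporal mode of the image sequence.
v = [random.random() for _ in range(n_t)]
for _ in range(100):
    w = matvec(At, matvec(A, v))
    nrm = math.sqrt(sum(x * x for x in w))
    v = [x / nrm for x in w]

# The recovered mode should be (anti-)aligned with the cardiac waveform.
c_nrm = math.sqrt(sum(c * c for c in cardiac))
corr = abs(sum(vi * ci for vi, ci in zip(v, cardiac))) / c_nrm
```

With the noise level used here the recovered temporal mode is almost perfectly correlated with the underlying waveform; real reconstructed EIT images would of course be far noisier.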
http://dl-live.theiet.org/content/conferences/10.1049/ic_19970110
Concerns multisensor data fusion (MSDF) for situational assessment of real-time complex processes. All three elements of the SHORE (stimulus-hypothesis-response) paradigm have been considered by the ISIS group in demonstrating this architecture for systems integration of a fully autonomous road, cross-country and drilling vehicle on CEC Project Panorama. MSDF is a continuous process dealing with the association, correlation and combination of data and information from multiple disparate sources to achieve a refined state estimate of the environment and a timely assessment of the situation. Here we consider only the processes of data integration and state estimation. To integrate data from disparate sources such as sensors, look-up tables, human experience/observations, databases, etc., a common currency of information content and data representation is required. Existing theories such as Bayesian inference, Dempster-Shafer theory, artificial neural networks (ANN), case-based reasoning, the method of endorsement, blackboard expert systems, fuzzy logic, etc., all of which have been used for MSDF, are inadequate or inappropriate. We propose neurofuzzy algorithms, since they readily incorporate database/symbolic/linguistic knowledge in the form of fuzzy rules, and sensory data, in a single environment/processor. (3 pages)
Signal processing of chaotic impacting series
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971374
Impact dynamics is considered to be one of the most important problems arising in mechanical vibrating systems. Such impacting may occur in the motion of oscillators with amplitude-constraining stops. Different types of impacting response due to different ranges of driving frequency or control parameters can be predicted from bifurcation diagrams. In practice the impacting signals normally cannot be measured directly at the impacting sources; in other words, the original impacting signals are "filtered" by linear or nonlinear substructures with unknown measurement noise. We focus on the signal processing of the observed impacting signals, which are recorded from the experimental model in the chaotic region. We review the theoretical model of simple impacting systems and the problems we face. The experimental model is described and the observed impacting series are introduced. Data analysis is described, and the two stages of the signal processing, (a) blind deconvolution and optimisation of the observed data, and (b) Lyapunov exponents and noise reduction, are discussed. (6 pages)
On-line tracking for blind source separation using zero-point probability
http://dl-live.theiet.org/content/conferences/10.1049/ic_19971320
We have developed a novel on-line method for separating instantaneous, linear mixtures of super-Gaussian sources. The method uses a simple, constantly updating estimate of the central part of the probability distributions of the candidate mixed signals, which can then be used to update the unmixing coefficients. The method is simple to implement, and its concentration on the central part of the probability distribution makes it insensitive to outliers in the data. This is in contrast both with methods involving explicit estimates of higher-order statistics, which are very sensitive to outliers, and with implicit methods that raise signals to a high-order power as part of their estimation process. In this paper we outline the details of the "zero-point probability" as a contrast for source separation and compare its resistance to outliers with standard fourth-order contrasts. (5 pages)
Performance improvements of adaptive FIR filters using adjusted step size LMS algorithm
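The central idea of the zero-point-probability abstract above, measuring the density near zero rather than high-order moments, can be sketched with synthetic signals (a hedged illustration; the paper's actual on-line update is not reproduced here). A super-Gaussian (Laplacian) source has a visibly higher zero-point density than a Gaussian one, and the estimate barely moves when gross outliers are added, whereas a fourth-order (kurtosis-type) contrast is wrecked by them:

```python
import random

random.seed(0)
N, eps = 100000, 0.05

# Unit-variance synthetic sources (illustrative, not the paper's signals).
gauss = [random.gauss(0, 1) for _ in range(N)]
b = 1 / 2 ** 0.5  # Laplace scale parameter giving unit variance
laplace = [random.choice((-1, 1)) * random.expovariate(1 / b) for _ in range(N)]

def p0(x):
    """Crude zero-point density estimate: fraction of samples near zero."""
    return sum(1 for v in x if abs(v) < eps) / (2 * eps * len(x))

def kurt(x):
    """Normalised fourth moment, a standard fourth-order contrast."""
    m2 = sum(v * v for v in x) / len(x)
    m4 = sum(v ** 4 for v in x) / len(x)
    return m4 / m2 ** 2

# The super-Gaussian source stands out at the zero point (roughly 0.7 vs 0.4),
# and a handful of gross outliers hardly changes that statistic, while the
# fourth-order contrast jumps by orders of magnitude.
dirty = laplace + [100.0] * 10
```

This is exactly the robustness argument made in the abstract: a few huge samples dominate fourth-order moments but contribute nothing to the mass near zero.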
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970840
In this paper, possible improvements in the performance of adaptive FIR filters in nonstationary environments are reported. A nonstationary environment means that the statistical properties of the noise path are time-varying, as with HF channel time variations. A modified LMS algorithm incorporating a recursively adjusted adaptation step size, based on a rough estimate of the squared gradient of the performance surface, is proposed and shows superior performance in nonstationary environments. The proposed algorithm could be promising for a variety of applications, such as tracking an HF channel used to provide high-data-rate communications over a nominal 3 kHz bandwidth, adaptive noise cancelling, line enhancement, etc. Besides its good behaviour in maintaining the trade-off between misadjustment and tracking ability, the algorithm requires fewer computations, making practical real-time application attractive.
Non-linear principal components analysis using genetic programming
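The adaptive-FIR abstract above does not give the exact recursion, so as an illustration here is a minimal sketch in the spirit of the well-known variable step-size LMS of Kwong and Johnston (1992), where the step size is driven by a running estimate of the squared error; the 3-tap channel and all parameter values are hypothetical:

```python
import random

random.seed(1)

def vss_lms(x, d, ntaps, mu=0.05, alpha=0.97, gamma=0.01,
            mu_min=0.005, mu_max=0.1):
    """LMS with a recursively adjusted step size: mu is pumped up by the
    squared error (fast tracking) and decays near convergence (low
    misadjustment)."""
    w = [0.0] * ntaps
    buf = [0.0] * ntaps
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                      # tapped delay line
        e = dn - sum(wi * bi for wi, bi in zip(w, buf))
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
    return w

# Hypothetical system identification test: recover a fixed 3-tap channel
# from its input/output (noise-free for simplicity).
h = [0.5, -0.3, 0.2]
x = [random.uniform(-1, 1) for _ in range(5000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = vss_lms(x, d, ntaps=3)
```

In a genuinely nonstationary setting (a drifting `h`), the error-driven step size lets `mu` grow again when the channel moves, which is the tracking/misadjustment trade-off the abstract refers to.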
http://dl-live.theiet.org/content/conferences/10.1049/cp_19971197
Principal components analysis (PCA) is a standard statistical technique which is frequently employed in the analysis of large, highly correlated data sets. As it stands, PCA is a linear technique, which can limit its relevance to the highly nonlinear systems frequently encountered in the chemical process industries. Several attempts to extend linear PCA to cover nonlinear data sets have been made, and are briefly reviewed in this paper. We propose a symbolically oriented technique for nonlinear PCA, based on the genetic programming (GP) paradigm. Its applicability is demonstrated using two simple nonlinear systems and industrial data collected from a distillation column. It is suggested that the GP-based nonlinear PCA algorithm achieves the objectives of nonlinear PCA while giving a high degree of structural parsimony.
Complexity modelling and stability characterisation for long term iterated time series prediction
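For reference, the linear PCA that the GP-based method in the abstract above extends reduces, in the simplest case, to finding the dominant eigenvector of the data covariance matrix. A minimal sketch on synthetic two-dimensional data (not the paper's distillation data):

```python
import random

random.seed(0)

# Correlated 2-D data lying (noisily) along the direction (1, 2).
pts = []
for _ in range(2000):
    t = random.gauss(0, 1)
    pts.append((t + random.gauss(0, 0.05), 2 * t + random.gauss(0, 0.05)))

# Sample covariance matrix (mean-centred first, as PCA requires).
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
cxx = sum((p[0] - mx) ** 2 for p in pts) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
cyy = sum((p[1] - my) ** 2 for p in pts) / n

# Power iteration on the 2x2 covariance gives the first principal component.
vx, vy = 1.0, 0.0
for _ in range(50):
    wx = cxx * vx + cxy * vy
    wy = cxy * vx + cyy * vy
    nrm = (wx * wx + wy * wy) ** 0.5
    vx, vy = wx / nrm, wy / nrm

slope = vy / vx   # close to 2: PC1 recovers the underlying linear direction
```

If the data instead lay on a curve, no single linear direction would capture it well; that is the limitation that nonlinear PCA variants, including the GP-based one, aim to remove.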
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970701
The authors describe a method of estimating and characterising appropriate data and model complexity in the context of long-term iterated time series forecasting. In addition, they examine the stability of the neural network approach by extracting the dominant Lyapunov exponent from the neural network model itself. They extend the philosophy that the iterated prediction of a dynamical system can be interpreted through a model of the system dynamics. An embedding of the signal is obtained which decouples multiple time-scale effects such as seasonality and trend. The performance of the technique is tested using a synthetic series and real-world time series problems, including electricity load forecasting and financial futures contracts.
Application of fuzzy signal processing to three dimensional vision
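The dominant Lyapunov exponent referred to in the abstract above can be illustrated on a system where the answer is known in closed form; for the logistic map x → 4x(1 − x) it equals ln 2 (a textbook example, not the authors' network model):

```python
import math

# Estimate the dominant Lyapunov exponent of x -> 4x(1-x) as the average
# log absolute derivative along a long trajectory.
x = 0.3
for _ in range(100):                      # discard the transient
    x = 4 * x * (1 - x)

n, acc = 100000, 0.0
for _ in range(n):
    acc += math.log(abs(4 - 8 * x))       # |f'(x)| = |4 - 8x|
    x = 4 * x * (1 - x)

lyap = acc / n                            # approaches ln 2 ~= 0.693
```

A positive estimate, as here, is the signature of chaotic (exponentially unstable) dynamics; extracting the same quantity from a trained network model is how the authors characterise the stability of iterated prediction.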
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970163
Three types of filter based on Sugeno fuzzy systems are presented. These filters aim to improve on the results of a depth-from-image-sequences algorithm that uses either a moving-average or median filter as a depth-map smoother. The fuzzy filters attempt to do this by using additional information on the uncertainties in the depth map and on edge location in the original grey-scale images. The results for the first two types of filter described show an improvement in RMS error on sequences of simulated images. The results using the edge information are disappointing, but work is ongoing to investigate the reasons for the poorer RMS error when this type of filter is used.
An improved novelty criterion for resource allocating networks
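The two baseline smoothers named in the abstract above behave very differently around impulsive errors, which is why the choice of depth-map smoother matters. A one-dimensional sketch with illustrative values (not the paper's depth data):

```python
import statistics

# A flat depth profile corrupted by one gross (impulsive) depth error.
depth = [2.0] * 5 + [9.0] + [2.0] * 5     # outlier at index 5

def window(xs, i, half=1):
    """Neighbourhood of sample i, clipped at the signal boundaries."""
    return xs[max(0, i - half): i + half + 1]

mean_f = [sum(window(depth, i)) / len(window(depth, i))
          for i in range(len(depth))]
med_f = [statistics.median(window(depth, i)) for i in range(len(depth))]

# The median filter removes the spike entirely (value 2.0 at index 5),
# while the moving average merely smears it across the neighbourhood.
```

The fuzzy filters in the paper go further by also weighting samples by their estimated uncertainty and by edge information from the grey-scale images.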
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970700
The author introduces a new novelty criterion for resource allocating RBF networks (RANs) based on standard signal processing theory. This network growth prescription is considerably less sensitive to noise and outliers than those of previous RANs, and also removes the need for ad hoc hyperparameters. An added advantage of this novelty criterion is that, as it is independent of the parameters of the extended Kalman filter training algorithm, the filter can be modified for application to slowly varying nonstationary environments without adversely affecting the network's capacity for growth. The author demonstrates the relative improvement of this criterion on two nonstationary real-world problems: electricity load forecasting and exchange rate prediction.
Estimations of error bounds for RBF networks
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970731
The training and optimisation of neural networks to perform function approximation tasks is well documented in the literature. The usefulness of neural networks will be enhanced if a further capacity is added to them: the ability to estimate the accuracy of the results which they generate. Not only will this provide users of neural networks with a confidence index, it will also enable the estimates from the neural networks to be included as part of an overall estimation scheme in which several estimates are combined in a Bayesian manner to guarantee the optimality (in terms of minimum variance) of the result. For example, it would enable the results from a neural network estimator to be included in a Kalman filter cycle with full mathematical rigour. The suitability of a perturbation model to perform such a task is examined.
ROI approach to wavelet-based, hybrid compression of MR images
http://dl-live.theiet.org/content/conferences/10.1049/cp_19971013
This paper presents a novel medical image compression technique for inhomogeneous spatial reconstruction of MR images of the brain. The images are decomposed to various scales using the wavelet transform, and a new multiscale segmentation algorithm is used to select areas of high diagnostic importance (regions of interest, ROI). Those areas, corresponding to brain tissue, tumours and other structures in the head, are compressed for maximum reconstruction quality, while neighbouring areas are coarsely approximated. The background is rejected since it contains only noise and no useful data. The quality of the reconstructed image is very good, even at low bit-rates, since the bit allocation is performed after diagnosis and reflects the diagnostic importance of each region. No useful data are lost in the selected ROI.
Automatic selection of Gabor filters for pixel classification
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970998
This paper describes a technique for filter selection. The off-line genetic algorithm search ensures that, for practical applications involving texture processing, only a small number of filters need be convolved with the image rather than an entire filter bank, hence reducing computation time. The reduced number of filters also decreases the complexity of the task required of the classifier. Many segmentation, classification and analysis techniques involving Gabor filters will benefit from this approach. The method of tuning a filter set by off-line training is easily extendible to most approaches to texture analysis.
Entropy-constrained design of quadtree video coding schemes
http://dl-live.theiet.org/content/conferences/10.1049/cp_19970849
The variable length code design of a complete quadtree-based video codec is addressed, in which we jointly optimize the entropy coding for the parameters of motion-compensated prediction together with the residual coding. The quadtree coding scheme selected for this optimization allows easy access to the rate-distortion costs, thus making it possible to perform rate-distortion optimized bit allocation without exhaustive computation. Throughout the paper, we view the quadtree coder as a special case of tree-structured entropy-constrained vector quantization and derive a design algorithm which iteratively descends to a (locally) optimal quadtree video codec. Experimental results evaluate the performance of the proposed design algorithm.
Vibration data compression with optimal wavelet coefficients
http://dl-live.theiet.org/content/conferences/10.1049/cp_19971178
The paper presents an application of data compression in vibration analysis. A linear transformation procedure based on the wavelet transform is used for feature selection in the data. A simple genetic algorithm is employed to extract the wavelet coefficients which represent these features in the time-scale domain. The method is applied to the compression of gearbox vibration spectra, showing potential for storage, transmission and fault-feature selection in condition monitoring.
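The paper's genetic algorithm searches for the best subset of wavelet coefficients; as a hedged stand-in, the simpler strategy of keeping only the largest-magnitude coefficients of a one-level Haar transform already shows how few coefficients a structured signal needs (synthetic signal, not gearbox data):

```python
import math

def haar(xs):
    """One-level orthonormal Haar transform of an even-length signal."""
    s = math.sqrt(2)
    approx = [(a + b) / s for a, b in zip(xs[::2], xs[1::2])]
    detail = [(a - b) / s for a, b in zip(xs[::2], xs[1::2])]
    return approx + detail

def ihaar(cs):
    """Inverse of the one-level Haar transform above."""
    half = len(cs) // 2
    s = math.sqrt(2)
    out = []
    for c, d in zip(cs[:half], cs[half:]):
        out.extend([(c + d) / s, (c - d) / s])
    return out

def compress(xs, keep):
    """Keep the `keep` largest-magnitude coefficients, zero the rest."""
    cs = haar(xs)
    order = sorted(range(len(cs)), key=lambda i: abs(cs[i]), reverse=True)
    kept = set(order[:keep])
    return ihaar([c if i in kept else 0.0 for i, c in enumerate(cs)])

# A signal that is constant over each pair has all detail coefficients zero,
# so keeping just the 4 approximation coefficients reconstructs it exactly
# (a 2:1 compression with no error on this synthetic example).
sig = [1.0, 1.0, 4.0, 4.0, 2.0, 2.0, 3.0, 3.0]
rec = compress(sig, keep=4)
```

A GA, as in the paper, can improve on pure magnitude ranking by selecting coefficient subsets that best preserve fault-related features rather than raw energy.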