New Publications are available for Interpolation and function approximation (numerical analysis)
http://dl-live.theiet.org
New Publications are available now online for this publication.
Please follow the links to view the publication.

Installation and testing of the signalling system
http://dl-live.theiet.org/content/conferences/10.1049/ic.2012.0050
The following paper is a personal interpretation of installation and testing of the signalling system. Testing is a developing science: as technology moves on, the techniques employed will need to change, but the basic requirement remains the same. A sufficiently robust series of iterative tests is therefore required, one that guarantees that an installation has been designed and installed to a standard that is both safe and fit for purpose.

Multi-frame super resolution using edge directed interpolation and complex wavelet transform
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0447
In this paper, a multi-frame super resolution technique is proposed which uses edge-directed interpolation (EDI) and the dual-tree complex wavelet transform (DT-CWT). In the proposed technique, a super resolution process is applied to each frame to generate the low frequency component, while the high frequency components are generated by DT-CWT decomposition followed by EDI. Finally, composition of the generated subbands using the inverse DT-CWT (IDT-CWT) reconstructs the super-resolved output frame. Experimental results on a number of benchmark video sequences, in terms of their PSNR measures, confirm the superiority of the suggested method over state-of-the-art video resolution enhancement methods. (5 pages)

Novel fingerprint segmentation with entropy-Li MCET using log-normal distribution
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0455
Fingerprint recognition is an important biometric application. This process consists of several phases, including fingerprint segmentation. This paper proposes a new method for fingerprint segmentation using a modified iterative Minimum Cross Entropy Thresholding (MCET) method. The main idea is to model fingerprint images as a mixture of two log-normal distributions. The proposed method was applied to bi-modal fingerprint images and promising experimental results were obtained. Evaluation of the resulting segmented fingerprint images shows that the proposed method yields a better estimate of the optimal threshold than the same MCET method with Gamma and Gaussian distributions. (6 pages)

A Bayesian look at the optimal track labelling problem
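The iterative MCET idea in the fingerprint segmentation abstract above can be sketched generically. Below is a minimal pure-Python sketch of the well-known Li-Tam fixed-point iteration for minimum cross entropy thresholding; the paper's contribution replaces the underlying model with log-normal distributions, which this generic version does not attempt. The function name and stopping tolerance are illustrative assumptions.

```python
import math

def threshold_li(pixels, tol=0.5):
    """Iterative minimum cross-entropy (Li) threshold.
    Fixed point: t <- (m_below - m_above) / (ln m_below - ln m_above),
    where m_below/m_above are the mean intensities on each side of t."""
    t = sum(pixels) / len(pixels)          # start from the global mean
    while True:
        below = [p for p in pixels if p <= t]
        above = [p for p in pixels if p > t]
        m_b = sum(below) / len(below)
        m_a = sum(above) / len(above)
        t_new = (m_b - m_a) / (math.log(m_b) - math.log(m_a))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

On a bimodal sample (e.g. background values near 20 and foreground values near 200), the iteration converges in a couple of steps to a threshold between the two modes.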
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0406
In multi-target tracking (MTT), the problem of assigning labels to tracks (track labelling) is widely covered in the literature, but its exact mathematical formulation, in terms of Bayesian statistics, has not yet been examined in detail. Doing so, however, may help us understand how Bayes-optimal track labelling should be performed or numerically approximated. Moreover, it can help us better understand and tackle some practical difficulties associated with the MTT problem, in particular the so-called "mixed labelling" phenomenon that has been observed in MTT algorithms. In this paper, we rigorously formulate the optimal track labelling problem using Finite Set Statistics (FISST) and look in detail at the mixed labelling phenomenon. As practical contributions, we derive a new track extraction formulation with some desirable properties, together with a statistic associated with track labelling that has a clear physical meaning. Additionally, we show how to calculate this statistic for two well-known MTT algorithms. (6 pages)

A combined image approach to compression of volumetric data using Delaunay tetrahedralization
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0461
We present a method for lossy compression of three-dimensional grayscale images based on a 3D linear spline approximation to the image, extending an approach that has previously been applied successfully in two dimensions. In our method, we first select significant points in the data and use them to create a 3D tetrahedralization. The tetrahedra are used as cells for a linear interpolation spline that approximates the original image. Compression is achieved by storing the positions of the vertices of the tetrahedralization and the values there, instead of the value of the approximation at each grid point. We introduce the novel concept of using a smoothed version of the original image to improve the quality of the approximating spline. To increase the efficiency of the algorithm, we combine it with a refinement/decimation technique. We compare our compression technique to JPEG2000 3D and show that our algorithm performs similarly to it, and in some cases even outperforms it, at high compression ratios. Our approach gives images with significantly different properties from those created using wavelets, with the potential to be more suitable for some applications. In addition, this type of compression is particularly suitable for visualization. (6 pages)

Software effort estimation with a generalized robust linear regression technique
http://dl-live.theiet.org/content/conferences/10.1049/ic.2012.0027
Background. Outliers and corrupted data points may unduly bias software development effort estimation models. However, given the usually limited size of software engineering data sets, removing too many data points may seriously reduce the power of the statistical tests used and the likelihood of statistically significant results. Also, statistical techniques are typically based on assumptions that are either believed to be true a priori or, at best, checked via statistical tests, without ever achieving 100% certainty of their validity. Estimation models based on less strict assumptions have broader applicability and lower risks of drawing unwarranted conclusions. Aim. We investigate the usefulness of robust regression when building effort estimation models, by varying the degree of robustness and, thus, the number of data points that are excluded from the data analysis as outliers. Method. We have used Least Quantile of Squares (LQS) robust regression, a generalization of Least Median of Squares (LMS). LMS builds a regression line by minimizing the median squared residual. LQS minimizes the order statistic of squared residuals corresponding to any specified quantile, not just the median, which is the order statistic corresponding to the 50% quantile. We have extended a statistical significance test for univariate LQS regression models. We have also built a weighted model, obtained from statistically significant LQS models, where each LQS model contributes proportionally to the quantile used. Results. We have applied LQS linear regression to estimate development effort on four projects from the PROMISE data set and obtained valid and significant univariate models. Conclusions. LQS may provide a valid alternative to LMS and ordinary least squares regression for building estimation models when (1) balancing the need to exclude outliers against keeping enough data points to build statistically significant models and (2) using less strict assumptions underlying the regression technique.

Face recognition using kernel collaborative representation and multiscale local binary patterns
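As a rough illustration of the LQS criterion defined in the effort estimation abstract above (not the authors' implementation), the classic elemental-subset search can be sketched: candidate lines are fitted to random pairs of points and scored by the q-th order statistic of the squared residuals, so q = 0.5 recovers LMS. All names and the subset count are illustrative assumptions.

```python
import random

def lqs_line(points, q=0.5, n_subsets=500, seed=0):
    """Least Quantile of Squares fit of y = a + b*x: score each candidate
    line by the q-th order statistic of squared residuals (q=0.5 -> LMS)."""
    rng = random.Random(seed)
    n = len(points)
    k = min(n - 1, int(q * n))             # index of the quantile residual
    best = None
    for _ in range(n_subsets):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # vertical pair, no finite slope
        b = (y2 - y1) / (x2 - x1)
        a = y1 - b * x1
        res = sorted((y - (a + b * x)) ** 2 for x, y in points)
        if best is None or res[k] < best[0]:
            best = (res[k], a, b)
    return best[1], best[2]
```

Because only the k-th smallest squared residual is scored, a handful of gross outliers cannot pull the fitted line away from the bulk of the data.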
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0457
Collaborative Representation with regularized least squares (CRC-RLS) is a state-of-the-art face recognition method that exploits the role of collaboration between classes in representing the query sample. However, this method views the image as a point in a feature space, and its performance can degrade when the cropped face image is misaligned and/or the lighting conditions change. Histogram-based features such as Local Binary Patterns (LBP) have gained a reputation as powerful and attractive texture descriptors, showing excellent accuracy in face recognition. In this paper, LBP features are introduced into CRC-RLS to address these problems, such as illumination variation. In addition, motivated by the recent success of non-linear approaches, a new kernel-based non-linear regularized least squares classifier with collaborative representation (KCRC-RLS) is proposed. The proposed system is evaluated on two benchmarks, ORL and Extended Yale B. The results indicate a significant increase in performance compared with state-of-the-art face recognition methods. (4 pages)

An application of sequential Monte Carlo samplers: an alternative to particle filters for non-linear non-Gaussian sequential inference with zero process noise
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0413
Particle filters are not applicable in sequential parameter estimation scenarios, i.e. scenarios involving zero process noise. Sequential Monte Carlo (SMC) samplers provide an alternative sequential Monte Carlo approximation to particle filters that can address this issue. This paper aims to provide a description of SMC samplers that is accessible to an engineering audience and to illustrate their utility through application to a specific problem: processing a stream of bearings-only measurements to localise a stationary target. The SMC sampler solution is shown to outperform extended and unscented Kalman filters in non-linear scenarios (as defined by a novel metric for non-linearity that this paper describes), and offers a computational cost that is, on average, near-constant over time. Future work aims to investigate the utility of Approximate Bayesian Computation and to apply the technique within a Simultaneous Localisation and Mapping context. (8 pages)

Particle learning methods for state and parameter estimation
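To make the zero-process-noise issue in the SMC samplers abstract concrete, here is a toy data-tempered SMC sampler (an illustrative sketch, not the paper's algorithm) estimating a static Gaussian mean: particles are reweighted as each observation arrives, resampled, and then rejuvenated with a Metropolis move, which is exactly the step a plain particle filter lacks for a static parameter. The prior, step size and other constants are assumptions.

```python
import math, random

def smc_sampler(data, n=500, sigma=1.0, seed=1):
    """Data-tempered SMC sampler for a static parameter theta with
    y_i ~ N(theta, sigma^2) and a N(0, 10^2) prior. Zero process noise
    means a plain particle filter would degenerate to identical copies;
    the Metropolis 'move' step keeps the particle set diverse."""
    rng = random.Random(seed)
    loglik = lambda th, y: -0.5 * ((y - th) / sigma) ** 2
    logpost = lambda th, ys: -0.5 * (th / 10.0) ** 2 + sum(loglik(th, y) for y in ys)
    parts = [rng.gauss(0.0, 10.0) for _ in range(n)]
    seen = []
    for y in data:
        seen.append(y)
        lw = [loglik(th, y) for th in parts]               # reweight
        m = max(lw)
        w = [math.exp(l - m) for l in lw]
        parts = rng.choices(parts, weights=w, k=n)         # resample
        moved = []
        for th in parts:                                   # Metropolis move
            prop = th + rng.gauss(0.0, 0.5)
            dlp = logpost(prop, seen) - logpost(th, seen)
            if rng.random() < math.exp(min(0.0, dlp)):
                th = prop
            moved.append(th)
        parts = moved
    return sum(parts) / n
```

With observations scattered around 3.0, the particle mean settles near 3.0 even though the parameter itself never moves.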
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0412
This paper presents an approach for online parameter estimation within particle filters. Current research has mainly focused on the estimation of static parameters. However, in scenarios of target manoeuvrability, it is often necessary to update the parameters of the model to meet the changing conditions of the target. The novel aspect of the proposed approach lies in the estimation of non-static parameters which change at some unknown point in time. Our parameter estimate is updated using change point analysis, where a change point is identified when a significant change occurs in the observations of the system, such as a change in direction or velocity. (6 pages)

Bayes optimal knowledge exploitation for target tracking with hard constraints
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0411
Nonlinear target tracking is a well-known problem and its Bayes optimal solution, based on particle filtering techniques, is nowadays applied in high performance surveillance systems. Oftentimes, additional information about the environment and the target is available and can be formalized in terms of constraints on the target dynamics. Hence, a constrained version of the Bayesian filtering problem has to be solved to achieve optimal tracking performance. In this paper we consider the constrained filtering problem for the case of perfectly known hard constraints. We clarify that in such a case the particle filter (PF) is still Bayes optimal if we can correctly model the constraints. We then show that, from a Bayesian viewpoint, exploitation of the available knowledge in the prediction step or in the update step is equivalent. Finally, we consider simple techniques to exploit constraints in the prediction and update steps of a PF, and use the Kullback-Leibler divergence to illustrate their equivalence through simulations. (6 pages)

Online optimized stator flux reference approximation for maximum torque per ampere operation of interior permanent magnet machine drive under direct torque control
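As a toy illustration of exploiting a hard constraint in the update step, as discussed in the constrained tracking abstract above (a sketch under assumed scalar dynamics, not the paper's simulations), the following bootstrap particle filter zero-weights particles that violate a known constraint x >= 0:

```python
import math, random

def constrained_pf(measurements, n=1000, q=0.5, r=0.5, lo=0.0, seed=0):
    """Bootstrap particle filter with the hard constraint x >= lo
    exploited in the update step: infeasible particles get zero weight,
    so they can never survive resampling."""
    rng = random.Random(seed)
    parts = [abs(rng.gauss(0.0, 1.0)) for _ in range(n)]      # feasible init
    estimates = []
    for y in measurements:
        parts = [p + rng.gauss(0.0, q) for p in parts]        # predict
        w = [0.0 if p < lo else math.exp(-0.5 * ((y - p) / r) ** 2)
             for p in parts]                                  # constrained update
        parts = rng.choices(parts, weights=w, k=n)            # resample
        estimates.append(sum(parts) / n)
    return estimates
```

Even when noisy measurements dip below the boundary, every posterior estimate respects the constraint; the equivalent prediction-step variant would instead resample proposed moves until they are feasible.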
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0266
This paper presents an online optimized stator flux reference approximation scheme for applying the direct torque control (DTC) technique to interior permanent magnet (IPM) brushless AC (BLAC) drives with maximum torque per ampere (MTPA) operation. It is found that, by considering the dq-axis stator flux components instead of the stator flux magnitude, straightforward mathematical functions for computing the stator flux reference from the relevant torque reference to achieve MTPA operation can be derived. It is also demonstrated that, by properly selecting the initial value for approximating the proposed stator flux equation using the Newton-Raphson method, a high degree of accuracy can be obtained with only one computing step. It is shown that MTPA operation can be achieved for a DTC-based IPM BLAC drive using the proposed stator flux reference approximation scheme. Simulation results confirm the validity of the proposed method. (6 pages)

Implementation of time-varying observers used in direct field orientation of motor drives by trapezoidal integration
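The claim in the DTC abstract above, that one Newton-Raphson step suffices given a good initial value, can be illustrated generically (this is not the paper's flux equation, just the bare iteration):

```python
def newton(f, df, x0, steps=1):
    """Newton-Raphson: x <- x - f(x)/f'(x). With a well-chosen initial
    guess, even a single step can be accurate enough for real-time use."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x
```

For example, one step on f(x) = x^2 - 2 from x0 = 1.5 already lands within about 2.5e-3 of sqrt(2).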
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0162
The paper discusses the problem of implementing the state observers associated with direct field orientation (DFO) of motor drives using trapezoidal integration (the Tustin method). Typically, the discrete-time equations of observers are obtained by emulating the continuous-time equations using the Euler method (forward rectangular rule). With Euler integration, the resulting equations are simple and the real-time implementation requires little computational effort. However, Euler-based observers become inaccurate if a small sampling time cannot be used or if the motor drive operates at high frequency; this is because, as the sampling time increases, the Euler approximation of the integral loses more and more of the area under the curve. The Tustin method (trapezoidal integration) offers an interesting alternative: it is theoretically a more accurate integration method, but it is more complicated. The paper discusses the emulation procedure required to discretize continuous-time observers based on trapezoidal integration. The permanent magnet synchronous motor (PMSM) is used as an example of a time-varying plant: the paper develops a trapezoidal-integration-based observer for the PMSM and compares it with an Euler-based observer in terms of computational complexity and performance. The two observers are simulated comparatively in order to establish the conditions under which trapezoidal integration outperforms the Euler method. (6 pages)

Accurate estimation of electric vehicle speed using Kalman filtering in the presence of parameter variations
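The Euler-versus-Tustin accuracy argument in the observer abstract above can be checked on the simplest possible plant, dx/dt = a*x (a sketch with assumed numbers, unrelated to the PMSM model):

```python
import math

def discretize(a, h, steps, method):
    """Propagate dx/dt = a*x from x(0)=1 with forward Euler
    (x <- x*(1 + a*h)) or Tustin (x <- x*(1 + a*h/2)/(1 - a*h/2))."""
    x = 1.0
    for _ in range(steps):
        if method == "euler":
            x = x * (1 + a * h)
        else:  # tustin / trapezoidal
            x = x * (1 + a * h / 2) / (1 - a * h / 2)
    return x

a, h, steps = -2.0, 0.1, 10
exact = math.exp(a * h * steps)
err_euler = abs(discretize(a, h, steps, "euler") - exact)
err_tustin = abs(discretize(a, h, steps, "tustin") - exact)
```

At this fairly coarse sampling time the trapezoidal error is roughly thirty times smaller than the Euler error, at the cost of the division in the update.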
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0315
The mechanical drivetrain dynamics of electric vehicles can have a detrimental effect on the performance of the vehicle speed controller. This is mainly because feedback is only available from the motor encoder, with no measurement of the actual vehicle speed. In this paper it is shown how vehicle driveability can be greatly improved if estimates of vehicle speed and mass are obtained. This has been realised using a Kalman filter (KF) and a recursive least squares (RLS) estimator, and validated with experimental results. (6 pages)

Examination of new current control methods for modern PWM controlled AC electric locomotives
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0314
A railway electrification system supplies electrical energy to railway locomotives and multiple units. There are several different electrification systems in use throughout the world; single-phase AC network systems (25 kV 50 Hz or 15 kV 16 2/3 Hz) are widespread, and the Hungarian system is 25 kV 50 Hz AC. This article deals only with locomotives supplied from the AC network. In Hungary, locomotives driven by series-wound DC traction motors are still widely used. These vehicles are equipped with diode or thyristor rectifier circuits that inject harmonics into the AC line and distort the line voltage. In our work we examined and compared current control methods that can be achieved by "network-friendly" locomotives connected to a distorted line, and we worked out a new current control strategy that possesses several advantages. Modern locomotives endeavour to consume sinusoidal current from the AC network, in phase with the fundamental of the network voltage; in generator mode they endeavour to supply sinusoidal current back to the grid, in antiphase to the voltage fundamental. We compared current control methods against this "common" strategy. One of them can reduce the consumed root mean square (RMS) or fundamental current of a modern locomotive connected to a distorted line in motor mode; another can increase the generated RMS and fundamental current in generator mode. With these strategies the harmonic currents can be used for active power. Moreover, it turned out that the harmonic content of the network can be reduced by the "new" strategies. For the study, we built a test system with which we can model the line-side converter of a modern locomotive's DC-link frequency converter. A common solution in locomotives is for several line-side converters to feed two DC-links. In the test system we modelled these with one converter, while the motor-side voltage source inverters and the electric traction motors were taken into account as a controllable current source on the DC-link. (5 pages)

Widely linear complex extended Kalman filters
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0156
Complex signals are generally second order noncircular (improper), that is, their probability distributions are rotation dependent, and conventional algorithms that assume second order circular distributions are generally inadequate. Recently, the widely linear (augmented) complex extended Kalman filter (ACEKF), which utilises augmented complex statistics, has been proposed for dealing with the generality of complex signals, both second order circular and noncircular. In this paper, we analyse the ACEKF and show that it has an equivalent (dual) real-valued extended Kalman filter, and that this duality can be used to reduce its computational complexity. We also provide a mean square analysis of the linear conventional complex Kalman filter (CCKF) and the augmented complex Kalman filter (ACKF), and show that the ACKF has superior performance for second order noncircular signals. Simulations using both synthetic and real-world proper and improper signals support the analysis. (5 pages)

Multispeaker direction of arrival tracking for multimodal source separation of moving sources
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0143
An improvement is proposed in the audio-visual approach to the problem of separating the sources of physically moving speakers, exploiting multiple video cameras, a circular microphone array and robust spatial beamforming. The challenge of separating moving sources is that the mixing filters are time varying; as such, the unmixing filters should also be time varying, but these are difficult to determine from audio measurements alone. Therefore, the visual modality is utilized to track the direction of each speaker relative to the microphone array using a Markov chain Monte Carlo particle filter (MCMC-PF). The proposed direction of arrival (DOA) tracker reduces the computational complexity with respect to a previously employed 3-D multi-speaker position tracker. The DOA information is used in a robust least squares frequency invariant data independent (RLSFIDI) beamformer to separate the audio sources. Experimental results show that the proposed technique efficiently tracks the DOA with reduced computational complexity and enhanced source separation. (5 pages)

Audio classification based on sparse coefficients
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0153
Audio signal classification is usually done using conventional signal features such as mel-frequency cepstrum coefficients (MFCC), line spectral frequencies (LSF) and short-time energy (STE). Learned dictionaries have been shown to have promising capability for creating sparse representations of a signal and hence have the potential to be used for the extraction of signal features. In this paper, we consider using sparse features for audio classification of music and speech data. We use the K-SVD algorithm to learn separate dictionaries for the speech and music signals to represent their respective subspaces, and use them to extract sparse features for each class of signals using Orthogonal Matching Pursuit (OMP). Based on these sparse features, Support Vector Machines (SVM) are used for speech and music classification. The same signals were also classified using an SVM based on the conventional MFCC features, and the classification results were compared with those of the sparse coefficients. It was found that at lower signal-to-noise ratios (SNR), sparse coefficients give far better classification results than MFCC-based classification. (5 pages)

A switched-order FLOM STAP algorithm in heterogeneous clutter environment
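The greedy atom-selection step behind the sparse coding in the audio classification abstract above can be sketched. For simplicity this is plain matching pursuit over a hypothetical unit-norm dictionary; OMP would additionally re-fit all selected coefficients by least squares at each iteration.

```python
def matching_pursuit(signal, dictionary, k=2):
    """Greedy sparse coding: at each step pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = {}
    for _ in range(k):
        # inner product of the residual with every atom
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        j = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        coeffs[j] = coeffs.get(j, 0.0) + scores[j]
        residual = [r - scores[j] * a
                    for r, a in zip(residual, dictionary[j])]
    return coeffs, residual
```

For an orthonormal dictionary this coincides with OMP: two iterations on the signal [3, 0, 4] over the canonical basis recover exactly the two nonzero coefficients, leaving a zero residual.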
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0160
The normalized fractionally-lower order moment (NFLOM) algorithm exhibits fast convergence but low steady-state signal-to-interference-plus-noise ratio (SINR) when the order is less than two. In this paper, we propose a switched-order NFLOM algorithm that adaptively selects the best order to achieve both fast convergence and good steady-state performance. The basic idea is to constrain the order within a range of appropriate values and to compute, within the space-time adaptive processing (STAP), the best order that maximizes the output SINR. The proposed algorithm is assessed with simulated data for a heterogeneous clutter environment. The simulation results illustrate that our proposed algorithm outperforms the normalized least mean squares (NLMS) algorithm and the NFLOM algorithm, and has easier parameter setting than existing variable-order algorithms. (5 pages)

Wiener system identification using B-spline functions with De Boor recursion
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0138
A simple and effective algorithm is introduced for the identification of a Wiener system from observational input/output data. A B-spline neural network is used to approximate the non-linear static function in the Wiener system. We incorporate the Gauss-Newton algorithm with the De Boor algorithm (for both the curve and its first order derivatives) for the parameter estimation of the Wiener model, together with a parameter initialization scheme. The efficacy of the proposed approach is demonstrated using an illustrative example. (5 pages)

Compressive sensing reconstruction techniques with magnitude prior information
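The De Boor recursion referenced in the Wiener identification abstract above is standard and can be written compactly (a generic evaluation sketch; the derivative recursion the paper also uses is omitted):

```python
def de_boor(k, x, t, c, p):
    """De Boor's recursion for evaluating a degree-p B-spline curve at x.
    k: knot span index with t[k] <= x < t[k+1]; t: knot vector;
    c: control points."""
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

Two quick sanity checks: with all control points equal the curve is constant (partition of unity), and a degree-1 spline reduces to linear interpolation between control points.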
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0151
This paper considers compressive sensing (CS) reconstruction with magnitude prior information, described by the mean and covariance of the unknown signal. Towards reconstruction with minimum mean square error (MMSE), we propose several CS reconstruction algorithms that use the magnitude prior information. Numerical simulations demonstrate that our approach reduces the reconstruction distortion. Potential applications of the proposed techniques include radio spectrum surveillance, sensor networks, etc. (5 pages)

CT-based robust statistical shape modeling for forensic craniofacial reconstruction
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0126
Estimating the facial outlook from an unidentified skull is a challenging task in forensic investigations. This paper presents the definition and implementation of a craniofacial model for computerized craniofacial reconstruction (CFR). The craniofacial model consists of a craniofacial template that is warped towards an unidentified target skull. The allowed transformations for this warping are statistically defined using a PCA-based transformation model, resulting in a linear combination of the major modes of deformation. This work builds on previous work [1] in which a statistical model was constructed from facial shape variations (represented as a dense set of points) and sparse soft tissue depths at 52 craniofacial landmarks. The main contribution of this work is the extension of the soft tissue depth measurements to a dense set of points derived from a database of head CT images of 156 patients. Despite the limited amount of training data compared to the number of degrees of freedom, the reconstruction tests show good results for a large part of the test data. Root mean squared error (RMSE) values between reconstruction results and ground truth data of less than 4 mm over the total head and neck region are observed. (6 pages)

Eigen values and vectors computations on VIRTEX-5 FPGA platform: cyclic Jacobi's algorithm using systolic array architecture
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0044
Parallel iterative algorithms are major advancements in the field of computing, leading to efficient usage of hardware as well as faster results. In this paper, we describe an architecture to compute the eigenvalues and eigenvectors of a matrix with dimensions up to 50 × 50 using the cyclic Jacobi algorithm. A systolic array architecture is used to apply it to matrices of larger dimensions. We have implemented the architecture on a Virtex-5 FPGA, where it takes about 8059 LUT slices out of 69120 for matrices of dimension 50 × 50.

Self-dependent 3D face rotational alignment using the nose region
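As a software reference for the cyclic Jacobi algorithm that the FPGA architecture above implements (a plain sequential sketch, not the systolic version), each off-diagonal entry of a symmetric matrix is annihilated in turn by a plane rotation, driving the diagonal towards the eigenvalues:

```python
import math

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi: sweep over all (p, q) pairs, zeroing A[p][q] with a
    rotation of angle 0.5*atan2(2*A[p][q], A[q][q]-A[p][p]); the diagonal
    converges to the eigenvalues of the symmetric input matrix."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):   # rotate rows p and q
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k] = c * Apk - s * Aqk
                    A[q][k] = s * Apk + c * Aqk
                for k in range(n):   # rotate columns p and q
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p] = c * Akp - s * Akq
                    A[k][q] = s * Akp + c * Akq
    return sorted(A[i][i] for i in range(n))
```

Because every rotation touches only two rows and two columns, the (p, q) updates within a sweep are largely independent, which is what makes the systolic array mapping in the paper attractive.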
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0101
One of the challenging issues for 3D face recognition is face alignment. Many alignment algorithms are computationally expensive, making them unsuitable for real-time biometrics, or are not robust enough to handle large variations in pose. In this work, a novel algorithm for 3D face rotational alignment is proposed that uses the nose region. After preprocessing and nose region identification, alignment is performed by applying two energy functions to the nose footprint, identified as the largest filled region in the inverted depth map. These functions are minimised using simulated annealing and the Levenberg-Marquardt algorithm. The energy minimisation and segmentation procedures continue iteratively until a stopping criterion is met. The method has been applied to images from the Face Recognition Grand Challenge (FRGC) v2 dataset and the consistency of its alignment has been verified using the iterative closest point (ICP) algorithm. As a self-dependent algorithm, it does not require a pre-aligned image as a reference, and it also has a high computational speed, approximately three times faster than the brute-force ICP technique. (6 pages)

Clustering performance analysis of FCM algorithm on iterative relaxed median filtered medical images
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0069
Noise removal is a major concern in image processing, particularly in medical imaging. In this paper, a novel noise removal technique called the iterative relaxed median filter (IRMF) is proposed, and the effect of noise removal by median filtering on Fuzzy C-Means clustering (FCM) is analysed. Noise removal is carried out by various median filtering methods, such as the standard median filter (SMF), adaptive median filter (AMF), hybrid median filter (HMF) and relaxed median filter (RMF), and the performance of these methods is compared with the proposed method.

Analysis of time invariant state equation using blend function
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0447
"The Blend Function" is a combination of Sample-and-Hold (SHF) function set and Right Hand Side Triangular Function (RHTF) set. It is a new set of Piece-wise Constant Basis Function (PCBF). Any square integrable function can be approximated in this domain. Here, the blend function set is used to find response of a linear time invariant system described by a linear state equation and the result is compared with block pulse function domain analysis (the most fundamental component of PCBF family).A novel 8×8 transform method applied in video coding
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0846
Transform coding has long played an important role in video coding and has increasingly become a research focus, especially in the current popular standards such as H.264/AVC, AVS and HEVC. It is important to select an excellent transform method, as the transform module has a direct impact on the efficiency of a video codec. This paper proposes a new 8×8 transform method, as well as its integer approximation, applied in video coding. Experiments show that it achieves higher performance.

Bootstrapping neural network regression model for motor drive vibration optimization through genetic algorithm
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0074
This work proposes an optimization procedure based on a bootstrapped neural network interpolation approach and the genetic algorithm method. The bootstrapped neural network is used to generate designed data sets in order to estimate a mapping from the input to the output space in an intrinsic experiment in a motor drive vibration study. The optimization procedure aims to minimize the motor vibration by adjusting some drive control parameters.

An interpolation motion compensation method for video sequence
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0274
Video processing technology is becoming an indispensable bridge between people and information. Typically, a video processing pipeline consists of video signal demodulation, video decoding and video post-processing. In order to adapt to the variety of modern video formats, it is necessary to interpolate additional frames based on the original video sequence. The motion must be estimated accurately so that proper motion compensation of the frames can be performed, and a balance must be struck between the operating speed and the accuracy of the algorithm. The paper introduces a bilinear interpolation algorithm that selects the appropriate pixels along the motion direction and normalizes them. Using a common video library, extensive experiments are carried out and compared with a common interpolation algorithm. It is verified that the proposed algorithm can improve the performance of motion compensation. (6 pages)

Research on triangle rasterization and texture coordinate interpolation
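The bilinear interpolation at the heart of such motion-compensated frame interpolation can be sketched for an interior fractional pixel position (a generic sketch, not the paper's direction-dependent normalization scheme):

```python
def bilinear(img, x, y):
    """Bilinearly interpolate the image value at fractional (x, y).
    img is a 2-D list indexed as img[row][col], i.e. img[y][x];
    valid only for interior points (x0+1, y0+1 must stay in bounds)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0][x0] * (1 - dx) * (1 - dy)
            + img[y0][x0 + 1] * dx * (1 - dy)
            + img[y0 + 1][x0] * (1 - dx) * dy
            + img[y0 + 1][x0 + 1] * dx * dy)
```

The four neighbouring pixels are weighted by the areas of the opposite sub-rectangles, so the weights always sum to one.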
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0305
In this paper, the author modifies the triangle rasterization and texture coordinate interpolation algorithms. The new algorithms reduce the time taken by triangle rasterization and texture coordinate interpolation. The proposed triangle rasterization is based on an edge function and improves efficiency with equal hardware resources. The proposed texture coordinate interpolation algorithm is based on the characteristics of a lookup table; compared with the previous texture coordinate interpolation algorithm, it reduces execution time and is easier to implement. (4 pages)

Investigations on power flow solutions using Interline Power Flow Controller (IPFC)
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0336
This paper presents a mathematical model of the IPFC, termed the power injection model (PIM). The model is incorporated in a MATLAB power flow program based on the Newton-Raphson (NR) algorithm to study power flow control in transmission lines in which an IPFC is placed. By utilizing this device (IPFC), enhanced controllability can be obtained over independent transmission systems, or over lines whose sending ends are connected to a common bus. The power flow through a line can be regulated by controlling both the magnitudes and the angles of the series voltages injected by an IPFC. Generally, the IPFC employs multiple dc-to-ac inverters, each providing series compensation for a different line. A program has been written in MATLAB and numerical results are obtained for a standard 2-machine 5-bus system and the IEEE 30-bus system. The results without and with the IPFC are compared in terms of voltages and active and reactive power flows to demonstrate the performance of the IPFC model.

A deflated preconditioned conjugate gradient solver for electro-quasistatic finite element simulations
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0090
The occurrence of materials with large differences in permittivity and conductivity has a negative impact on the convergence of the preconditioned conjugate gradient solver used as the linear system solver within an implicit time integrator for electro-quasistatic problems. Combining the preconditioned conjugate gradient method with a deflation technique yields faster convergence than an incomplete Cholesky conjugate gradient method.Relations model of urban development level and macroscopic road network operation
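For context, the baseline being accelerated is a preconditioned conjugate gradient solver. The sketch below is a plain CG with a Jacobi (diagonal) preconditioner in Python; the deflation step described in the abstract is omitted, so this only illustrates the solver being improved, not the paper's method.

```python
def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient for SPD systems Ax = b with a Jacobi
    (diagonal) preconditioner. A is a dense list of rows."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # residual b - A x0
    z = [r[i] / A[i][i] for i in range(n)]        # preconditioned residual
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```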
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1387
Based on a statistical analysis of Beijing's urban development level, this article puts forward principles for selecting indicators of urban development level. It creates an integrated indicator system for urban development level consisting of 16 indicators in three aspects: social development, transportation supply and transportation demand. The article also establishes indicators for macroscopic road network operation based on the road network congestion index (CI) and the arterial network's peak-hour travel speed. Using actual statistical data for Beijing from 2003 to 2009, it identifies the input indicators for the relations model and establishes the relations model between urban development level and road network operation using the Partial Least Squares (PLS) method. The model is validated against the existing data; the results show that its precision is within 10%, which satisfactorily meets practical needs. The model developed in this article can provide important support for Beijing's future urban transportation development decision-making and transportation development strategy.A new method of parameters optimization based on self-calling SVR
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1452
Parameter optimization is a key issue in Support Vector Regression (SVR). Exhaustive search is very time-consuming, especially when large-scale samples need to be trained. A new method based on Parameters Subsection Selection and Self-Calling (PSS-SC) SVR is proposed. Parameter optimization involves the penalty coefficient c, the kernel parameter g and the insensitivity coefficient p, and the combination (c,g,p) has a great effect on the prediction accuracy of SVR; the proposed method selects the optimal parameter combination in less time to achieve better SVR performance. First, each parameter's span is divided into three sections, so that three medians are available as test points per parameter; in total, 27 parameter combinations (c,g,p) and the MSEs of the corresponding SVRs are obtained. A mapping between the 27 combinations (c,g,p) and their MSEs is then established, and the MSEs of the remaining parameter combinations are predicted from this mapping. The N parameter combinations corresponding to the N smallest predicted MSEs are selected as the TOP-N candidates. Finally, the TOP-N combinations (c,g,p) are applied to SVR to obtain their MSEs separately; the minimum MSE corresponds to the best parameter combination. Experiments on 5 benchmark datasets illustrate that the new method not only preserves prediction precision but also greatly reduces training time.Minimum vertex cover problem based on ant colony algorithm
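The trisection step above can be illustrated as follows. Here `mse_fn` is a hypothetical stand-in for training an SVR with a given (c, g, p) and measuring its MSE, and the mapping/prediction stage of the paper's method is omitted; this only sketches how the 27 median combinations arise and how TOP-N candidates are ranked.

```python
from itertools import product

def trisection_medians(lo, hi):
    """Split [lo, hi] into three equal subintervals and return the
    midpoint (median) of each."""
    step = (hi - lo) / 3.0
    return [lo + step / 2, lo + 3 * step / 2, lo + 5 * step / 2]

def top_n_candidates(spans, mse_fn, n=3):
    """Evaluate the 3*3*3 = 27 (c, g, p) median combinations with
    mse_fn and return the n combinations with the smallest MSE."""
    grids = [trisection_medians(lo, hi) for lo, hi in spans]
    combos = list(product(*grids))          # 27 combinations
    return sorted(combos, key=mse_fn)[:n]
```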
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1389
By applying the Ant-Cycle model of the ant colony algorithm and modifying the state transition probability, an approximation algorithm is obtained for the minimum vertex cover problem. The time complexity of the algorithm is O(n²), where n is the number of vertices in the network. Finally, an example is given to illustrate the operation of the algorithm.An improved complementary matching pursuit algorithm for compressed sensing signal reconstruction
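A generic ant-colony sketch for vertex cover (not the paper's Ant-Cycle model or its exact state transition rule): each ant adds vertices chosen with probability proportional to pheromone times uncovered degree until every edge is covered, and the best cover found reinforces the pheromone after evaporation.

```python
import random

def aco_vertex_cover(edges, n, ants=20, iters=50, rho=0.5, seed=0):
    """Approximate a minimum vertex cover of a graph with n vertices
    and a list of (u, v) edges via a simple ant-colony heuristic."""
    rng = random.Random(seed)
    tau = [1.0] * n                       # pheromone per vertex
    best = list(range(n))                 # trivial cover to start
    for _ in range(iters):
        for _ in range(ants):
            uncovered = set(edges)
            cover = []
            while uncovered:
                # weight = pheromone * number of still-uncovered edges
                deg = [sum(v in e for e in uncovered) for v in range(n)]
                weights = [tau[v] * deg[v] for v in range(n)]
                v = rng.choices(range(n), weights=weights)[0]
                cover.append(v)
                uncovered = {e for e in uncovered if v not in e}
            if len(cover) < len(best):
                best = cover
        # evaporate, then reinforce vertices of the best cover
        tau = [rho * t + (1.0 if v in best else 0.0)
               for v, t in enumerate(tau)]
    return sorted(set(best))
```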
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1497
The complementary matching pursuit (CMP) algorithm is analogous to the classical matching pursuit (MP) but performs the complementary action: it deletes (N-1) atoms from the sparse approximation at each iteration and keeps only one atom, whereas other algorithms select one atom and add it to the sparse approximation. This gives CMP better reconstruction quality, but retaining only one atom per iteration makes it slower. In this work, an improved CMP algorithm is proposed to shorten the reconstruction time. The proposed algorithm selects more than one atom at each iteration, following a rule from Sparsity Adaptive Matching Pursuit (SAMP) in which the number of selected atoms changes with an adaptive size (AS) at every iteration. Experimental results show that the improved method achieves better reconstruction quality in less time than Gradient Pursuit (GP), Orthogonal Matching Pursuit (OMP) and the original CMP.A low area clipping engine in 3D graphics system
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0936
With the development of computer graphics, the demand for auxiliary hardware to enhance GPU efficiency is growing. In this paper, we present a low-area algorithm to implement a dual-path clipping engine placed in the geometry module. It has a lower area than previous algorithms, achieved by adapting the interpolation calculation unit. Finally, we evaluate this algorithm on a Xilinx Virtex-2 FPGA platform.A forecasting method of trip generation based on land classification combined with OD matrix estimation
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1367
Based on traffic analysis zones (TAZ) divided according to land-use type, the trip generation volume of each TAZ can be calculated using the trip generation rates of the land classification method, which requires finding appropriate generation rates to ensure reliable results. The OD matrix estimation method, in contrast, can derive trip generation volume from road section volumes, with the drawback that it requires complete road traffic data and a reasonable prior matrix. This paper combines the advantages of the two methods. First, the trip generation volume of each TAZ is calculated from the generation rates and used to calibrate the OD matrix estimation model. Once the OD matrix is estimated, the generation rates can be refined and a new OD matrix computed; this adjustment is repeated until the accuracy requirement is met. Finally, a statistical test for judging the convergence of the iteration is presented.A kind of adaptive filter based on a new sparsity measure function
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1480
We propose a new sparsity measure function that effectively reflects vector sparsity. Using the correspondingly developed Iterative Shrinkage/Thresholding Algorithm (ISTA) in the filter's adaptation process, our algorithm reduces the impact of measurement noise on filter performance and converges accurately to the sparse solution. We also apply the Barzilai-Borwein (BB) method, originally developed for deterministic environments, to calculate the step size of adaptive filters with random input. The validity of the BB method in adaptive filters is verified by its fast convergence rate in our tests. The idea of our method can be applied to general adaptive filters in sparse environments with performance improvements. Numerical simulations demonstrate the effectiveness of the method: sparsity-based adaptive algorithms achieve lower mean square error than the original algorithms without sacrificing convergence rate.Mixed noise removal using cellular automata and Gaussian scale mixture in digital image
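For reference, the plain ISTA iteration that the abstract builds on alternates a gradient step with soft thresholding. A minimal Python sketch for minimizing 0.5*||A x - b||^2 + lam*||x||_1, with a fixed step size rather than the paper's Barzilai-Borwein step:

```python
def soft_threshold(x, t):
    """Shrinkage operator: move x toward zero by t, clipping at 0."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(A, b, lam=0.1, step=0.1, iters=200):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
    A is a list of rows; step must satisfy step < 1/||A^T A||."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth part: A^T (A x - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by componentwise shrinkage
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```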
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0986
We describe a method, based on cellular automata (CA) and the Gaussian scale mixture, for removing mixed noise from digital images contaminated by both salt-and-pepper noise and Gaussian noise. First, we learn a set of rules by training on salt-and-pepper noise images. These rules are then applied to the mixed-noise images to remove the salt-and-pepper noise by CA filtering. After this, we decompose the image into subbands using the steerable pyramid and model the neighborhoods of coefficients with a Gaussian scale mixture: the product of a Gaussian random vector and an independent hidden random scalar multiplier. With this model, a Bayesian least squares estimator is used to remove the residual noise. Denoising by this method preserves edges and details better than other methods.Analysis of the widely linear complex Kalman filter
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0228
The augmented complex Kalman filter (ACKF) has recently been proposed for the modeling of noncircular complex-valued signals, for which widely linear modelling is more suitable than a strictly linear model. This was achieved in the context of neural network training; however, the extent to which the ACKF outperforms the conventional complex Kalman filter (CCKF) in standard adaptive filtering applications remains unclear. In this paper, we show analytically that the ACKF algorithm achieves a lower mean squared error than the CCKF algorithm for noncircular signals. The analysis is supported by illustrative simulations. (4 pages)On the convergence behavior of recursive adaptive noise cancellation structure in the presence of crosstalk
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0230
In this paper, we address the problem of determining the equilibrium point of a feedback structure for two-channel adaptive noise cancellation in the presence of crosstalk. We focus on an important characteristic of adaptive filters, namely the steady-state mean-square error that remains after the algorithm has converged, independently of the particular algorithm considered. Our approach relies on an analysis of the relationships between the desired signals and their artifacts (distortion, residual noise) at the system outputs. We show that the equilibrium state is reached when the energy of the distortion on the output signals is the same on each channel. Using this equilibrium state, we provide answers to questions for which no satisfactory answers are currently available for this structure. Examples are given to illustrate that, even qualitatively, these answers can be good approximations. Simulation results support our claims. (5 pages)Direct learning architectures for digital predistortion of nonlinear Volterra systems
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0226
Digital compensation of nonlinear distortion caused by the nonlinear characteristics of electronic or electromechanical devices is becoming increasingly important. This paper considers Direct Learning Architectures (DLAs) for the predistortion of nonlinear systems described by Volterra series. The adaptive predistorter, connected in tandem with the nonlinear system, can be modeled as a Volterra filter or using linear and nonlinear FIR filters. The coefficients of the adaptive predistorter are estimated using two approaches. The first is based on the Nonlinear Filtered-x Least Mean Squares (NFxLMS) algorithm. The second is based on the Spectral Magnitude Matching (SMM) method, which minimizes the sum squared error between the spectral magnitudes of the nonlinear system's output signal and the desired signal; the coefficients of the predistorter in this case are estimated recursively using a generalized Newton iterative algorithm. A comparative simulation study of these architectures and approaches is given. (5 pages)Method of separation for characterized curve errors of helicoidal surfaces based on dynamic GM(1,1) and least-squares
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.1288
For evaluating the characterized curve errors of helicoidal surfaces, it is very important to separate the errors into form errors and angle errors. The existence of abnormal data greatly reduces the quality of the measurement data and leads to inaccurate separation of the characterized curve errors; detecting and removing abnormal data is therefore critical. The common characteristic of existing methods for detecting abnormal data is that they depend strongly on prior knowledge and on the sample size of the primary measurement data, and they require large amounts of calculation. Unfortunately, it is difficult to obtain large sample sizes in some measurements, so the existing methods are limited in their applications. Based on the dynamic GM(1,1), this paper presents a novel and effective method for detecting abnormal data. The model built by applying the dynamic GM(1,1) to the primary measurement data approximates the normal data well while remaining insensitive to abnormal data; by comparing the model with the primary measurement data, abnormal data can be effectively detected. The least-squares method is then used to separate the characterized errors into form errors and angle errors.Online bayesian inference for mixture of known components
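A basic GM(1,1) fit (the standard grey model, not the paper's dynamic variant) accumulates the series, regresses the raw values against background values, and forecasts with an exponential:

```python
from math import exp

def gm11_forecast(x0):
    """Fit a GM(1,1) grey model to a positive series x0 and return
    the one-step-ahead forecast of the next value."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # least squares for the grey equation x0(k) + a*z(k) = b,
    # i.e. regress y = -a*z + b
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -slope, (sy - slope * sz) / m
    c = x0[0] - b / a
    x1_hat = lambda k: c * exp(-a * k) + b / a            # fitted response
    return x1_hat(n) - x1_hat(n - 1)                      # next raw value
```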
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0496
In this paper, a Bayesian approach is proposed for parameter inference of mixture models. There is, however, a difficulty with computational cost, since the standard conjugate prior is not available in this case. Recently, the Variational Bayes (VB) algorithm has become a practical solution due to its computational efficiency. The objective of this paper is to examine the full derivation of the VB approximation and to explain how VB reduces the dimensional expansion of the posterior distribution at each Bayesian inference step, especially in the case of the hidden Markov model (HMM). Two interesting applications, model order inference and inference of an HMM, illustrate this effective procedure.Super resolution image reconstruction algorithm based on S+P transformation and interpolation
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0683
Based on the characteristics of image stratification transformation and interpolation processing, this paper proposes a super resolution reconstruction method for image sequences that combines the S+P transformation with an interpolation algorithm to improve image resolution. Experiments show that this method properly retains the details of the original image. After interpolation and S+P reconstruction, the resulting image has higher resolution, better visual quality, a higher Peak Signal to Noise Ratio (PSNR) and richer detail. This algorithm is therefore an effective method of super resolution image reconstruction.An incremental least-mean square algorithm with adaptive combiner
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0667
In this paper we propose a new incremental least-mean square (LMS) algorithm with an adaptive combination strategy. The adaptive combination strategy improves the robustness of the proposed algorithm to spatial variation of the signal-to-noise ratio (SNR). The advantage of our algorithm is most pronounced in inhomogeneous environments where some nodes in the network are noisy (have low SNR). Simulation results show that the proposed algorithm outperforms the standard incremental LMS algorithm.Comparative analysis of vehicle to pole collision models established using analytical methods and neural networks
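The plain incremental LMS recursion underlying the abstract circulates a weight estimate around a ring of nodes, each node refining it with its own data. A minimal sketch without the paper's adaptive combiner:

```python
def incremental_lms(data, w0, mu=0.01, sweeps=50):
    """Incremental LMS over a ring of nodes: the weight estimate w
    visits each node in turn; node k holds a (regressor u, desired d)
    pair and applies w <- w + mu * u * (d - u.w)."""
    w = list(w0)
    for _ in range(sweeps):
        for u, d in data:                    # one pass around the ring
            err = d - sum(ui * wi for ui, wi in zip(u, w))
            w = [wi + mu * err * ui for wi, ui in zip(w, u)]
    return w
```

The paper's contribution is to weight each node's contribution adaptively by its local SNR rather than treating all nodes equally as this sketch does.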
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0817
This paper presents a comparison between two approaches to modeling vehicle-to-pole collisions. First, analytical and curve-fitting methods are explained and then used to create lumped-parameter models. Given the parameters of such systems and their responses, we proceed to a brief description of the radial basis function neural network and its application to identifying the coefficients of the linear models. A comparative analysis of the models formulated in these two ways is performed. (6 pages)Extraction of traffic status based on spatial-temporal trajectory reconstruction
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.1133
Intelligent transportation systems (ITS) are recognized as an important means of solving traffic problems. The most crucial ways to implement ITS are real-time traffic control and dynamic traffic guidance, both of which depend on timely and accurate acquisition of traffic status. This paper puts forward a prototype of IVICS (integrated vehicle-infrastructure cooperation system), which completely changes the traditional traffic information collection method. In IVICS, equipped vehicles can be regarded as moving sensors on the road network that collect local traffic parameters in a distributed manner and transmit them to roadside stations. The paper also proposes a traffic status extraction method: by interpolating the incomplete trajectory data of equipped vehicles, we reconstruct spatio-temporal trajectories to estimate the traffic density, average speed and travel time of a given road segment. Simulations on the NGSIM dataset show that even when the equipped-vehicle rate is very low, the method yields precise estimates of traffic status, demonstrating the effectiveness of the algorithm.FPGA based high accuracy optical flow algorithm
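Trajectory reconstruction by interpolation can be sketched simply: given sparse (time, position) samples of one equipped vehicle, linear interpolation recovers intermediate positions, from which an average speed follows. This is a hypothetical single-vehicle simplification; the paper handles incomplete multi-vehicle data and derives density and travel time as well.

```python
def interp_position(samples, t):
    """Linearly interpolate a vehicle's position (distance along the
    road) at time t from sparse (time, position) samples, assumed
    sorted by time."""
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside the sampled range")

def mean_speed(samples, t_start, t_end):
    """Average speed over [t_start, t_end] from the reconstructed
    trajectory."""
    ds = interp_position(samples, t_end) - interp_position(samples, t_start)
    return ds / (t_end - t_start)
```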
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0497
Motion estimation of a scene is an interesting problem in computer vision, since it is the basis for the dynamic analysis of a scene. However, this task is computationally intensive for conventional processors. In this work, an FPGA-based hardware architecture for real-time motion estimation is proposed. The algorithm implemented in hardware is a gradient-based inverse finite element method for optical flow computation. It performs motion estimation by calculating the gradient, Laplacian and velocity of each pixel in a parallel design, which improves computational speed. The algorithm has been benchmarked against many well-known algorithms and shows superior performance in terms of average angular error and standard deviation. The FPGA design is presented with preliminary results and discussed.Combinatorial double auctions based on subgradient algorithm
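The classic gradient-based optical flow iteration of Horn and Schunck uses exactly the quantities named above (image gradients, a Laplacian via neighbourhood averages, per-pixel velocities). Below is a software sketch of that standard method, not the authors' inverse finite element formulation:

```python
def avg(f, x, y):
    """4-neighbour average with edge clamping (a simple Laplacian
    surrogate, as in Horn-Schunck)."""
    h, w = len(f), len(f[0])
    s = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        s += f[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
    return s / 4.0

def horn_schunck(Ix, Iy, It, alpha=1.0, iters=200):
    """Iteratively blend the neighbourhood-average flow with the
    brightness-constancy constraint Ix*u + Iy*v + It = 0.
    Ix, Iy, It are per-pixel gradient grids; returns flow (u, v)."""
    h, w = len(Ix), len(Ix[0])
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        nu = [[0.0] * w for _ in range(h)]
        nv = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ua, va = avg(u, x, y), avg(v, x, y)
                num = Ix[y][x] * ua + Iy[y][x] * va + It[y][x]
                den = alpha ** 2 + Ix[y][x] ** 2 + Iy[y][x] ** 2
                nu[y][x] = ua - Ix[y][x] * num / den
                nv[y][x] = va - Iy[y][x] * num / den
        u, v = nu, nv
    return u, v
```

The per-pixel independence of each update is what makes this family of algorithms attractive for FPGA parallelization.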
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0569
Due to its higher efficiency than fixed-price trading and its ability to discover equilibrium prices quickly, the auction is a popular way of trading goods. In business-to-business (B2B) e-commerce, many goods with complementarities or substitutabilities are traded using auctions. Combinatorial auctions can be applied to improve trading efficiency in B2B marketplaces: a bidder can bid on a combination of goods with one limit price for the whole combination, which improves efficiency when the procurement of one good depends on the acquisition of another. Most combinatorial auctions studied in the literature are one-sided: either multiple buyers compete for commodities sold by one seller, or multiple sellers compete for the right to sell to one buyer. Combinatorial double auctions, in which both sides submit demand or supply bids, are much more efficient than several one-sided auctions combined; however, they are notoriously difficult to solve from a computational point of view. In this paper, we formulate the combinatorial double auction problem and propose an algorithm for finding near-optimal solutions. The algorithm is developed by decomposing the combinatorial double auction problem into several subproblems and applying the subgradient algorithm to iteratively adjust the shadow prices for the subproblems.
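The subgradient idea can be shown on a drastically simplified single-good double auction (a hypothetical reduction, not the paper's combinatorial decomposition): excess demand at the current shadow price is a subgradient of the Lagrangian dual, and the price follows it with a diminishing step size.

```python
def subgradient_price(demand_bids, supply_bids, step0=1.0, iters=100):
    """Find a market-clearing shadow price by subgradient iteration:
    at price p, buyers bidding above p buy and sellers asking below
    p sell; the price moves along the excess demand with a 1/k step."""
    p = 0.0
    for k in range(1, iters + 1):
        buy = sum(1 for b in demand_bids if b > p)
        sell = sum(1 for a in supply_bids if a < p)
        p = max(0.0, p + (step0 / k) * (buy - sell))  # follow the subgradient
    return p
```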