New Publications are available for Computational complexity
http://dl-live.theiet.org
New publications for this title are now available online.
Please follow the links to view the publications.

Reinforcement learning based ALOHA for multi-hop wireless sensor networks with informed receiving
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0582
In this paper, an ALOHA-based Medium Access Control (MAC) protocol, RL-ALOHA with Informed Receiving, is proposed for multi-hop Wireless Sensor Networks (WSNs); it overcomes ALOHA's traditional problem of low throughput while retaining its advantages of simplicity, low computational complexity and low overhead. Reinforcement Learning (RL) is implemented as an intelligent slot assignment strategy to avoid collisions with minimal additional overhead. To improve energy efficiency, Informed Receiving (IR) and ping packets are applied to avoid idle listening and overhearing. The simulation results show that this approach significantly increases energy efficiency, achieves over twice the throughput of slotted ALOHA and reduces the end-to-end delay. (6 pages)

High-capacity colour image watermarking using multi-dimensional Fourier transforms and semi-random LDPC codes
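The RL-ALOHA abstract above describes reinforcement-learning slot assignment; a minimal, hypothetical sketch of such a slot chooser follows. The slot count, reward values, learning rate and ε-greedy policy are assumptions for illustration, not details taken from the paper.

```python
import random

# Illustrative sketch only: a stateless Q-learning slot chooser in the spirit
# of RL-ALOHA. Rewards, learning rate and exploration rate are assumptions.
class SlotLearner:
    def __init__(self, num_slots, alpha=0.1, epsilon=0.1):
        self.q = [0.0] * num_slots   # one Q-value per time slot
        self.alpha = alpha           # learning rate
        self.epsilon = epsilon       # exploration probability

    def choose_slot(self):
        if random.random() < self.epsilon:                      # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, slot, success):
        reward = 1.0 if success else -1.0   # +1 on success, -1 on collision
        self.q[slot] += self.alpha * (reward - self.q[slot])
```

A node would call `choose_slot()` each frame and `update()` with the observed outcome, gradually settling on a slot where its transmissions do not collide.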
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0456
In this paper, we propose a colour image watermarking scheme based on the Spatio-Chromatic Fourier Transform (SCFT) with spread-spectrum signalling, enhanced by error correction using semi-random low-density parity-check (SR-LDPC) codes. The SCFT enables efficient use of the embedding properties of complex Fourier representations without incurring additional computational complexity. Watermark detection is based on a statistical maximum-likelihood approach using a Weibull distribution, known to be well suited to modelling the SCFT coefficients. The proposed embedding scheme is image-adaptive and provides control over the watermark embedding strength according to the local properties of the SCFT representation of the host image. The efficiency and data-hiding capacity of the proposed watermark embedding scheme are found to be greatly enhanced by the use of SR-LDPC codes. Simulation results and comparisons with colour-component Discrete Fourier Transform (DFT)-based schemes demonstrate the increased robustness of the proposed LDPC-coded colour image watermarking algorithm against standard attacks, including additive white Gaussian noise and JPEG compression. (5 pages)

A new mutual information based similarity measure for medical image registration
http://dl-live.theiet.org/content/conferences/10.1049/cp.2012.0424
Medical image registration (IR) is the systematic process of aligning separate images, often from different modalities, within a common reference framework so that complementary information can be combined and compared. This paper presents a new similarity measure that uses Expectation Maximization for Principal Component Analysis allied with mutual information (EMPCA-MI) for medical IR. The new measure has been analysed on multimodal, three-band magnetic resonance images (MRI), T1-, T2- and PD-weighted, in the presence of both intensity non-uniformities (INU) and noise. Quantitative and qualitative experimental results clearly demonstrate the improved robustness and lower computational complexity of the new EMPCA-MI paradigm compared with existing MI-based similarity measures for various MRI test datasets. (6 pages)

Multispeaker direction of arrival tracking for multimodal source separation of moving sources
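The similarity measure above builds on mutual information; for readers unfamiliar with it, here is a sketch of the classical histogram-based MI between two intensity sequences. This is the standard baseline quantity, not the paper's EMPCA-MI measure, and the bin count is an arbitrary assumption.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    # a, b: equal-length sequences of intensities in [0, 256).
    # Quantize intensities into joint-histogram bins, then sum
    # p(i,j) * log2( p(i,j) / (p(i) * p(j)) ) over occupied bins.
    qa = [x * bins // 256 for x in a]
    qb = [x * bins // 256 for x in b]
    n = len(a)
    pj = Counter(zip(qa, qb))   # joint histogram
    pa = Counter(qa)            # marginal histogram of a
    pb = Counter(qb)            # marginal histogram of b
    mi = 0.0
    for (i, j), c in pj.items():
        pij = c / n
        mi += pij * math.log2(pij * n * n / (pa[i] * pb[j]))
    return mi
```

For identical inputs, MI equals the entropy of the quantized intensities; for statistically independent inputs it tends to zero, which is why MI-based measures reward good alignment.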
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0143
An improvement is proposed to the audio-visual approach for separating physically moving speakers, exploiting multiple video cameras, a circular microphone array and robust spatial beamforming. The challenge in separating moving sources is that the mixing filters are time-varying; the unmixing filters should therefore also be time-varying, but these are difficult to determine from audio measurements alone. The visual modality is therefore utilized to track the direction of each speaker relative to the microphone array using a Markov chain Monte Carlo particle filter (MCMC-PF). The proposed direction of arrival (DOA) tracker reduces the computational complexity with respect to a previously employed 3-D multi-speaker position tracker. The DOA information is used in a robust least squares frequency invariant data independent (RLSFIDI) beamformer to separate the audio sources. Experimental results show that the proposed technique efficiently tracks the DOA with reduced computational complexity and enhanced source separation. (5 pages)

Low complexity robust beamformer for general-rank signal model based on uniform linear array
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0165
Based on the uniform linear array (ULA) structure and the resultant persymmetric structure of its covariance matrices, a low complexity robust beamformer for an incoherently scattered, general-rank signal model is proposed by introducing a preprocessing transformation matrix. The computational complexity is reduced significantly owing to a real-valued closed-form solution for the optimum weight vector. Moreover, the proposed algorithm can achieve a higher output signal-to-interference-plus-noise ratio (SINR). (4 pages)

A new technique to solve minimum spanning tree (MST) problem using modified shuffled frog-leaping algorithm (MSFLA) with GA cross-over
http://dl-live.theiet.org/content/conferences/10.1049/ic.2011.0046
A minimum spanning tree (MST) of a connected, weighted (non-negative), undirected graph G = (V, E) is a tree that connects all vertices of G using edges of minimum total weight. Finding the MST of a graph is treated here as an NP-hard problem. In this paper, a new technique is proposed to solve the MST problem using a Modified Shuffled Frog-Leaping Algorithm (MSFLA) with Genetic Algorithm (GA) cross-over. SFLA is a meta-heuristic search method inspired by natural memetics. It combines the benefits of both the meme-based Memetic Algorithm (MA) and social-behaviour-based Particle Swarm Optimization (PSO). In this paper, SFLA is modified and applied to the MST problem. Extensive experimental results show that the algorithm performs very well compared to other algorithms and gives accurate results with a minimum number of iterations.

A robust segmentation algorithm for branch structure and its implementation
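For reference, the classical Kruskal's algorithm, a common exact baseline against which MST heuristics such as the proposed MSFLA can be compared, can be sketched as follows; the example graph in the usage is hypothetical.

```python
def kruskal_mst(n, edges):
    # n: number of vertices (0..n-1); edges: list of (weight, u, v).
    # Greedily add the cheapest edge that joins two different components,
    # using a union-find structure with path halving.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # edge connects two components: keep it
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

On a 4-cycle with weights 1, 2, 3, 4, the tree keeps the three cheapest edges for a total weight of 6.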
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0298
Medical image segmentation is a serious challenge in medical image processing: medical images exhibit low contrast, variable organizational characteristics, ambiguous tissue borders and complex microstructures (e.g. blood vessels, nerves). These features hinder the segmentation of branch structures in medical images. This paper presents a robust medical segmentation algorithm that combines the active contour model with a region-growing segmentation method. General locations, given by the region-growing segmentation method, act as the initial position of the snake model for segmentation. In this way, we can obtain branch structures in abdominal CT, such as the abdominal aorta, the celiac trunk and the mesenteric artery. The experimental results show that the revised algorithm achieves a better practical effect through surface rendering with VTK. It can help doctors diagnose illness more objectively. (5 pages)

Research on underground wireless video encoding algorithm based on region of interest for interprediction
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0992
Low illumination and the harsh environment in underground coal mines greatly degrade the quality of video images. Exploiting the fact that, at an underground wireless video monitoring point, the background is essentially static and only the foreground objects move, this article proposes an inter-prediction algorithm based on regions of interest. Simulations show that the algorithm can greatly reduce the complexity of video encoding and the encoding rate while the video quality remains essentially unchanged or is even enhanced.

A study of performance of longest common subsequence identification with sequence identity of biosequences
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0446
Searching biosequences for clues to experimental outcomes is an important area of research for computational scientists in bioinformatics. The sequences are growing longer and demand ever more computational power for the results to yield benefits to society. More often, the computational results are used to obtain a quick indication of the expected outcome of a lengthy laboratory process. The identity and similarity between sequences provide the basic clues and guidance on how to proceed with the work. This paper analyses the SRLCS algorithm against tools such as CLUSTAL-W and MUSCLE in identifying the Longest Common Subsequence (LCS) with reference to identity between the biosequences.

A novel heuristic for overlay mapping with enhanced resilience and QoS
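The longest common subsequence referred to in the abstract above can be computed with the standard dynamic program; a minimal sketch (the example sequences in the usage are hypothetical, not from the paper's datasets):

```python
def lcs(a, b):
    # Classic O(len(a) * len(b)) dynamic program: dp[i][j] is the LCS length
    # of the prefixes a[:i] and b[:j]; a backtrack then recovers one LCS.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For example, `lcs("GATTACA", "ATTAC")` recovers `"ATTAC"`, since all five symbols of the second sequence occur in order within the first.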
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0727
The problem of overlay mapping with enhanced resilience and QoS is NP-hard, and previous heuristics are oblivious to substrate topology information and thus cannot provide effective protection. In applications involving multiple layers, a single link failure in the lower layer can cause failures of all the upper-layer links that traverse it, so diversity of the working and backup paths is essential to ensure resilience. In this paper, a novel and effective heuristic that considers the substrate network topology information is proposed. The effectiveness of the new heuristic is verified through extensive simulations. As confirmed by the evaluation, the proposed heuristic can ensure that physical paths are diversified, even though additional substrate nodes are involved in the overlay mapping solution. Moreover, the robustness of the proposed heuristic under incomplete substrate network information is also examined.

A novel self-adaptive differential evolution algorithm for efficient design of multiplier-less low-pass FIR filter
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0460
A variety of real-world optimization problems can be successfully solved by employing a powerful technique called the Differential Evolution (DE) algorithm. The popularity of DE has grown tremendously since its inception, as it includes very few control parameters. However, the selection or tuning of these parameters plays a crucial role in determining the performance of the algorithm in terms of its convergence behaviour. In this paper, a novel Self-Adaptive DE (SADE) approach is proposed for the design of a multiplier-less low-pass linear-phase FIR filter to improve the computational efficiency of the algorithm. For this purpose, the convergence behaviour of the SADE technique is presented and compared with that of the traditional DE technique. Additionally, the performance of the SADE-optimized filter is evaluated in terms of its magnitude response. The corresponding magnitude response for the DE-optimized filter is also presented for comparison. It is established that the proposed SADE algorithm outperforms the traditional DE for this particular design problem.

Research on mixed set programming for aircraft schedule recovery
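For context, one generation of the classical DE/rand/1/bin scheme that self-adaptive variants such as SADE build on can be sketched as follows; the population size, F and CR values in the usage note are assumptions, not the paper's settings.

```python
import random

def de_step(pop, f, cr, fitness):
    # One generation of classic DE/rand/1/bin (illustrative; not the paper's
    # SADE). For each target vector x: pick three distinct others a, b, c,
    # build mutant a + f*(b - c), apply binomial crossover with rate cr
    # (jrand guarantees at least one mutated component), then keep the
    # trial only if it is no worse than x (greedy selection).
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(len(x))
        trial = [a[k] + f * (b[k] - c[k])
                 if random.random() < cr or k == jrand else x[k]
                 for k in range(len(x))]
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop
```

Because selection is greedy, the best fitness in the population never worsens from one generation to the next, which is the convergence behaviour that parameter tuning (or self-adaptation) then accelerates.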
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.1371
Bad weather and aircraft failures often prevent an airline's flight schedule from being executed as planned. The aircraft schedule recovery problem is a typical NP-hard problem. Differing from Mixed Integer Programming, this research proposes a Mixed Set Programming (MSP) method to solve the problem by building a Natural Constraint Language model and designing efficient search rules. Instances of different scales are tested using both a Greedy Simulated Annealing Algorithm and MSP to analyse the feasibility of the MSP method in terms of solution quality and time efficiency.

An abstract to calculate big O factors of time and space complexity of machine code
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0483
Algorithms are generally written to solve problems or mechanize processes through machines, and several algorithms may exist for the same problem; the efficiency of the resulting algorithms therefore needs to be quantified. The factors to be quantified include time complexity, space complexity, administrative cost and speed of implementation. One of the effective methods for studying the efficiency of algorithms is Big-O notation: although Big-O notation consists of mathematical functions and their step-by-step comparison, it illustrates a methodology for measuring the complexity factor. The output is always expected to be a smooth line or curve with a small, static slope.

Optimal quantization for DC coefficient of Wyner-Ziv frame in unidirectional distributed video coding
http://dl-live.theiet.org/content/conferences/10.1049/cp.2011.0839
This paper presents a rate-distortion-based optimal quantization scheme for the DC coefficient in a unidirectional Wyner-Ziv (WZ) video coding system. In the proposed scheme, a new rate-distortion (RD) model is developed to determine an optimal quantization step (OQS) for the DC coefficient. More specifically, the effects of the quantization step on both distortion and rate are considered in the presented RD model, and the OQS, optimal in terms of RD cost, can be solved for the DC coefficient at the expense of a slight increase in coding complexity. The RD performance of the WZ video coding system using the proposed scheme is compared with that of the baseline system without the OQS. The results show that quantization with the OQS for the DC coefficient can improve the average PSNR of WZ frames by 0.3~1.0 dB.

A/D conversion based on signal decomposition for DSP applications
http://dl-live.theiet.org/content/conferences/10.1049/cp.2010.0614
Digital Signal Processing (DSP) systems operating on analog inputs are often limited by the sampling rates of Analog-to-Digital Converters (ADCs). ADCs are critical components, since they tend to determine the overall system performance. Hence it is important that ADCs possess characteristics such as a high sampling rate, low conversion delay and high resolution that enhance system performance. In this paper, we propose a novel ADC for signal processing based on signal decomposition. The proposed scheme folds the input signal symmetrically prior to quantization by high-speed comparators. However, unlike folding ADCs that divide the input analog signal into two components, corresponding to coarse and fine quantization, the proposed converter minimizes delay and circuit complexity by introducing a Folding Bit. It does not use explicit circuits for coarse and fine quantization but uses only folding circuits to generate both the most significant and least significant bits directly, eliminating the synchronization problems normally encountered in the recombination of coarse and fine quantized bits.

Robustness with respect to error specifications
http://dl-live.theiet.org/content/conferences/10.1049/ic.2010.0121
Summary form only given. Formal specifications used in automatic verification typically describe the desired behavior of a system only in the absence of environment failures. That is, specifications are often of the form A -> G, where A is an assumption on the environment and G is the guarantee the system should provide. This approach leaves the behavior of the system unspecified when A is not fulfilled, and neither verification tools nor synthesis tools take such behavior into account. In practice, however, the environment may fail, due to incomplete specifications, operator errors, faulty implementations, transmission errors, and the like. Thus, a system should not only be correct, it should also be robust, meaning that it behaves reasonably even in circumstances that were not anticipated in the requirements specification. In this talk we present a formal notion of robustness through graceful degradation for discrete functional safety properties: a small error by the environment should induce only a small error by the system, where the error is defined quantitatively as part of the specification, for instance as the number of failures. Given such an error specification, we define a system to be robust if a finite environment error induces only a finite system error. As a more fine-grained measure of robustness, we define the notion of k-robustness, meaning that on average the number of system failures is at most k times larger than the number of environment failures. We show that the synthesis question for robust systems can be solved in polynomial time as a one-pair Streett game and that the synthesis question for k-robust systems can be solved using ratio games. Ratio games are a novel type of graph game in which edges are labeled with a cost for each player, and the aim is to minimize the ratio of the sums of these costs.
We show that ratio games are positional, that the associated decision problem is in NP ∩ co-NP, and that they can be solved in pseudopolynomial time. They can be solved in polynomial time if the cost of a failure is assumed to be constant.

Multi-view video coding with adaptive selection of prediction mode based on hierarchical B picture
http://dl-live.theiet.org/content/conferences/10.1049/cp.2009.1960
In this paper, we analyze the hierarchical B picture (HBP). Aiming at its high computational complexity, the concepts of forward region and backward region are introduced, and a new multi-view video coding scheme is proposed. The new scheme can analyze the correlations without any additional complexity, using a by-product of encoding the pictures in the current temporal level, which enables the scheme to adaptively select the prediction mode for the next level within the forward or backward region. Experimental results show that our scheme can efficiently reduce computational complexity while maintaining high coding efficiency compared with HBP, particularly for strongly temporally correlated sequences.

Arithmetic research about Boolean operation of surfel model based on hierarchical bounding volumes
http://dl-live.theiet.org/content/conferences/10.1049/cp.2009.1480
As a CSG primitive, the point has not been studied deeply in digital geometry applications. Because point models have no topology or adjacency information, there is no need to maintain topological consistency during modeling, but this makes rendering and preserving surface flow more difficult. Reviewing point-model-based geometric representations, this paper puts forward an improved surfel data structure, achieves complex surfel modeling with constructive solid geometry techniques and introduces Boolean operators for complex surfel modeling. Based on the point modeling process, it solves the quick-intersection problem and lowers the time complexity of the algorithm using hierarchical bounding volumes. The paper describes an intersection test based on grid-topological relations to further accelerate intersection testing and shows that, with the bounding-box approach to the Boolean operators based on the improved basic surfel data structures, the time complexity for n primitives is reduced from O(n^2) to O(log^(d-1) n + K_b). (5 pages)

In-depth investigation of the optimal coordinated voltage emergency control model
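The hierarchical-bounding-volume approach described above rests on cheap box-overlap tests to cull most primitive pairs before exact intersection; a minimal axis-aligned bounding box (AABB) overlap check (the coordinates in the usage are hypothetical):

```python
def aabb_overlap(a, b):
    # a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) axis-aligned boxes.
    # Two boxes overlap iff their intervals overlap on every axis.
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[k] <= bhi[k] and blo[k] <= ahi[k] for k in range(3))
```

In a bounding-volume hierarchy, this test is applied top-down: if two nodes' boxes do not overlap, none of the surfels beneath them can intersect, so whole subtrees are skipped.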
http://dl-live.theiet.org/content/conferences/10.1049/cp.2009.1796
This paper investigates concerns associated with the previously proposed optimal coordinated voltage emergency control (OCVEC) model. The issues of concern include the influence of control delay on the performance of OCVEC, the effects of load-model parameter errors on OCVEC performance, and the methodology for selecting the vulnerable buses to be included in the voltage stability performance index. In addition, the computational complexity of the OCVEC model is assessed. (6 pages)

Application and effect of mobile models on robotics
http://dl-live.theiet.org/content/conferences/10.1049/ic_20080268
The implications of signed epistemologies have been far-reaching and pervasive. Given the current status of event-driven methodologies, scholars shockingly desire the compelling unification of SMP and Moore's Law, which embodies the intuitive principles of complexity theory. Using a neural network, experiments are designed and carried out. Through the experiments and results, it is concluded that the efforts concentrate on validating that neural networks can be made random, flexible, and introspective.

An AST and context based duplicated code detecting method
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080748
This paper presents a duplicated code detecting method based on the AST (Abstract Syntax Tree) and context, and designs a flexible rule-based parameterized matching method that is able to locate consecutive duplicated code by applying a string matching algorithm to certain special kinds of AST nodes. The method then calculates the searching distance according to the variables' associations between code lines in the context, in order to detect non-consecutive duplicated code. Both the time complexity and the space complexity of the method are O(N). The experimental results show that the method can detect non-consecutive duplicated code.

A partial protection scheme based on layer dependency of scalable video coding
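The AST-matching idea above can be illustrated with Python's own `ast` module: hash a normalized dump of every statement subtree and report keys that occur more than once. This is a sketch of exact-match detection only; the paper's parameterized matching and context-distance search are not reproduced here.

```python
import ast
from collections import defaultdict

def find_duplicate_statements(source):
    # Bucket statements by a normalized textual dump of their subtree
    # (field names and positions stripped); any bucket with more than one
    # line number marks exactly duplicated code.
    tree = ast.parse(source)
    buckets = defaultdict(list)
    for node in ast.walk(tree):
        if isinstance(node, ast.stmt):
            key = ast.dump(node, annotate_fields=False, include_attributes=False)
            buckets[key].append(node.lineno)
    return {k: v for k, v in buckets.items() if len(v) > 1}
```

Running it on a snippet where `x = a + b` appears on lines 1 and 3 returns a single bucket listing both line numbers.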
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080416
MPEG scalable video coding (SVC), which is also known as the H.264 scalable extension, makes video adaptation more flexible to meet user consumption preferences, application requirements, and varying network conditions. An SVC bit stream consists of one or more scalable layers that form a hierarchical dependency, such that a layer depends on its lower layers. Therefore a layer cannot be decoded perfectly if all of its lower layers have not been decoded a priori. In this paper, we propose layered protection, a partial protection scheme for SVC bit streams that fully utilizes the layer dependency characteristics of SVC. The layered protection scheme can achieve a protection degree as high as that given by protection of the entire SVC bit stream (referred to as total protection), and therefore reduces the complexity of protecting/unprotecting SVC bit streams. We use a modified structural similarity index (SSIM) to show the effectiveness of the layered protection scheme in terms of visual perception degradation. Our experiments show that the layered protection scheme is not only effective in terms of visual degradation but also efficient in terms of computational complexity. For our test videos, on average, we only need to protect 42.45% of the input bit stream to achieve the same visual perception degradation as that given by total protection (protecting 100% of the input bit stream).

Fast object-based image registration using principal component analysis for super-resolution imaging
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080404
In this paper, an object-based image registration method with real-time performance and no constraints on the three registration parameters involved (i.e., translation, rotation, and scaling) is proposed. The coordinate values of the translation parameters can be quickly estimated from the center of mass of the binary mask corresponding to the segmented region of interest. For computing the values of the rotation and scaling parameters, principal component analysis (PCA) is conducted on a 2 × 2 symmetric covariance matrix established from the same binary mask. The formulas derived from the eigenvalues and eigenvectors of the covariance matrix provide accurate estimates of the amount of rotation and scaling. The proposed image registration method is further compared with a frequency-domain image registration approach in terms of performance in a super-resolution imaging application using both real and synthetic video sequences. Experimental results clearly show that the proposed method achieves superior performance in terms of the reconstructed high-resolution images (due to its accurate registration) and real-time delivery (due to its low computational complexity).

Video retrieval using VQ-based global motion features
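The PCA step described above, eigen-analysis of a 2 × 2 symmetric covariance matrix built from the binary mask, has a closed form; a minimal sketch follows. The point coordinates in the usage are hypothetical, and the scale proxy returned is one possible choice, not necessarily the paper's formula.

```python
import math

def mask_orientation_and_scale(points):
    # points: (x, y) pixel coordinates of a binary mask's foreground.
    # Returns (cx, cy, angle, spread): centroid, principal-axis angle and a
    # scale proxy (sqrt of the largest eigenvalue). Illustrative sketch only.
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n
    # Closed-form eigen-analysis of the 2x2 symmetric covariance matrix:
    # principal-axis angle and largest eigenvalue.
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    lam_max = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    return cx, cy, angle, math.sqrt(lam_max)
```

A mask elongated along the x-axis yields an angle of 0, and one elongated along the diagonal yields π/4; comparing the angles and spreads of two masks gives the relative rotation and scaling.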
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080380
In this paper, we propose a vector quantisation (VQ)-based video retrieval algorithm using global motion features. The contribution includes two points: first, we design a VQ-based algorithm for eliminating singular points in the motion vector space; second, we utilise the global motion vector index histogram over all frames of the query video clip to match those of all video clips in the database. Matching histograms instead of the original motion vectors reduces the computational complexity and thus assures a real-time video retrieval process. The experimental results show that the proposed algorithm can effectively extract the statistical characteristics of the global motion features. In addition, the computational complexity is low while the retrieval accuracy is high.

Dense optical flow from multiple sparse candidate flows using two pass dynamic programming
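Histogram matching as described above can use, for example, the normalized histogram intersection; the paper does not specify this particular distance, so treat the sketch as illustrative.

```python
def histogram_similarity(h1, h2):
    # Normalized histogram intersection between two index histograms of equal
    # length (both assumed non-empty with positive totals). Returns a value
    # in [0, 1]: 1.0 for identical shapes, 0.0 for disjoint support.
    n1, n2 = sum(h1), sum(h2)
    return sum(min(a / n1, b / n2) for a, b in zip(h1, h2))
```

Ranking database clips by this score against the query clip's motion-index histogram is a single O(bins) pass per clip, which is what makes histogram-level matching real-time.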
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080309
Optical flow forms an important initial processing stage for many machine vision tasks. A framework is presented for the recovery of dense optical flows from image sequences containing large motions. Sparse feature correspondences are used to assign multiple candidate optical flows to each image pixel. This set of flows is then augmented with additional perturbed flows to allow for non-rigid motions. An energy functional comprising a matching term and a smoothness term is then minimized using a two-pass dynamic programming algorithm to produce a final smooth optical flow field. The proposed algorithm shows a clear increase in recovered optical flow accuracy when compared to a hierarchical approach and a brute-force block matching approach of similar computational complexity.

Gait recognition based on combined invariant moment
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080774
Gait-based human identification is a biometric recognition technology that has arisen in recent years. Applying Zernike moments to gait recognition and making use of the local analysis ability of wavelet moments, a gait recognition method based on combined invariant moments is proposed, combining wavelet moments with Zernike moments as the recognition features. To limit the computational complexity caused by the increased dimension of the feature vector, an improved BP neural network is applied for recognition, ensuring effective classification while reducing computation. The experimental results show that gait recognition based on combined invariant moments achieves a higher recognition rate than methods based on a single invariant moment.

Using direct interconnection networks for load-balanced architecture
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080843
Load-balanced architectures appear to be a promising way to scale the Internet to very high capacities because of their O(1) scheduling complexity. However, traditional load-balanced architectures are based on crossbar switches of implementation complexity N^2, where N is the port number. This high implementation complexity and the centralized structure of the crossbar render these traditional architectures unscalable. In this paper we consider constructing a load-balanced architecture based on a direct interconnection network to solve these two problems while maintaining the O(1) scheduling complexity. We propose the scalable direct interconnection network load-balanced (SDINL) architecture for extra-high-capacity routers. We then consider the impact of topology properties and different switch fabrics on the linecard on the corresponding load-balanced architectures. Finally, we provide a paradigm named the PLB architecture, based on the novel unidirectional direct interconnection topology we previously proposed, P2i.

Fast intra prediction mode decision for H.264/AVC based on edge direction detection
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080358
The rate-distortion optimization (RDO) technique is employed in H.264/AVC intra-frame coding to achieve the best rate-distortion performance, but its computational complexity is very high because the current block is encoded by exhaustively searching all possible modes to select the best one. To reduce the computational complexity, we propose a new fast algorithm based on edge direction detection (EDD), which accurately detects the edge direction of different blocks in order to reduce the number of candidate modes for further RDO calculation. The simulation results show that the proposed algorithm contributes a bit-rate reduction compared to previous algorithms and saves around 62% of the encoding time with negligible PSNR degradation.

A novel clustering mechanism for cluster-tree networks
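The edge-direction test that prunes candidate intra modes can be sketched with simple gradient sums; this is a toy two-way classifier — H.264/AVC defines more intra directions than sketched here, and the 2× threshold is an assumption.

```python
def dominant_edge_direction(block):
    # block: 2-D list of luma samples. Sum absolute horizontal and vertical
    # pixel differences; a strong horizontal intensity change (large gx)
    # indicates a vertical edge, and vice versa.
    gx = sum(abs(block[y][x + 1] - block[y][x])
             for y in range(len(block)) for x in range(len(block[0]) - 1))
    gy = sum(abs(block[y + 1][x] - block[y][x])
             for y in range(len(block) - 1) for x in range(len(block[0])))
    if gx > 2 * gy:
        return 'vertical'
    if gy > 2 * gx:
        return 'horizontal'
    return 'none'   # no dominant direction: keep all candidate modes
```

A fast mode decision would then run full RDO only on the intra modes aligned with the detected direction, which is where the encoding-time saving comes from.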
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080943
In this paper, we analyze current clustering algorithms and introduce the cluster-tree network model; then, benefiting from the good organization of cluster-tree networks, we propose a novel clustering mechanism that conceptually separates the whole network into clusters according to several formulas. In this way we not only gain the advantages of cluster routing but also avoid the computation and memory usage needed to form and maintain clusters. We also exploit the proposed clustering mechanism in the well-known AODV routing. Simulation results show that our clustering mechanism largely reduces routing overhead and average end-to-end delay, and performs better in terms of packet delivery ratio than AODV routing without logical clusters for cluster-tree networks, which will make real-time and dense mesh networks more efficient and scalable.

A low complexity encoding scheme for coarse grain scalable video coding
http://dl-live.theiet.org/content/conferences/10.1049/cp_20080412
In this paper, a low complexity encoding scheme for coarse grain scalability (CGS) in scalable video coding is proposed. Coarse grain scalability is a kind of SNR quality scalability in scalable video coding. It utilizes a coding scheme similar to spatial scalability, using inter-layer prediction, and has a base layer and several enhancement layers. While CGS supports video at different quality levels, the computational complexity also increases dramatically as the number of quality levels increases. The proposed method exploits the statistics of the residuals between current and reference blocks, computed using the macroblock mode predicted from the previous quality layer. Since the quality of consecutive layers differs significantly, the block mode from the previous layer cannot be utilized directly. To test whether a mode is efficient in the current layer, statistical hypothesis testing on the variances of the residual sub-blocks is performed. If the variance tests for the residual sub-blocks are accepted, the mode from the previous layer is regarded as optimal in the current layer; otherwise, another mode is recommended according to the result of the statistical test. The proposed method reduces the total encoding time by up to 51% when three CGS layers are encoded, while the quality degradation and bit-rate increase of each layer are negligible.

A modified iterative equalization scheme based on precoding
http://dl-live.theiet.org/content/conferences/10.1049/cp_20081014
This paper proposes a modified iterative equalization (MIE) scheme based on precoding. First, we analyze and compare the performance of the MIE scheme built on the improved Max-Log-MAP (I-MLM) algorithm, called the MIE-I-MLM scheme, with that of two traditional equalization algorithms in the MIE scheme. Second, based on this understanding of the performance of precoding and the MIE-I-MLM scheme, we apply precoding to MIE-I-MLM and find that precoded MIE-I-MLM clearly outperforms the non-precoded version, and at high SNR can even exceed the bound of an ISI-free channel, whereas at low SNR its performance falls below that of the non-precoded version. Therefore, we propose a novel scheme that achieves better performance by mixing the precoded and non-precoded schemes. Both analytical and simulation results demonstrate that the proposed scheme achieves better performance without any increase in computational complexity.WP4PI: a proof system based on the weakest preconditions for the applied π-calculus
http://dl-live.theiet.org/content/conferences/10.1049/cp_20070750
The π-calculus provides a simple but powerful formal basis for the specification and verification of parallel and distributed systems with evolving structures. The applied π-calculus is a general extension of the π-calculus with value passing, primitive functions and equations among terms. In this paper, we present WP4PI, a proof system based on weakest preconditions for the applied π-calculus. An algorithm is developed to compute the fixpoint of the weakest precondition for a recursive process, and a balanced binary tree structure is used to deal with the complexity introduced by the conditional construct of the applied π-calculus. Primitive functions in the applied π-calculus are handled by equipping the proof system with an extensible equation sub-system. Based on weakest preconditions and Hoare logic, WP4PI is a target-oriented proof system and captures the term substitution property of the applied π-calculus.Efficient tri-ary search tree based packet classification algorithm
http://dl-live.theiet.org/content/conferences/10.1049/cp_20070278
With the rapid development of wireless network technology, Internet traffic has been growing quickly. Meanwhile, ISPs have to provide more value-added services. Therefore, efficient packet classification algorithms are in high demand. In this paper, a novel tri-ary search tree for multidimensional classifiers is developed. The data structure, the search strategy and ideas for its improvement are presented. This technique avoids memory blowup, and its search speed is reasonable compared with other related schemes. Its memory complexity and search complexity are O(WN) and O(log₃N) respectively, where W and N represent the width of the multi-dimensional field and the number of rules. The experiments provide evidence that it has outstanding performance.Reducing complexity and memory accesses in motion compensated interpolation in video codecs
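As an illustration of the O(log₃N) search behaviour reported above (and not a reconstruction of the paper's actual tree), a generic three-way divide-and-conquer lookup over sorted keys looks like this:

```python
def ternary_search(sorted_keys, target):
    """Three-way search over a sorted list: each step shrinks the
    candidate range to roughly one third, giving O(log3 N) comparisons.
    Returns the index of target, or -1 if absent."""
    lo, hi = 0, len(sorted_keys) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third  # two probe points
        if sorted_keys[m1] == target:
            return m1
        if sorted_keys[m2] == target:
            return m2
        if target < sorted_keys[m1]:
            hi = m1 - 1          # target in the left third
        elif target > sorted_keys[m2]:
            lo = m2 + 1          # target in the right third
        else:
            lo, hi = m1 + 1, m2 - 1  # target in the middle third
    return -1
```

The base-3 logarithm only changes the constant factor relative to binary search, which is consistent with the abstract's claim of "reasonable" rather than dramatically faster search speed.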
http://dl-live.theiet.org/content/conferences/10.1049/cp_20070703
Motion compensation is the main computational bottleneck in real-time, high quality video decoding applications. The interpolation unit is one of the most time-consuming parts of motion compensation. In this paper, an interpolation method is presented that reduces computational complexity and memory accesses, which are the main obstacles to achieving a real-time video codec on a power-constrained mobile platform. Although the coding efficiency of the proposed method is degraded by 16.9% and 10.2% for standard test sequences, the complexity of the entire decoder improves by 24%, and the proposed motion compensation function is 60% less complex than the standard H.264/AVC method.A multi-agent ant colony optimization algorithm for earliness/tardiness scheduling with different due window on non-uniform parallel machines
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060730
Earliness/tardiness job-shop scheduling problems, which play a very important role in the field of job-shop scheduling, are typically NP-hard, and classical methods for solving them usually incur exponential computational complexity. Moreover, most previous work has concentrated on earliness/tardiness problems with a common due window on a single machine. To solve the more general earliness/tardiness job-shop scheduling problems with distinct due windows on non-uniform machines, a novel algorithm named MAACO (multi-agent ant colony optimization), which is more efficient and effective than classical methods, is presented in this paper, together with a detailed mathematical model of the problem. The presented algorithm introduces competition-cooperation and self-study mechanisms into the behaviour of the agent ants, which greatly improves the convergence rate and optimization precision of ant colony optimization (ACO). Simulation experiments are performed at different problem scales. The results show that MAACO is very efficient and effective in obtaining near-optimal solutions to earliness/tardiness job-shop scheduling problems, especially when the problem scale is very large.An algorithm of shortening idle time for a kind of special assembly problem
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060792
Motivated by actual assembly conditions, an improved assembly algorithm is presented for a special class of problems in which the operations can be separated into dependent operations, each having exactly one immediate predecessor and one immediate successor, and independent operations, which have no predecessor or successor. Exploiting the characteristics of the independent operations, the algorithm sorts them by size and sorts the idle times by size. It then compares the two sequences, lengthens an idle time if it does not fit the independent operation, and inserts the independent operation into the corresponding idle time. This algorithm can reduce the total idle time of the machines and keep the total assembly time below the total assembly time of the operations on the critical path. Experiments show that this algorithm not only has good complexity but can also shorten the total assembly time. Because the assembly operation tree can be decomposed into sub-trees recursively, the algorithm can be applied recursively to common assembly problems and can shorten the total assembly time for common jobs.Modeling of manufacturing information system based on complexity science
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060765
A manufacturing information system (MIS) has the properties of a complex system. To analyze, plan and optimally design an MIS, a new modeling approach based on the flexible coupling automata of complexity science is presented. First, the complex characteristics of MIS are investigated. Second, a model system of MIS based on complexity science is built. Third, mutual interaction between the units, based on two-way automata, is constructed, which is suitable for communication between the components of an MIS. Furthermore, an ontology is employed to describe the attributes of the components in order to eliminate differences in the information held by different components. The proposed modeling approach not only makes the analysis, design and computer implementation of MIS convenient, but can also be applied to the modeling of other complex systems.Complexity scalable 3D video coding based on mixed transform technique
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060530
Mobile devices performing video coding and streaming over wireless communication networks have limited battery power. To prolong the operational lifetime of these devices, a complexity scalable embedded video coding scheme is desirable. We present a framework for the systematic analysis of the computational complexity of a 3D video encoding scheme in terms of average processor cycles. A mixed transform based 3D video codec with no motion estimation/compensation is proposed. A parametric approach is used to control the computational complexity of this 3D video coding scheme by defining a set of complexity control parameters. By varying these complexity control parameters, we generate a wide range of PSNR-rate-complexity operating points for different sequences.Power-law correlation in human EEG at various anaesthesia depths
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060377
The estimation of anaesthesia depth has been of great interest in recent decades. In this paper we present a new methodology to quantify the depth of anaesthesia by quantifying the power-law correlations of the EEG signal. Useful information about the nonlinear dynamics of the brain during anaesthesia is extracted via an optimum fractal scaling exponent. This optimum is based on the range of box sizes in the detrended fluctuation analysis (DFA) algorithm that shows meaningful changes at different depths of anaesthesia. The experimental results confirm that the new algorithm, applied to the raw EEG data, can clearly discriminate between awake, moderate and deep anaesthesia levels and bears a robust relation to the well-known depth-of-anaesthesia index (BIS). Moreover, it significantly reduces the computational complexity and reacts faster to transients in the patients' consciousness levels. (4 pages)An algorithm of JSSP with dynamic collection of job with priority
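The core DFA computation referred to above can be sketched as follows; the choice of box sizes and the pure-Python least-squares detrending are illustrative, and the box-size "domain" selection the paper optimizes is not reproduced here:

```python
import math

def dfa_fluctuation(signal, box_size):
    """RMS fluctuation of the integrated, linearly detrended signal
    for one box size (the core step of DFA)."""
    mean = sum(signal) / len(signal)
    profile, s = [], 0.0
    for x in signal:                 # integrate the mean-subtracted signal
        s += x - mean
        profile.append(s)
    n_boxes = len(profile) // box_size
    sq_sum, count = 0.0, 0
    for b in range(n_boxes):
        seg = profile[b * box_size:(b + 1) * box_size]
        n = len(seg)
        xs = list(range(n))
        xbar, ybar = (n - 1) / 2.0, sum(seg) / n
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, seg))
        sxx = sum((x - xbar) ** 2 for x in xs)
        slope = sxy / sxx            # least-squares linear trend in the box
        for x, y in zip(xs, seg):
            trend = ybar + slope * (x - xbar)
            sq_sum += (y - trend) ** 2
            count += 1
    return math.sqrt(sq_sum / count)

def dfa_exponent(signal, box_sizes):
    """Scaling exponent alpha: slope of log F(n) vs log n over the
    chosen range of box sizes."""
    pts = [(math.log(n), math.log(dfa_fluctuation(signal, n)))
           for n in box_sizes]
    xbar = sum(p[0] for p in pts) / len(pts)
    ybar = sum(p[1] for p in pts) / len(pts)
    num = sum((x - xbar) * (y - ybar) for x, y in pts)
    den = sum((x - xbar) ** 2 for x, _ in pts)
    return num / den
```

For uncorrelated noise the exponent is close to 0.5, while long-range correlated signals such as EEG under anaesthesia deviate from it, which is what makes the exponent usable as a depth index.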
http://dl-live.theiet.org/content/conferences/10.1049/cp_20060793
In this article a new algorithm for the job-shop scheduling problem (JSSP) is proposed. In the algorithm, a manufacturing tree is constructed on the basis of the JSSP, and the priorities of the nodes are set according to their levels. Apart from cases where scheduled operations need dynamic adjustment, one principle is followed from beginning to end: keep the machines busy, i.e. jobs are dispatched to a machine continuously as long as the machine is idle. During scheduling, an operation collection is generated dynamically according to the tree; operations in the collection are then scheduled according to their priority and other scheduling strategies (the short-time, long-path and dynamic-adjustment strategies) until the collection is empty, which means the entire job is finished. It is validated that the proposed algorithm obtains better results for the job-shop scheduling problem.A novel multidimensional IP packet algorithm
http://dl-live.theiet.org/content/conferences/10.1049/cp_20061535
Since conflicts exist in the rule database, a conflict-free rule database is created first. Then, based on the tuple space search (TSS) algorithm and a non-collision hash function, a novel IP packet classification algorithm named NCHTSS (non-collision hash TSS) is proposed. NCHTSS strengthens the scalability of TSS and allows TSS to be applied to multidimensional packet classification more easily. As can be seen from the simulation results, NCHTSS has better time complexity than the modular algorithm and much better overall performance; its only disadvantage lies in memory cost. (4 pages)Constrained list Viterbi algorithm and its application in image transmission via noisy channels
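The tuple space search idea underlying NCHTSS can be sketched as follows; the two-field rule format and the use of Python's built-in dict (standing in for the paper's non-collision hash function) are assumptions for illustration:

```python
def _mask(addr, plen, width=32):
    """Keep the top plen bits of a 32-bit address."""
    return addr >> (width - plen) if plen else 0

def build_tuple_space(rules):
    """Tuple space search sketch: each rule is
    (src, src_len, dst, dst_len, action); rules sharing the same pair
    of prefix lengths form one 'tuple', stored as a hash table keyed
    by the masked prefixes."""
    space = {}
    for src, sl, dst, dl, action in rules:
        table = space.setdefault((sl, dl), {})
        table[(_mask(src, sl), _mask(dst, dl))] = action
    return space

def classify(space, src, dst):
    """Probe every tuple once with the masked packet header;
    return all matching actions."""
    matches = []
    for (sl, dl), table in space.items():
        key = (_mask(src, sl), _mask(dst, dl))
        if key in table:
            matches.append(table[key])
    return matches
```

The number of hash probes is bounded by the number of distinct prefix-length tuples rather than the number of rules, which is where TSS gets its scalability; replacing the generic hash with a collision-free one removes probe chains at the cost of extra memory, matching the trade-off the abstract reports.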
http://dl-live.theiet.org/content/conferences/10.1049/cp_20061471
The Viterbi algorithm (VA) is a maximum likelihood (ML) decoder for convolutionally encoded data. A more advanced decoding method is the list Viterbi algorithm (LVA), which produces a list of the L best output sequences over a certain block length when decoding a terminated convolutional code. The constrained Viterbi algorithm (C-VA) is a method proposed to make full use of the information in known, correctly decoded bits. In this paper, the advantages of these two methods are jointly exploited for image transmission. Simulation results show that the decoding complexity can be reduced dramatically and the performance improved greatly by using the constrained list Viterbi algorithm (CLVA). (4 pages)A novel MB selection method for adaptive intra refresh in H.264/AVC video coding standard
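As background for the VA/LVA discussion above, a minimal hard-decision Viterbi decoder for a terminated rate-1/2, constraint-length-3 convolutional code (generators (7,5) in octal, an assumed example code rather than the one used in the paper) can be sketched as:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder; the 2-bit state holds the two
    most recent input bits, and two tail zeros terminate the trellis."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state
        out.append(bin(reg & g[0]).count('1') % 2)
        out.append(bin(reg & g[1]).count('1') % 2)
        state = (reg >> 1) & 0b11
    return out

def viterbi_decode(symbols, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoder for the encoder above, with the
    Hamming distance as branch metric (the basic VA; the list and
    constrained variants build on this same trellis search)."""
    INF, n_states = float('inf'), 4
    metric = [0.0] + [INF] * (n_states - 1)  # trellis starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(symbols), 2):
        rx = symbols[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g[0]).count('1') % 2,
                       bin(reg & g[1]).count('1') % 2]
                ns = (reg >> 1) & 0b11
                m = metric[s] + sum(a != c for a, c in zip(rx, exp))
                if m < new_metric[ns]:          # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-2]  # drop the two tail bits
```

An LVA would retain the L best survivors per state instead of one, and a constrained decoder would prune branches that contradict the known correct bits; both reuse this add-compare-select core.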
http://dl-live.theiet.org/content/conferences/10.1049/cp_20050110
In this paper, we investigate the H.264/AVC codec from two angles. The first is the complexity of the reference H.264/AVC encoder. The second is the error resilience of the encoder using adaptive intra refresh (AIR). We propose a novel intra macroblock (MB) selection method to improve the error robustness of the H.264 codec. The proposed algorithm outperforms the one in the reference H.264/AVC codec without any increase in computational complexity.Complexity reduction of H.264 using Lagrange optimization methods
http://dl-live.theiet.org/content/conferences/10.1049/cp_20050115
A complexity reduction algorithm for an H.264 encoder is proposed. Computational savings are achieved by identifying, prior to motion estimation, macroblocks that are likely to be skipped, and hence avoiding further computational processing of these macroblocks. This early prediction is made by estimating a Lagrangian rate-distortion cost function which incorporates an adaptive model for the Lagrange multiplier parameter based on local sequence statistics. Simulation results demonstrate that the algorithm can achieve computational savings of 19-65% (depending on the source sequence) with no significant loss of rate-distortion performance.Macroblock skip-mode prediction for complexity control of video encoders
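The early skip decision can be sketched with the standard Lagrangian cost J = D + λR; the cost inputs and the fixed multiplier below are hypothetical simplifications of the paper's adaptive model:

```python
def rd_cost(distortion, rate_bits, lmbda):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lmbda * rate_bits

def predict_skip(mb_sad, skip_rate_bits, coded_distortion,
                 coded_rate_bits, lmbda):
    """Early skip prediction sketch: before motion estimation, compare
    the estimated cost of skipping the macroblock (distortion = SAD
    against the co-located block, near-zero rate) with an estimated
    cost of coding it. All inputs are illustrative estimates."""
    j_skip = rd_cost(mb_sad, skip_rate_bits, lmbda)
    j_code = rd_cost(coded_distortion, coded_rate_bits, lmbda)
    return j_skip <= j_code
```

If the skip cost wins, motion estimation and transform coding for that macroblock are bypassed entirely, which is where the reported 19-65% saving comes from; the paper's contribution is adapting λ to local sequence statistics rather than fixing it.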
http://dl-live.theiet.org/content/conferences/10.1049/cp_20030473
We propose a macroblock skip-mode prediction algorithm to reduce the computational effort of video encoders. The algorithm classifies each macroblock as "skipped" or "not skipped" by estimating the energy of low-frequency quantized coefficients prior to coding, making it possible to significantly reduce computation by not coding these skipped macroblocks. Results show that the algorithm can achieve substantial computational savings with only a small degradation in rate-distortion performance.Motion field interpolation for improved motion compensation and frame-rate conversion
http://dl-live.theiet.org/content/conferences/10.1049/cp_20030476
Compared to block-based motion fields, pel-based (or dense) motion fields have been shown to provide improved quality in many applications. However, this improved performance is achieved at the expense of a significant increase in both computational complexity and motion overhead. In this paper, motion field interpolation (MFI) is proposed as a method of converting a given block motion field to a dense motion field. The proposed method involves a negligible increase in computational complexity and no increase in motion overhead. The effectiveness of the proposed method is demonstrated in two applications: motion-compensated prediction and motion-compensated frame-rate conversion.Transcoder system with high processing efficiency to support simultaneous multirate output
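One plausible form of motion field interpolation, bilinear weighting between block-centre motion vectors, can be sketched as follows; this is an assumed kernel for illustration, not necessarily the paper's exact method:

```python
import math

def dense_motion_field(block_mvs, block_size):
    """MFI sketch: treat each block motion vector as the motion at the
    block centre and bilinearly interpolate between the four nearest
    centres to obtain a per-pixel (dense) motion field. Edge pixels
    clamp to the nearest block centre."""
    rows, cols = len(block_mvs), len(block_mvs[0])
    h, w = rows * block_size, cols * block_size
    half = block_size / 2.0

    def mv_at(br, bc):
        br = min(max(br, 0), rows - 1)
        bc = min(max(bc, 0), cols - 1)
        return block_mvs[br][bc]

    field = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # pixel position relative to the grid of block centres
            fy = (y - half + 0.5) / block_size
            fx = (x - half + 0.5) / block_size
            r0, c0 = int(math.floor(fy)), int(math.floor(fx))
            wy, wx = fy - r0, fx - c0
            acc = [0.0, 0.0]
            for dr, dc, wgt in ((0, 0, (1 - wy) * (1 - wx)),
                                (0, 1, (1 - wy) * wx),
                                (1, 0, wy * (1 - wx)),
                                (1, 1, wy * wx)):
                mv = mv_at(r0 + dr, c0 + dc)
                acc[0] += wgt * mv[0]
                acc[1] += wgt * mv[1]
            field[y][x] = (acc[0], acc[1])
    return field
```

Since the dense field is derived entirely from the transmitted block vectors, no extra motion overhead is sent, consistent with the abstract's claim; only the per-pixel interpolation adds (small) decoder-side computation.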
http://dl-live.theiet.org/content/conferences/10.1049/cp_20030534
In this paper, we propose an efficient transcoder architecture that supports simultaneous multirate output. First, we discuss several architectures for realizing this feature. Next, we explain the proposed architecture, which shares not only the VLD/IQ stages but also the Q/VLC stages that use identical quantization step sizes. We analyze the number of Q/VLC operations per macroblock to investigate the effect of this sharing, and evaluate its computational complexity. The simulations show that the complexity has an upper bound and that any number of output bitstreams can be supported at only 3-6 times the complexity of a single output.Genetic algorithm based maximum likelihood DOA estimation
http://dl-live.theiet.org/content/conferences/10.1049/cp_20020337
The maximum likelihood (ML) direction-of-arrival (DOA) estimation method was one of the first to be investigated. For a long time, the complexity and computational load of maximizing the multivariable, highly nonlinear likelihood function prevented it from becoming popular. We present a genetic algorithm (GA) for computing exact solutions to the likelihood function with a near-guarantee of global convergence. The performance of the GA-based ML and multiple signal classification (MUSIC) algorithms is compared for a variety of scenarios in terms of SNR, DOA separation, number of snapshots, and computational cost. The relationship between the ML technique and MUSIC is also investigated.Improvements of the SPIHT for image coding by wavelet transform
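A minimal real-coded genetic algorithm of the kind used for such likelihood maximization can be sketched as follows; the toy single-angle objective in the usage example stands in for the actual multivariable ML DOA likelihood, and all parameter values are illustrative:

```python
import math, random

def ga_maximize(objective, lo, hi, pop_size=40, generations=60,
                sigma=2.0, seed=0):
    """Minimal real-coded GA: tournament selection, arithmetic
    crossover, Gaussian mutation, and elitism. The objective is any
    function of one real variable on [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=objective)
    for _ in range(generations):
        new_pop = [best]  # elitism: always keep the best individual
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=objective)
            p2 = max(rng.sample(pop, 3), key=objective)
            a = rng.random()                    # arithmetic crossover
            child = a * p1 + (1 - a) * p2
            child += rng.gauss(0.0, sigma)      # Gaussian mutation
            new_pop.append(min(max(child, lo), hi))
        pop = new_pop
        best = max(pop, key=objective)
    return best
```

For example, with a hypothetical likelihood surrogate peaking at 25 degrees, `ga_maximize(lambda t: math.cos(math.radians(t - 25.0)), -90.0, 90.0)` converges close to 25; the real ML DOA problem would optimize a vector of angles against the array likelihood instead.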
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000573
The SPIHT (set partitioning in hierarchical trees) is an efficient image coding method using the wavelet transform. Two improvements of the SPIHT are presented in this paper. One is to use a new type of tree, to hold as many wavelet coefficients as possible during initialization. The other is to omit the predictable coding symbols for the significance indication of the coefficient sets or individual coefficients. The first improvement increases the compression ratio of low bit-rate image coding, while the second favours relatively high bit-rate image coding. The implementation of the SPIHT is re-designed to include the two improvements. The computational complexity is not increased by these improvements. Simulation results show significant performance gains from the improvements. (5 pages)Handling complexity in object based modeling and simulation
http://dl-live.theiet.org/content/conferences/10.1049/ic_20000226
The issue of complexity in modeling and simulation arises from the desire to understand fully the behaviour of physical systems at the time design and acquisition decisions are being taken. This avoids the need for extensive experimentation, testing and product development, increasing confidence and reducing time to market. The article describes the development of causally consistent models of different complexity and shows how they may be applied to the simulation and animation of an electrical drive system. Functional objects have a physical form which may be modelled as a three dimensional solid object having sets of functional axes that define the spatial positioning of component interaction. Object interaction is not limited to energetic interaction but may be in relation to any aspect of its context. The object may interact with respect to mass, reliability, failure modes, cost, etc., which may be included in economic and environmental aspects of the models. (4 pages)