IEE Proceedings - Computers and Digital Techniques
Volume 149, Issue 3, May 2002
- Author(s): S.-K. Oh ; W. Pedrycz ; H.-S. Park
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 61–78
- DOI: 10.1049/ip-cdt:20020411
- Type: Article
Experimental software datasets describing software projects in terms of their complexity and development time have been the subject of intensive modelling, and a variety of modelling methodologies and designs have been proposed, including neural networks and fuzzy models. The authors introduce self-organising networks (SONs) that result from a synergy between fuzzy inference schemes and polynomial neural networks (PNNs). The latter incorporate an efficient scheme for selecting the input variables of the model, based on the group method of data handling (GMDH) algorithm. The authors discuss the detailed architecture of the SON and propose a comprehensive learning algorithm. It is shown that this network exhibits a dynamic structure, as neither the number of its layers nor the number of nodes in each layer is predetermined (as is the case in the popular multilayer perceptron topology). The experiments use well-known software datasets, including one describing software modules of a medical imaging system (MIS) and a NASA dataset concerning software cost estimation, and reveal that the proposed model exhibits high accuracy.
- Author(s): A.C.M. Fong and G.R. Higgie
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 79–81
- DOI: 10.1049/ip-cdt:20020403
- Type: Article
First proposed in 1984, T-codes are a class of variable-length codes that exhibit an exceptional tendency towards self-synchronisation. A number of industrial applications have been reported, ranging from moving-picture images to boundary markers, and several attempts have been made to quantify the synchronisation performance of different T-codes. The first complete analytical method for calculating the average synchronisation delay of T-codes was published in 1996 and refined in 1998; however, its computational efficiency is not optimal, notably when suffix conditions are encountered during the decoding process. The authors present a significant improvement on that algorithm. The new method reduces the average time required per code set, producing average synchronisation delay values in less than one quarter of the time the original method needs to generate comparable results. Consequently, higher-degree code sets, which have wide-ranging practical applications, can have their synchronisation performance analysed and compared.
- Author(s): A. Jabir and J. Saul
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 82–96
- DOI: 10.1049/ip-cdt:20020412
- Type: Article
The AND-OR-EXOR form of a logic function comprises a pair of sum-of-products expressions (groups) connected by a single EXOR operator, such that the result realises the function. This paper presents several fast AND-OR-EXOR optimisation algorithms based on the authors' previous technique for AND-OR-EXOR minimisation. Benchmark experiments show that the new technique produces results faster than the previous one for most benchmarks, with little loss of quality. Some functions have a good AND-OR-EXOR representation, while others are better represented in the AND-OR-EXNOR or the mixed AND-OR-EXOR/AND-OR-EXNOR form. With this in mind, a new heuristic minimisation algorithm for mixed AND-OR-EXOR/AND-OR-EXNOR forms is presented, with a fast phase-computation technique applied at several stages of the minimiser. The algorithm has been tested on many benchmarks; it produces substantially better results than previously reported techniques for the majority of them, and also performs very well on adders.
- Author(s): D.H. Green and J. Choi
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 97–101
- DOI: 10.1049/ip-cdt:20020404
- Type: Article
Legendre sequences are a well-known class of binary sequences that possess good periodic and aperiodic autocorrelation functions. They are also known to exhibit high linear complexity, which makes them significant for cryptographic applications. Jacobi and modified Jacobi sequences are constructed by combining two appropriate Legendre sequences and also have good correlation properties; this class contains the twin-prime sequences as a special case. The authors report the results of subjecting a wide range of modified Jacobi sequences to the Berlekamp–Massey algorithm in order to establish their linear complexities. The results confirm that some members of this class also have high linear complexity, and display sufficient structure to enable the general form of the linear complexity and the corresponding generator polynomials to be conjectured.
- Author(s): M. Cowlishaw
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 102–104
- DOI: 10.1049/ip-cdt:20020407
- Type: Article
Chen–Ho encoding is a lossless compression of three binary-coded decimal digits into 10 bits, using an algorithm that can be applied or reversed with only simple Boolean operations. An improvement to the encoding is described that has the same advantages but is not limited to multiples of three digits. The new encoding allows arbitrary-length decimal numbers to be coded efficiently while keeping decimal digit boundaries accessible. This in turn permits efficient decimal arithmetic and makes the best use of available resources such as storage or hardware registers.
- Author(s): B. Gupta ; S.K. Banerjee ; B. Liu
- Source: IEE Proceedings - Computers and Digital Techniques, Volume 149, Issue 3, pp. 105–112
- DOI: 10.1049/ip-cdt:20020410
- Type: Article
A new roll-forward checkpointing scheme is proposed using basic checkpoints. The direct-dependency concept used in communication-induced checkpointing is applied to basic checkpoints to design a simple algorithm for finding a consistent global checkpoint. Both blocking (where the application processes are suspended during execution of the algorithm) and non-blocking approaches are presented. The use of forced checkpoints ensures a small re-execution time after recovery from a failure. The proposed approaches enjoy the main advantages of both the synchronous and asynchronous approaches, i.e. simple recovery and a simple way to create checkpoints. Moreover, in the proposed blocking approach, the direct-dependency concept is implemented without piggybacking any extra information on application messages. A very simple scheme for avoiding the creation of useless checkpoints is also proposed.
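The GMDH-style input selection underlying the self-organising networks described above (pp. 61–78) can be sketched generically: candidate partial models are fitted on pairs of input variables, ranked by error on a validation split, and only the best survive into the next layer. The partial-model form, the synthetic data, and all names below are illustrative assumptions, not the authors' SON design.

```python
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def fit_pair(xs, ys, i, j):
    """Least-squares fit of the partial model y ~ a + b*xi + c*xj + d*xi*xj."""
    rows = [[1.0, x[i], x[j], x[i] * x[j]] for x in xs]
    # normal equations: (R^T R) w = R^T y
    rtr = [[sum(r[p] * r[q] for r in rows) for q in range(4)] for p in range(4)]
    rty = [sum(r[p] * y for r, y in zip(rows, ys)) for p in range(4)]
    return solve(rtr, rty)

def gmdh_layer(xs_tr, ys_tr, xs_val, ys_val, n_vars):
    """One GMDH selection step: fit every variable pair, rank by
    validation RMSE, return (error, best pair, weights)."""
    best = None
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            w = fit_pair(xs_tr, ys_tr, i, j)
            err = (sum((w[0] + w[1]*x[i] + w[2]*x[j] + w[3]*x[i]*x[j] - y) ** 2
                       for x, y in zip(xs_val, ys_val)) / len(ys_val)) ** 0.5
            if best is None or err < best[0]:
                best = (err, (i, j), w)
    return best

# Synthetic data: y depends on x0 and x1 only, so the selection step
# should pick the pair (0, 1) and fit it (almost) exactly.
random.seed(1)
data = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(40)]
target = [x[0] * x[1] for x in data]
err, pair, _ = gmdh_layer(data[:30], target[:30], data[30:], target[30:], 3)
assert pair == (0, 1) and err < 1e-9
```

Stacking such layers, with the outputs of the surviving pair models as the next layer's inputs, is what lets the network's depth and width emerge from the data rather than being fixed in advance.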
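The average synchronisation delay analysed in the T-codes paper above (pp. 79–81) can also be estimated naively by simulation: decode from an arbitrary bit offset and count the bits consumed until the decoder first lands on a true codeword boundary (after which a prefix decoder stays in lock-step). The code set {1, 00, 01} used here is, as I recall the construction, the simple T-augmentation of the binary alphabet with prefix 0; this brute-force measurement is an assumption-laden stand-in for the paper's tree algorithm, not a rendition of it.

```python
import random

# Simple T-code set: T-augmentation of {0, 1} with prefix "0" (assumed).
CODE = ["1", "00", "01"]

def sync_delay(bits, start, boundaries):
    """Greedy prefix-decode `bits` from offset `start`; return the number
    of bits consumed until the decoder first hits a true codeword
    boundary, or None if the stream runs out first."""
    pos = start
    while True:
        if pos in boundaries:
            return pos - start
        for w in CODE:
            if bits.startswith(w, pos):
                pos += len(w)
                break
        else:
            return None                # ran out of bits mid-codeword

# Sanity check on a tiny hand-built stream "1" + "00" + "1":
assert sync_delay("1001", 2, {0, 1, 3, 4}) == 2

# Encode a random codeword stream and record the true boundaries.
random.seed(7)
words = [random.choice(CODE) for _ in range(2000)]
bits = "".join(words)
boundaries, p = {0}, 0
for w in words:
    p += len(w)
    boundaries.add(p)

delays = [sync_delay(bits, s, boundaries) for s in range(1000)]
delays = [d for d in delays if d is not None]
avg = sum(delays) / len(delays)
print(f"estimated average synchronisation delay: {avg:.2f} bits")
```

The paper's analytical method computes such averages exactly and far faster; the simulation above only illustrates what is being measured.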
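The AND-OR-EXOR form defined in the minimisation abstract above (pp. 82–96), a pair of sum-of-products expressions joined by a single EXOR, can be illustrated with a small evaluator; the cube representation and the example function are illustrative choices, not the paper's algorithm.

```python
# A cover is a list of cubes; each cube maps a variable index to the
# required literal value (1 = positive literal, 0 = negated literal).

def eval_sop(cover, assignment):
    """OR over cubes; each cube is an AND of literals."""
    return any(all(assignment[v] == lit for v, lit in cube.items())
               for cube in cover)

def eval_and_or_exor(sop1, sop2, assignment):
    """Three-level AND-OR-EXOR form: f = SOP1 ^ SOP2."""
    return eval_sop(sop1, assignment) ^ eval_sop(sop2, assignment)

# Example: x0 ^ x1 written as SOP1 = x0, SOP2 = x1
# (each SOP is a single one-literal cube).
sop1 = [{0: 1}]
sop2 = [{1: 1}]
for x0 in (0, 1):
    for x1 in (0, 1):
        assert eval_and_or_exor(sop1, sop2, {0: x0, 1: x1}) == (x0 ^ x1)
```

Replacing the EXOR with an EXNOR gives the AND-OR-EXNOR form; the paper's minimiser chooses between the two (or mixes them) per function.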
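The linear complexities reported in the Jacobi-sequence study above (pp. 97–101) are obtained with the Berlekamp–Massey algorithm; a standard GF(2) version is sketched below, together with one common convention for generating a Legendre sequence (conventions for bit 0 and the residue mapping vary, so treat that part as an assumption).

```python
def berlekamp_massey_gf2(s):
    """Return the linear complexity of binary sequence s (the length
    of the shortest LFSR that generates it)."""
    n = len(s)
    c, b = [0] * n, [0] * n            # current / previous connection polys
    c[0] = b[0] = 1
    L, m = 0, -1                       # current complexity, last update index
    for i in range(n):
        d = s[i]                       # discrepancy vs the LFSR's prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                          # prediction failed: adjust c
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:             # complexity must grow
                L, m, b = i + 1 - L, i, t
    return L

def legendre_sequence(p):
    """Legendre sequence of an odd prime p, in one common convention:
    bit i is 0 if i is a quadratic residue mod p, else 1; bit 0 is 0."""
    qr = {(i * i) % p for i in range(1, p)}
    return [0] + [0 if i in qr else 1 for i in range(1, p)]

# s[n] = s[n-1] ^ s[n-2] repeats 0,1,1 and has linear complexity 2:
assert berlekamp_massey_gf2([0, 1, 1, 0, 1, 1, 0, 1]) == 2
# Quadratic residues mod 7 are {1, 2, 4}:
assert legendre_sequence(7) == [0, 0, 0, 1, 0, 1, 1]
```

Feeding several periods of a (modified) Jacobi sequence to `berlekamp_massey_gf2` is exactly the kind of experiment the abstract describes.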
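The arithmetic behind the decimal encoding above (pp. 102–104) is that three decimal digits have only 1000 combinations, so 10 bits suffice (2^10 = 1024) versus the 12 bits of plain BCD. The plain binary packing below only demonstrates that saving; it is not the Chen–Ho or densely-packed-decimal bit layout, whose 10-bit patterns are chosen so that each digit stays decodable with simple Boolean logic rather than a full divide.

```python
def pack3(d1, d2, d3):
    """Pack three decimal digits into one 10-bit integer (0..999)."""
    assert all(0 <= d <= 9 for d in (d1, d2, d3))
    return d1 * 100 + d2 * 10 + d3

def unpack3(bits10):
    """Recover the three digits from the 10-bit value."""
    return bits10 // 100, (bits10 // 10) % 10, bits10 % 10

assert pack3(9, 9, 9) < 2 ** 10          # 999 < 1024, so 10 bits suffice
assert unpack3(pack3(2, 5, 7)) == (2, 5, 7)
```

Over a long decimal number the saving compounds: 10 bits per 3 digits instead of 12, while (in the real encodings) digit boundaries remain directly accessible.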
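The notion of a consistent global checkpoint used in the roll-forward abstract above (pp. 105–112) can be illustrated with the standard message-counting condition: a set of local checkpoints is consistent iff no checkpoint records the receipt of a message that its sender's checkpoint has not yet recorded as sent (no "orphan" messages). The data structures here are illustrative, not the authors' algorithm.

```python
# checkpoints[i] = (sent, recv), where sent and recv are dicts mapping
# a peer process id to the number of messages this process had sent to /
# received from that peer when its checkpoint was taken.

def is_consistent(checkpoints):
    """True iff the set of local checkpoints forms a consistent
    global checkpoint (no orphan messages)."""
    for i, (_, recv_i) in checkpoints.items():
        for j, n_recv in recv_i.items():
            sent_j, _ = checkpoints[j]
            if n_recv > sent_j.get(i, 0):
                return False           # orphan message from j to i
    return True

# P0 recorded 1 receipt from P1, and P1 recorded 1 send to P0: consistent.
ok = {0: ({}, {1: 1}), 1: ({0: 1}, {})}
# P0 claims 2 receipts but P1 only recorded 1 send: inconsistent.
bad = {0: ({}, {1: 2}), 1: ({0: 1}, {})}
assert is_consistent(ok) and not is_consistent(bad)
```

A checkpointing algorithm searches for (or forces) local checkpoints satisfying this condition, which is what bounds the re-execution needed after a failure.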
In this issue:
- Self-organising networks in modelling experimental data in software engineering (pp. 61–78)
- Using a tree algorithm to determine the average synchronisation delay of self-synchronising T-codes (pp. 79–81)
- Minimisation algorithm for three-level mixed AND-OR-EXOR/AND-OR-EXNOR representation of Boolean functions (pp. 82–96)
- Linear complexity of modified Jacobi sequences (pp. 97–101)
- Densely packed decimal encoding (pp. 102–104)
- Design of new roll-forward recovery approach for distributed systems (pp. 105–112)