IEE Proceedings - Vision, Image and Signal Processing
Volume 144, Issue 2, April 1997
Block momentum-LMS algorithm based on the method of parallel tangents
- Author(s): O. Tanrıkulu ; J.A. Chambers ; A.G. Constantinides
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 49 –56
- DOI: 10.1049/ip-vis:19971096
- Type: Article
Based on the method of parallel tangents, the block-LMS algorithm is modified and the block momentum-LMS algorithm is proposed. The new algorithm has lower computational complexity than the LMS algorithm and converges significantly faster than the block-LMS algorithm when the input signal is coloured. The time constant, the mean and mean-square convergence conditions and the misadjustment of the proposed algorithm are derived. As a special case, an accurate mean-square convergence condition is obtained for the block-LMS algorithm. Extension to the frequency domain is also discussed. Comprehensive experimental results on system identification and channel equalisation are presented that validate the theoretical findings.

S-class of time–frequency distributions
- Author(s): Lj. Stanković
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 57 –64
- DOI: 10.1049/ip-vis:19970917
- Type: Article
A new general class of distributions (the S-class) for time–frequency signal analysis is proposed. It is derived by generalising the recently defined S-distribution. For each known distribution from the Cohen class it is possible to define an S-counterpart distribution such that some aspects of its performance are improved. The S-class may be treated as a variant of the author's L-class of distributions, but it can satisfy the unbiased energy condition and, in the case of asymptotic signals, the time and frequency marginals. A method for realising the S-distribution is presented which, in the case of multicomponent signals, yields the sum of the S-distributions of the individual components. The theory is illustrated by examples.

Subsyllable-based discriminative segmental Bayesian network for Mandarin speech keyword spotting
- Author(s): C.-H. Wu
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 65 –71
- DOI: 10.1049/ip-vis:19971095
- Type: Article
A continuous Mandarin speech keyword spotting system based on context-dependent subsyllables is presented. In this vocabulary-independent system, users can define their own keywords and most frequently occurring non-keywords without retraining the system. A set of 176 monosyllables and 483 balanced words or sentences is used to extract the context-dependent subsyllables (i.e. initials or finals in Mandarin speech) for training. Each subsyllable is represented by the proposed discriminative segmental Bayesian network (DSBN). In the training process, the generalised probabilistic descent (GPD) algorithm is used for discriminative training. The most frequently occurring non-keywords are divided into keyword predecessors and successors, and separate non-keyword garbage models are constructed for keyword predecessors, keyword successors and extraneous speech. In the recognition process, a final-part preprocessor screens out unreasonable hypotheses to reduce the recognition time. On a test set of 750 conversational speech utterances from 20 speakers (ten male, ten female), a word spotting rate of 92.0% was obtained for a user-defined 20-keyword vocabulary with the vocabulary word embedded in unconstrained extraneous speech.

Image segmentation using a mixture of principal components representation
- Author(s): R.D. Dony and S. Haykin
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 73 –80
- DOI: 10.1049/ip-vis:19971153
- Type: Article
In previous work, the authors presented a new adaptive approach to image compression using a neural-network-based scheme built on a mixture of principal components model for data representation. The classifier used in the adaptation is a linear subspace classifier, which the authors here apply to the problem of segmentation. An important property of this classifier is its insensitivity to the norm of the input vectors; as a result, regions of an image that differ only in illumination are classified identically. When trained on an image, the networks extracted perceptually important features in an entirely self-organising manner. The topological ordering of the classes resulted in like classes being close together, in a manner analogous to the ordering of directionally sensitive columns in the visual cortex. The classification of similar features is consistent across an image quite different from the one used in training, and the segmentation is shown to be independent of variations in illumination.

Low complexity sub-band image coding with pseudo QMF and pyramidal lattice VQ
- Author(s): E. Salari and S. Lin
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 81 –88
- DOI: 10.1049/ip-vis:19971050
- Type: Article
A low-complexity codec with no visible degradation is desirable for the compression of digital images. The authors present a codec that combines the efficiency and effectiveness of both sub-band coding and lattice quantisation. First, a cosine-modulated pseudo-quadrature mirror filter bank is designed to decompose the input image into a number of sub-bands. The lowest-frequency sub-band is DPCM encoded, and all higher-frequency bands are vector quantised using a pyramidal piecewise-uniform lattice quantiser. The proposed method was implemented and computer simulation results are presented.

Wavelet transforms on vector spaces as a method of multispectral image characterisation
- Author(s): G.H. Watson and S.K. Watson
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 89 –97
- DOI: 10.1049/ip-vis:19971116
- Type: Article
A new form of wavelet-based feature extraction has been developed for the multiresolution analysis of multispectral imagery. The wavelet components are vector-valued and can be used to characterise multispectral phenomena such as colour, in addition to brightness, position, scale and orientation. The authors show that various types of multispectral natural background have colour scale invariance, leading to an extension of the concept of self-similarity and enabling fractal models to characterise this type of data. Background self-similarity leads to a wavelet transformation which, for many types of multispectral background, is statistically invariant with respect to its parameters of position, scale and orientation. A norm related to the background distribution has been defined on the wavelet vector space and is used to identify objects of unusual brightness and colour. Wavelet-based feature extraction has been used to identify and characterise artefacts such as vehicles, roads and ship tracks in strongly cluttered electro-optical and infrared multispectral images.

Joint sublattice selection and prefilter design for the optimal decimation of 2-D digital signals
- Author(s): F. Pedersini ; A. Sarti ; S. Tubaro
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 98 –107
- DOI: 10.1049/ip-vis:19970986
- Type: Article
Signal decimation aimed at optimal spectral packing has a variety of applications in areas ranging from array processing to image processing. The authors propose and discuss a new method for determining the decimation grid and prefilter that best fit the spectral extension of any 2-D signal defined on an arbitrary sampling lattice. The method first quantifies the spectral anisotropy by determining the principal axes of the power spectrum; it then selects, among all possible decimation grids, those that are compatible with the spectral extension shaped on the ‘inertia’ ellipse. Finally, for each of these it geometrically constructs the ideal prefilter whose convex passband best encircles the spectral extension. A final selection is then made among the available sublattice/prefilter pairs according to some specific criterion. The method, implemented in a fully automatic computer procedure, has been tested on several digital images to evaluate the impact of the spectral truncation on the overall quality of the reconstructed images.

Updating the Jacobi SVD for nonstationary data
- Author(s): F. Lorenzelli and K. Yao
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 108 –115
- DOI: 10.1049/ip-vis:19970916
- Type: Article
An effective updating algorithm for the singular value decomposition (SVD), based on Jacobi rotations, has recently been proposed (Moonen et al., 1992). The algorithm comprises two basic steps: QR updating and rediagonalisation. The authors are concerned with the behaviour of this algorithm for nonstationary data and with the effect of the updating rate on tracking accuracy. To overcome the trade-off between accuracy and updating rate intrinsic to the original algorithm, the authors propose two schemes that improve the overall performance when the rate of change of the data is high. In the ‘variable rotational rate’ scheme, the number of Jacobi rotations per update is determined dynamically. In the ‘variable forgetting factor’ approach, the effective width of the observation window adjusts to the nonstationarity of the data. The behaviour and performance of the two schemes are discussed and compared, and applications to direction-of-arrival estimation and speech processing are given.

Model order selection for the singular value decomposition and the discrete Karhunen–Loève transform using a Bayesian approach
- Author(s): J.J. Rajan and P.J.W. Rayner
- Source: IEE Proceedings - Vision, Image and Signal Processing, Volume 144, Issue 2, p. 116 –123
- DOI: 10.1049/ip-vis:19971093
- Type: Article
Bayesian model order selection is considered in relation to the singular value decomposition (SVD) and the discrete Karhunen–Loève transform (DKLT). There are many applications of the SVD and DKLT where it is necessary to discard some of the small singular values that may represent corrupted signal information. Often this task is performed heuristically or in an ad hoc manner. The Bayesian approach to model order selection involves the determination of the evidence or the conditional posterior probability of the model structure given the data; this framework allows the relative probabilities of all possible candidate models to be compared explicitly. Applied to the SVD, the evidence formulation enables the number of nonzero singular values (and hence the effective rank) of a singular or ill-conditioned matrix to be determined analytically. For the DKLT, the evidence allows the determination of the optimal number of basis vectors to choose for the signal reconstruction. In addition, the Bayesian method allows prior information such as physical smoothness constraints to be incorporated directly into the problem specification. Derivations of the evidence formulae are included along with results that illustrate the usefulness of the method.
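The setting addressed in this last abstract can be sketched numerically: a noisy low-rank matrix whose small singular values must be discarded to recover the effective rank. The sketch below uses a simple relative threshold, i.e. exactly the kind of ad hoc choice that the Bayesian evidence framework replaces, not the paper's evidence criterion itself; the matrix sizes, noise level and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with known effective rank: a 50x20 matrix of rank 3,
# perturbed by a small amount of noise.
m, n, true_rank = 50, 20, 3
signal = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
observed = signal + 1e-6 * rng.standard_normal((m, n))

# After the SVD, the first three singular values dominate; the remaining
# seventeen sit at the noise level and represent corrupted information.
s = np.linalg.svd(observed, compute_uv=False)

# Heuristic rank estimate: keep singular values above a relative threshold.
# The Bayesian approach would instead compare the evidence of each
# candidate model order and select the most probable one.
rank_est = int(np.sum(s > 1e-3 * s[0]))
print(rank_est)  # expected effective rank: 3
```

With a well-separated spectrum, as here, the heuristic succeeds; the Bayesian evidence matters precisely when the gap between signal and noise singular values is not obvious.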