Fifth International Conference on Image Processing and its Applications
- Location: Edinburgh, UK
- Conference date: 4-6 July 1995
- ISBN: 0 85296 642 3
- Conference number: CP410
- The following topics were dealt with: image coding; labelling and classification; medical applications; motion; stereo and 3D images; image analysis; image interpretation; communications; shape description and recognition; image processing applications; architectures; image segmentation; neural networks; industrial inspection; filtering and morphology; image texture and colour; transport, security and remote sensing
174 items in the proceedings; the first 20 are listed below.
Image interpretation: exploiting multiple cues
- Author(s): J. Kittler ; J. Matas ; M. Bober ; L. Nguyen
- Pages: 4
Multiple cues play a crucial role in image interpretation. A vision system that combines shape, colour, motion, prior scene knowledge and object motion behaviour is described. The authors show that the use of interpretation strategies which depend on the image data, temporal context and visual goals significantly reduces the complexity of the image interpretation problem and makes it computationally feasible.
On the use of local and scalable Fourier transforms, fractal dimension information, and texture segmentation
- Author(s): R. Zwiggelaar and C.R. Bull
- Pages: 4
Texture information can be used to segment images. One method to extract the texture information is to use certain links between the local Fourier transform of the image and fractal dimension information. When working with biological processes, the resulting images might not be truly fractal, in which case classical methods might not give very robust estimates. To extract the fractal information from such images more robustly, an integration is performed in the Fourier domain. The described method is applied to a few example images.
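As an illustration of the link the abstract alludes to: for a fractal (fractional-Brownian-like) image the radially averaged power spectrum falls off as P(f) ∝ f^(-β), with fractal dimension D = (8 − β)/2. The sketch below is a generic reconstruction of this idea, not the authors' specific integration method; the synthetic test image and all parameter choices are illustrative assumptions.

```python
import numpy as np

def fractal_dimension_fourier(img):
    """Estimate fractal dimension from the slope of the radially averaged
    Fourier power spectrum: P(f) ~ f**-beta, D = (8 - beta) / 2."""
    img = np.asarray(img, dtype=float)
    n = img.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    y, x = np.indices(spec.shape)
    c = n // 2
    r = np.hypot(x - c, y - c).astype(int)
    # integrate (average) the power over annuli in the Fourier domain
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), spec.ravel()) / np.maximum(counts, 1)
    f = np.arange(1, n // 2)                  # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
    return (8.0 + slope) / 2.0                # slope = -beta

# synthetic fractal-like image: white noise shaped to a f**-beta spectrum
rng = np.random.default_rng(0)
n = 128
noise = np.fft.fft2(rng.standard_normal((n, n)))
y, x = np.indices((n, n))
r = np.fft.ifftshift(np.hypot(x - n // 2, y - n // 2))
r[0, 0] = 1.0                                 # avoid division by zero at DC
img = np.real(np.fft.ifft2(noise * r ** -1.5))  # beta = 3, so D should be 2.5
print(round(fractal_dimension_fourier(img), 2))
```

Averaging over annuli before fitting is the "integration in the Fourier domain" step: it suppresses the chi-squared fluctuations of individual spectral samples, which is what makes the slope estimate usable on images that are only approximately fractal.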
“True” three dimensional image labeling: semantic graph and arc consistency
- Author(s): A. Deruyver and Y. Hode
- Pages: 3
Automatic interpretation of images is of great interest. The goal is to compute a symbolic description of aspects or contents of the image. This can be seen as a general problem of pattern recognition (Ballard and Brown, 1982). One way of solving it involves labeling a set of objects such that specific constraints are satisfied. Labeling uses different kinds of constraint models (Mohr and Masini, 1988; Niemann et al., 1990; Pelillo and Refice, 1994; Rosenfeld et al., 1976). The present authors focus on knowledge representation based on semantic graphs. In this context, they consider control algorithms based on arc consistency. Mohr and Henderson (1986) presented an algorithm for arc consistency and showed that it is optimal in time complexity. This algorithm has been applied with success to a semantic graph for understanding images (Belaid and Belaid, 1992; Benmouffek et al., 1991). However, this method only works if any two distinct regions are labeled differently. When a dataset contains over-segmented objects and no prior knowledge of the over-segmentation is available, this condition is not met. The present paper presents a new arc-consistency algorithm that works on such data, for example "true" three dimensional images. The analysis of this kind of information is encountered in medical imagery; the case of NMR imaging of the brain is discussed in particular.
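The arc-consistency machinery the abstract builds on can be illustrated with the classical AC-3 scheme (Mackworth's algorithm, the baseline the authors extend, not their new variant). The region names, labels and "inside" relation below are invented for the example.

```python
from collections import deque

def ac3(domains, constraints):
    """Classical arc consistency (AC-3): prune label domains until every
    label of node i has a compatible label at each constrained neighbour j.
    domains: {var: set(labels)}; constraints: {(i, j): relation(a, b) -> bool}."""
    queue = deque(constraints)
    while queue:
        i, j = queue.popleft()
        rel = constraints[(i, j)]
        # remove labels of i that have no support in the domain of j
        removed = {a for a in domains[i]
                   if not any(rel(a, b) for b in domains[j])}
        if removed:
            domains[i] -= removed
            # re-examine every arc pointing at i
            queue.extend((k, i) for (k, l) in constraints if l == i)
    return domains

# toy semantic graph: region r1 must lie inside region r2
labels = {"ventricle", "white_matter", "skull"}
inside_ok = {("ventricle", "white_matter"), ("white_matter", "skull")}
doms = {"r1": set(labels), "r2": set(labels)}
cons = {("r1", "r2"): lambda a, b: (a, b) in inside_ok,
        ("r2", "r1"): lambda a, b: (b, a) in inside_ok}
ac3(doms, cons)
print(sorted(doms["r1"]), sorted(doms["r2"]))
```

Note that plain AC-3 happily leaves the same label in several domains; the condition the abstract mentions (distinct regions must take distinct labels) is exactly what fails on over-segmented data and what the paper's new algorithm addresses.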
Stretching the brain to fit: a step towards automatically segmenting, orienting and registering SPECT brain images
- Author(s): J.P. Cubillo ; D.L. Harrow ; M.H. Fisher ; D.N. Taylor
- Pages: 4
A procedure has been developed to automatically orientate, register and fit single photon emission computed tomographic images of the brain. The reconstructed transaxial slices are centred and aligned using a guided, iterative correlation technique, correcting for translation and `yaw'. Then the resulting sagittal views are `pitch' corrected using a regression method. The coronal slices produced subsequently are `roll' corrected, again using the guided, iterative correlation method. Finally, the resulting transaxial slices are delineated using an active shape model and then stretched onto a pseudo standard model.
Biologically inspired image processing
- Author(s): R.M. Hodgson ; R.I. Chaplin ; W.H. Page
- Pages: 4
The rapidly advancing subject of visual science is discussed, including its inherently interdisciplinary nature and the benefits that those involved in digital image processing can gain from an awareness of the subject and its key results. The remainder of the paper is dedicated to a discussion of how image processing can be inspired by the form, system or strategy of the visual systems of man and animals. In discussing form-inspired image processing, the authors concentrate on the log polar mapping that results from the spatial distribution of sensors in the periphery of the human eye. Some original work in this field was reported in Wilson and Hodgson (1992). For examples of effective image processing inspired by natural systems, reference is made to the books and papers of Ian Overington, one of the pioneers of biologically inspired image processing. Finally, biological systems are considered as sources of high-level strategy in image processing. The example taken concerns the application of instructional design theories, originally developed for humans, to neural networks, a second biologically inspired computing paradigm.
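The log polar mapping mentioned above can be sketched as a simple resampling step: rings of samples whose radii grow exponentially with eccentricity, mimicking the coarsening of the retinal mosaic towards the periphery. The ring/wedge counts and the nearest-neighbour sampling below are illustrative choices, not the scheme of Wilson and Hodgson.

```python
import math

def log_polar_sample(img, n_rings=16, n_wedges=32):
    """Resample a square image onto a log-polar grid centred on the image:
    ring radii grow exponentially, so resolution is high near the centre
    (fovea) and coarse in the periphery."""
    h = len(img)
    w = len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = []
    for u in range(n_rings):
        # exponentially spaced radii from r_max**(1/n_rings) up to r_max
        r = r_max ** ((u + 1) / n_rings)
        row = []
        for v in range(n_wedges):
            theta = 2.0 * math.pi * v / n_wedges
            y = int(round(cy + r * math.sin(theta)))
            x = int(round(cx + r * math.cos(theta)))
            row.append(img[y][x])      # nearest-neighbour sampling
        out.append(row)
    return out

# horizontal intensity ramp as a test image
img = [[x for x in range(33)] for _ in range(33)]
lp = log_polar_sample(img)
print(len(lp), len(lp[0]))
```

In the log-polar domain a rotation or scaling of the input becomes a simple shift, which is one of the practical attractions of this biologically derived sensor layout.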
Multiple energy function active contours applied to CT and MR images
- Author(s): D.N. Davis ; K. Natarajan ; E. Claridge
- Pages: 4
Reports on the work on active contour models (or snakes) undertaken as part of the SAMMIE project (AIM project A2032) on the development of advanced segmentation aids for object demarcation in MR and CT images. Active contour models are a special form of deformable model, characterised by their ability to deform dynamically from an initial shape to fit an image; the deformation is controlled through the minimisation of an energy function. An overview is given of the Computed Tomography (CT) and Magnetic Resonance (MR) imaging modalities. This is followed by a section introducing active contour models. The adopted active contour model is then detailed, followed by a breakdown of the work undertaken in developing it for CT and MR images.
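The energy-minimisation idea behind snakes can be sketched with the greedy scheme of Williams and Shah, a common discrete approximation (not necessarily the multiple-energy model adopted in the paper); the weights and the toy edge map below are assumptions.

```python
import math

def greedy_snake_step(pts, grad_mag, alpha=1.0, beta=1.0, gamma=2.0):
    """One pass of a greedy snake (after Williams & Shah): each contour
    point moves to the 8-neighbourhood position minimising
    alpha*continuity + beta*curvature - gamma*edge_strength."""
    n = len(pts)
    new_pts = list(pts)
    d_mean = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n)) / n
    for i in range(n):
        py, px = new_pts[i - 1]          # previous point (already updated)
        ny, nx = pts[(i + 1) % n]        # next point (not yet updated)
        y0, x0 = pts[i]
        best, best_e = pts[i], float("inf")
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = y0 + dy, x0 + dx
                if not (0 <= y < len(grad_mag) and 0 <= x < len(grad_mag[0])):
                    continue
                cont = abs(d_mean - math.dist((y, x), (py, px)))
                curv = (py - 2 * y + ny) ** 2 + (px - 2 * x + nx) ** 2
                e = alpha * cont + beta * curv - gamma * grad_mag[y][x]
                if e < best_e:
                    best, best_e = (y, x), e
        new_pts[i] = best
    return new_pts

# toy edge map: a strong vertical edge along column 5
grad = [[10.0 if x == 5 else 0.0 for x in range(12)] for _ in range(12)]
pts = [(3, 6), (4, 6), (5, 6), (6, 6)]
print(greedy_snake_step(pts, grad))
```

Iterating this step until no point moves gives a local minimum of the total energy; the internal terms keep the contour smooth and evenly spaced while the image term pulls it onto strong edges.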
Computer vision applied to the detection and localisation of acoustic neuromas from head MR images
- Author(s): S. Dickson ; W.P.J. Mackeown ; B.T. Thomas ; P. Goddard
- Pages: 4
Computer vision has been applied to many medical imaging problems. Of the different medical imaging modalities, magnetic resonance (MR) imaging is a powerful and widely used technique for medical diagnosis and is attracting much medical image processing research effort. One use of MR imaging is the detection of benign tumours, called acoustic neuromas, which occur in the auditory canal. At present, these tumours are identified manually from MR images (slices) of the head, a task which is both costly and tedious. Here, a method for automating the detection and localisation of acoustic neuromas from head MR images, using computer vision, is described which comprises three phases: a) a data-driven initial segmentation; b) classification at the pixel level, using a neural network, to identify pixels from acoustic neuromas; c) fusion of the segmentation and the pixel-level classification to identify segmented regions likely to belong to an acoustic neuroma. These three phases are discussed and the results of each phase presented. Current and future work is then outlined. The MR images of the head used are 256 × 256 pixel, grey scale (8 bits per pixel) images.
Cortical sulci model and matching from 3D brain magnetic resonance images
- Author(s): S. Langlois ; N. Royackkers ; H. Fawal ; M. Desvignes ; M. Revenu ; J.M. Travere
- Pages: 4
Positron emission tomography (PET) is one of the most popular techniques for the study of brain functional activity. Several studies show that PET is an in-vivo examination technique able to produce real images of cerebral activity, and it is neither destructive nor invasive. Unfortunately, PET images offer low resolution and signal-to-noise ratio. Moreover, they do not reflect the anatomy of patients. Accurate and reproducible analysis of PET images requires additional information coming from atlases or other images, such as magnetic resonance images (MRI) of the same patient. Hence it is of great interest to superimpose functional PET data and anatomical MRI data. Here, the authors deal with the representation and identification of sulci. A first step is to choose and automatically extract anatomical knowledge from a database, in order to adapt it to any image where the recognition has to be performed. Then, the authors introduce a stochastic method using these features to recognise human cerebral sulci.
Area identification of bone marrow smears using radial-basis function networks and the HSI colour model
- Author(s): I.D. Greaves ; J. Davies ; P.B. Musgrove
- Pages: 4
Reports on the results of a study using neural networks and the HSI (hue, saturation and intensity) colour model for the identification of areas, suitable for further image processing, from bone marrow smears. 25 μm² areas of the image were sparsely sampled and acted as the input to the neural networks. The classification abilities of multi-layer perceptron (MLP) networks and radial basis function (RBF) networks were compared, and it was found that RBF networks proved superior for this task. It was also noted that the saturation plane was the least useful for the differentiation of suitable areas. By using the system and scanning the image on a pixel-by-pixel basis it was possible to produce `masks' which identified areas worthy of further processing.
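The HSI model separates chromatic information (hue, saturation) from achromatic intensity, which is what lets the network treat colour and brightness independently. A standard RGB-to-HSI conversion (one of several common variants, not necessarily the exact formulation used in the paper) looks like:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalised RGB (each in 0..1) to the HSI colour model:
    H in degrees [0, 360), S and I in [0, 1]."""
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0               # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp guards against rounding slightly outside acos's domain
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red: hue 0 degrees, full saturation
```

With this decomposition, a plane of the colour space can be dropped from the network input (as the study found for saturation) without disturbing the other two channels.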
Dynamic boundary location for 2D echocardiographic images in a semi-automated environment
- Author(s): E.E.S. Ruiz and M.C. Fairhurst
- Pages: 4
As part of an interactive software toolkit for the processing of echocardiographic images, a technique for left-ventricular (LV) boundary detection in static cardiac images is described. It is shown how further development using deformable contours can lead to a more flexible system for dynamic analysis of image sequences across the cardiac cycle. The practical implications of different approaches to implementation are discussed and evaluated.
Automatic detection of calcification in mammograms
- Author(s): S.A. Hojjatoleslami and J. Kittler
- Pages: 4
The authors propose a system for the detection of mammographic calcifications. Their method first segments the image into suspected calcification regions and then classifies each detected region as calcification or normal background. The segmentation method exploits new local thresholding and region growing techniques suitable for the detection of small blobs in a textured background. The next step of processing is to decrease the number of falsely detected blobs obtained in the first step using pattern recognition techniques. Seven features of the detected regions are used for classification of each segmented region, and a quadratic classifier was used to classify mammographic calcification using these features. The results of an experimental study using a set of 20 mammographic images show that the proposed system has a good capability to detect calcifications in mammographic images.
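The region-growing stage can be illustrated with a minimal 4-connected grower; the fixed intensity tolerance below is a stand-in for the paper's local-thresholding criterion, which the abstract does not specify.

```python
from collections import deque

def grow_region(img, seed, delta):
    """Grow a region from a seed pixel: a 4-connected neighbour joins the
    region if its intensity is within `delta` of the seed intensity (a
    simplified stand-in for a locally computed threshold)."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - ref) <= delta):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# a bright 2x2 blob (a calcification-like spot) on a darker background
img = [[10, 12, 11, 10],
       [11, 90, 95, 12],
       [10, 92, 94, 11],
       [12, 10, 11, 10]]
blob = grow_region(img, (1, 1), delta=20)
print(sorted(blob))
```

Features of each grown blob (size, contrast, shape, and so on) would then feed the classification stage that rejects false detections.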
Interactive support for evaluation of visual motor integration using geometric figure copying tasks
- Author(s): M.C. Fairhurst and N. Higson
- Pages: 3
The administration and evaluation of figure-copying and related tasks such as the visual motor integration (VMI) assessment can, in principle, be made more efficient and effective by means of a system which can record the subject-executed image and provide on-line manipulation and analysis of the image data. The authors specifically review a novel generalised approach to achieving such an objective and, in particular, describe an interactive software toolkit which can be used to support clinical evaluation of standardised testing procedures and which can potentially improve the accuracy and objectivity of task assessment. It is also possible to extend the basic analysis described here to provide further developments to the system to emphasise in particular the extraction of dynamic image features. This type of analysis can offer the possibility of new forms of assessment not available with the purely static evaluation of subjects' responses currently adopted in practice, and this is the subject of continuing work.
On the possibility of objective identification of human vertebrae through pattern recognition algorithms
- Author(s): J.M. Inesta ; M.A. Sarti ; M. Buendia
- Pages: 4
The task of finding whether vertebral levels of the human spine can be mathematically differentiated is posed, using a morphometric database of quantitative vertebral morphology as a training set. There is no certainty in the medical community about this possibility. The axial projections of a number of vertebrae were digitized and their features quantified by automatic image analysis algorithms. These measurements build a spinal morphometric database. After the application of feature selection procedures, the selected measurements are used as inputs to different pattern recognition algorithms to determine whether vertebral levels can be distinguished. The authors show that this classification can be achieved with a small degree of uncertainty. Artificial neural networks, in particular, have shown their capability to perform well in this difficult pattern recognition task.
Dense depth maps from motion using dynamic data fusion
- Author(s): M. Corbatto ; S. Tinonin ; E. Trucco ; V. Roberto
- Pages: 4
This paper concentrates on the estimation of dense depth maps from sequences of frames acquired by a sensor in controlled motion. It addresses the computation of optic flows at each instant as well as the dynamic fusion of estimates in time via Kalman filtering. Examples of experimental tests are included.
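When the scene is static and the known sensor motion has been compensated, dynamic fusion of depth estimates over time reduces, per pixel, to a scalar Kalman update; the prediction step is then the identity. The numbers below are purely illustrative.

```python
def kalman_fuse(depth, var, z, r):
    """Scalar Kalman update: fuse a new depth measurement z (variance r)
    into the current estimate (depth, var). With a static scene and the
    sensor motion already compensated, the prediction step is the identity."""
    k = var / (var + r)                       # Kalman gain
    return depth + k * (z - depth), (1.0 - k) * var

# fuse three noisy measurements of a surface whose true depth is 2.0
d, v = 2.3, 1.0                               # initial estimate and variance
for z in (1.9, 2.1, 2.05):
    d, v = kalman_fuse(d, v, z, r=0.5)
print(round(d, 3), round(v, 3))
```

The steadily shrinking variance is the point of the fusion: each new optic-flow-derived measurement tightens the per-pixel depth estimate rather than replacing it.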
Spatiotemporal multiresolution associated to MRF modelling for motion detection
- Author(s): A. Caplier and F. Luthon
- Pages: 4
We are concerned with motion detection in image sequences acquired with a static camera. A monoresolution motion detection algorithm based on spatiotemporal Markov random field (MRF) modelling was proposed by Luthon and Caplier (see 4th Eurographics Animation and Simulation Workshop, Barcelona, Spain, September 1993). Our aim is to use a classic multiresolution approach, that is, to apply the motion detection algorithm on a low-pass pyramid with a coarse-to-fine strategy. We propose to implement this framework both in space and time and to build a spatiotemporal pyramid related to the spatiotemporal MRF model. The same low-pass filter is applied in space and time, yielding the same model energy at each level. However, the lower the resolution, the weaker the interactions between the pixels. Since the good behaviour of the algorithm relies on a balance between the influence of each term of the energy, reducing the spatiotemporal interactions between pixels is equivalent to increasing the data-adequacy energy along the pyramid. A description of the motion detection algorithm in the monoresolution framework is given. Then the building of the spatiotemporal pyramid and the application of the MRF model on such a pyramid are described. Finally, some promising results in the case of uniform moving objects and sub-pixel motion, and some aspects of the computational complexity of the algorithm, are presented.
Interframe image sequence coding using overlapped motion estimation and wavelet lattice quantisation
- Author(s): D.G. Sampson ; E.A.B. da Silva ; M. Ghanbari
- Pages: 4
We present a method for low bit rate video coding based on wavelet lattice vector quantisation. It is shown that overlapped block matching (OBM) motion compensation increases the efficiency of the wavelet video codec by eliminating the blocking artefacts in the prediction error image introduced by conventional block matching. The motion compensated prediction error signal is coded using a method which combines the wavelet transform and lattice vector quantisation, referred to as successive approximation wavelet lattice vector quantisation (SAWLVQ). In this technique, the most important (in terms of energy) wavelet coefficients are successively coded by a series of vectors of decreasing magnitudes. The structural similarities among the bands of the same orientation are exploited by incorporating a block zero-tree structure. Simulation results demonstrate that this scheme achieves very good performance for low bit rate video coding. Comparison with the standard RM8 model of the H.261 video codec shows that the OBM-SAWLVQ codec results in improvements in both the peak signal-to-noise ratio performance and the subjective quality of the reconstructed pictures.
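The successive-approximation idea can be sketched without a true lattice: at each stage the residual is quantised against a small codebook at a geometrically shrinking scale, so the coded vectors have decreasing magnitudes. The codebook, initial scale and shrink factor below are toy assumptions, not the SAWLVQ parameters.

```python
def successive_approx_quantise(vec, codebook, scale=1.0, rho=0.5, stages=6):
    """Successive-approximation VQ sketch: at each stage pick the codebook
    vector closest (at the current scale) to the residual, subtract it,
    then shrink the scale by rho. The real SAWLVQ uses lattice codebooks."""
    residual = list(vec)
    indices = []
    for _ in range(stages):
        # best codeword for the current residual at this scale
        best = min(range(len(codebook)), key=lambda i: sum(
            (r - scale * c) ** 2 for r, c in zip(residual, codebook[i])))
        indices.append(best)
        residual = [r - scale * c for r, c in zip(residual, codebook[best])]
        scale *= rho
    return indices, residual

# toy codebook: the 9 sign patterns in 2D (a stand-in for a lattice shell)
codebook = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
idx, res = successive_approx_quantise((0.8, -0.55), codebook)
print(idx, [round(r, 3) for r in res])
```

Because later stages only refine earlier ones, the index stream is naturally embedded: truncating it after any stage still yields a valid, coarser reconstruction, which suits low bit rate operation.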
A spatial-temporal approach for segmentation of moving and static objects in sector scan sonar image sequences
- Author(s): D. Dai ; M.J. Chantler ; D.M. Lane ; N. Williams
- Pages: 4
The authors investigate an approach to separate time-variant and time-invariant regions in a sector-scan sonar image sequence in both the time and frequency domains. By combining time domain and frequency domain methods, the merging region between the moving and static objects in a sequence of sonar images can be segmented. This method is useful not only for image segmentation but also for object tracking.
Robust displacement vector estimation including a statistical error analysis
- Author(s): R. Mester and M. Hotter
- Pages: 4
The determination of displacement vectors is an important task in the context of temporal image sequence analysis, as well as in stereoscopic vision. The relations between images taken from an image sequence or a stereo pair are conceptually represented by a displacement vector field which establishes a pairwise correspondence between points in the two images concerned. Most approaches to the determination of displacement vector fields include a first step in which individual measurements of local displacements are performed. Often these initial measurements are subsequently combined in a way that exploits a priori knowledge, e.g. using spatial smoothness constraints on the resulting vector field. However, the very first step, i.e. the determination of individual displacement vectors, is a process whose results are in general affected by errors. The extent and specific type of error that is to be expected varies greatly between the different displacement vectors, as the reliability depends largely on the local characteristics of the image signal. If reliability or accuracy measures can be assigned to these estimates, this is advantageous compared with the approach of detecting and suppressing erroneous measurements (outliers) in subsequent processing steps. The paper is oriented towards the joint estimation of individual displacement vectors and their corresponding reliability measures. By extending the results of Singh and Allen (see CVGIP Image Understanding, vol.56, no.2, p.152-177, 1992), these estimation theoretic relations can be fully derived from a statistical image model.
Centre-frequency adaptive IIR temporal filters for phase-based image velocity estimation
- Author(s): C.W.G. Clifford ; K. Langley ; D.J. Fleet
- Pages: 4
This paper proposes an application of adaptive IIR filters to the problem of image velocity estimation. A phase-based motion algorithm is employed to measure velocity locally within an image sequence from the outputs of a set of complex space-time separable band-pass filters. The filters' temporal tunings are adaptively modified on the basis of measured velocity to optimise the representation of image motion. In computer simulations the scheme is shown to provide accurate estimates of velocity even at high levels of image noise.
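Phase-based velocity estimation rests on the relation v = -φ_t / φ_x: the temporal phase change of a band-pass filter response divided by its spatial phase gradient. The 1D sketch below uses a fixed complex Gabor filter rather than the paper's adaptive IIR temporal filters; the filter and signal parameters are illustrative.

```python
import numpy as np

def phase_velocity(f0, f1, freq=0.1, x=32):
    """1D sketch of phase-based velocity: convolve two frames with a complex
    Gabor kernel, then v = -(temporal phase change) / (spatial phase
    gradient), evaluated at position x."""
    k = np.arange(-8, 9)
    gabor = np.exp(-(k / 4.0) ** 2) * np.exp(1j * 2 * np.pi * freq * k)
    r0 = np.convolve(f0, gabor, mode="same")
    r1 = np.convolve(f1, gabor, mode="same")
    dphi_t = np.angle(r1[x] * np.conj(r0[x]))                 # phase change over time
    dphi_x = np.angle(r0[x + 1] * np.conj(r0[x - 1])) / 2.0   # spatial phase gradient
    return -dphi_t / dphi_x

# a sinusoidal pattern drifting 0.5 pixels per frame
xs = np.arange(64)
f0 = np.sin(2 * np.pi * 0.1 * xs)
f1 = np.sin(2 * np.pi * 0.1 * (xs - 0.5))
print(round(float(phase_velocity(f0, f1)), 2))
```

Because phase is largely insensitive to smooth contrast changes, estimates of this kind stay accurate under noise, which is the property the paper's adaptive temporal tuning exploits further.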
A stereo disparity algorithm for 3D model construction
- Author(s): D.V. Papadimitriou and T.J. Dennis
- Pages: 4
Stereo vision is an important passive method for extracting the 3D structure of a scene. It involves the analysis of at least two digital images with overlapping fields of view. We describe a hierarchical algorithm for the computation of a dense disparity field from a binocular view. Since our goal is the measurement of 3D scene structure with application to model-based image coding where a continuous surface description is required, we concentrate on area- rather than feature-based matching. The main features of the algorithm are: the synthesis of a variable disparity search range in a blockwise fashion; the active use of an occlusion detector; and the inclusion of a well-defined interpolation scheme that preserves discontinuities and avoids blurring near occlusion boundaries. These lead to a robust procedure for measuring continuous disparity fields.
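Area-based matching of the kind described reduces, at a single pixel, to sliding a window along the same row of the other view and minimising a dissimilarity score such as the sum of squared differences (SSD). The sketch below omits the paper's hierarchical variable search range, occlusion detector and discontinuity-preserving interpolation; window size and search range are illustrative.

```python
import numpy as np

def block_disparity(left, right, y, x, half=2, max_d=8):
    """Area-based stereo matching sketch: compare a (2*half+1)^2 window
    around (y, x) in the left image against windows shifted left by d in
    the right image, and return the disparity d with the smallest SSD."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_ssd = 0, np.inf
    for d in range(max_d + 1):
        if x - d - half < 0:                  # window would leave the image
            break
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        ssd = float(np.sum((patch - cand) ** 2))
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

# synthetic rectified pair: the right view is the left shifted by 3 pixels
rng = np.random.default_rng(1)
left = rng.random((20, 20))
right = np.roll(left, -3, axis=1)             # true disparity is 3
print(block_disparity(left, right, y=10, x=12))
```

Running this at every pixel yields the dense disparity field the paper starts from; its contributions then lie in how the search range, occlusions and interpolation are handled on top of this basic matcher.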