IET Computer Vision
Volume 8, Issue 3, June 2014
Local feature fitting active contour for segmenting vessels in angiograms
- Author(s): Maryam Taghizadeh Dehkordi ; Ali Mohamad Doost Hoseini ; Saeed Sadri ; Hamid Soltanianzadeh
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 161–170
- DOI: 10.1049/iet-cvi.2013.0083
- Type: Article
An active contour model for vascular segmentation is proposed, based on a new local feature-fitting energy function. A vesselness filter is applied to the image in a directional, Hessian-based framework; the filter output, used as a feature, expresses how strongly each pixel corresponds to a vessel structure. By using intensity information obtained from local regions, the proposed model is able to handle intensity inhomogeneity in images. In addition, by introducing this feature into the fitting process, the model achieves greater accuracy than existing models. Experimental results on synthetic images and coronary X-ray angiograms verify the favourable performance of the proposed model.
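As a rough illustration of a Hessian-based vesselness filter of the kind the abstract describes (the authors' exact formulation is not given; this follows the classic Frangi-style eigenvalue ratios, and `beta` and `c` are assumed tuning parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, sigma=2.0, beta=0.5, c=0.5):
    """Frangi-style vesselness for a 2-D image (bright vessels on dark background)."""
    # Second-order Gaussian derivatives give the Hessian at scale sigma.
    Hxx = gaussian_filter(image, sigma, order=(0, 2))
    Hyy = gaussian_filter(image, sigma, order=(2, 0))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 symmetric Hessian.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    # Order so that |lam1| <= |lam2|.
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    Rb = np.abs(lam1) / (np.abs(lam2) + 1e-10)  # line-vs-blob ratio
    S = np.sqrt(lam1 ** 2 + lam2 ** 2)          # second-order "structureness"
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    v[lam2 > 0] = 0.0  # a bright tube has a strongly negative lam2
    return v
```

On a dark image containing a bright line, the response is high on the line and near zero in the flat background.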
Alternative formulations to compute the binary shape Euler number
- Author(s): Juan Humberto Sossa Azuela ; Elsa Rubio Espino ; Raúl Santiago ; Alejandro López ; Alejandro Peña Ayala ; Erik V. Cuevas Jimenez
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 171–181
- DOI: 10.1049/iet-cvi.2013.0076
- Type: Article
The authors propose two equations, based on pixel geometry and connectivity properties, that can be used to efficiently compute the Euler number of a binary digital image with either thick or thin boundaries. While computing this feature, the authors' technique extracts the underlying topological information provided by the shape pixels of the given image. The correctness of computing the Euler number using the new equations is also established theoretically. The performance of the proposed method is compared against other available alternatives. Experimental results on a large image database demonstrate that the authors' technique for computing the Euler number significantly outperforms earlier approaches in terms of the number of basic arithmetic operations needed per pixel. Both equations are specialised for 4-connectivity cases only.
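For comparison, the classical bit-quad computation of the 4-connectivity Euler number (Gray's formula, E = (Q1 − Q3 + 2·QD)/4, not the authors' new equations) can be sketched as:

```python
import numpy as np

def euler_number_4(img):
    """Euler number of a binary image under 4-connectivity, via bit-quad counting."""
    img = np.pad(np.asarray(img, dtype=np.uint8), 1)
    # Encode every 2x2 neighbourhood as a 4-bit pattern.
    q = img[:-1, :-1] * 8 + img[:-1, 1:] * 4 + img[1:, :-1] * 2 + img[1:, 1:]
    counts = np.bincount(q.ravel(), minlength=16)
    q1 = counts[1] + counts[2] + counts[4] + counts[8]     # quads with one set pixel
    q3 = counts[7] + counts[11] + counts[13] + counts[14]  # quads with three set pixels
    qd = counts[6] + counts[9]                             # diagonal quads
    return (q1 - q3 + 2 * qd) // 4
```

A filled square yields 1 (one component, no holes); punching a hole in it yields 0.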
Probabilistic shape-based segmentation method using level sets
- Author(s): Melih S. Aslan ; Ahmed Shalaby ; Hossam Abdelmunim ; Aly A. Farag
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 182–194
- DOI: 10.1049/iet-cvi.2012.0226
- Type: Article
In this study, a novel probabilistic, geometric and dynamic shape-based level-set method is proposed. The shape prior is coupled with intensity information to enhance the segmentation results. The two-dimensional principal component analysis (2DPCA) method is applied to the training shapes to represent the shape variation with a sufficient number of shape projections in the training step. The shape model is constructed using the implicit representation of the projected shapes. A new energy functional is proposed (i) to embed the shape model into the image domain and (ii) to estimate the shape coefficients. The proposed method is validated on synthetic and clinical images with various challenges such as noise, occlusion and missing information. The authors compare their method with several related works. Experiments show that the proposed segmentation method is more accurate and robust than the alternatives under these challenges.
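The 2DPCA projection step mentioned above can be sketched as follows (an illustrative implementation, not the authors' code; the reconstruction identity holds when all axes are kept):

```python
import numpy as np

def two_d_pca(shapes, k):
    """2DPCA on a stack of 2-D shapes: project each image row-wise onto k axes."""
    A = np.asarray(shapes, dtype=float)           # (N, h, w)
    mean = A.mean(axis=0)
    centered = A - mean
    # Image covariance matrix G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w).
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    # Top-k eigenvectors of the symmetric G are the projection axes.
    vals, vecs = np.linalg.eigh(G)                # ascending eigenvalues
    X = vecs[:, ::-1][:, :k]                      # (w, k), descending order
    return A @ X, X, mean                         # projections have shape (N, h, k)
```

With k equal to the image width, X is orthonormal and each shape is recovered exactly as Y·Xᵀ.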
* Note: Colour figures are available in the online version of this paper.
Tracking with scattering descriptor
- Author(s): Xiaolin Tian ; Licheng Jiao ; Xiaowei Shang
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 195–206
- DOI: 10.1049/iet-cvi.2013.0124
- Type: Article
This study proposes a new method for tracking moving objects based on the undecimated scattering transform (UST). The UST removes the down-sampling operation from the traditional scattering transform to produce a complete representation. Based on the UST, the structural information of the object can be captured, yielding a highly discriminative representation that helps the tracker distinguish the object from the background. The update parameter of the tracking model is adaptively adjusted by the correlation coefficient. Occlusion identification is achieved using a new cascaded model, which can correctly detect occlusion and avoid drift. Experimental results demonstrate that the proposed method is able to track objects accurately and reliably in realistic videos where appearance and motion change drastically over time.
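A hedged sketch of what an undecimated, scattering-like transform might look like, substituting difference-of-Gaussians band-pass filters for the paper's wavelets (the filter choice and scales are assumptions); the key point is that no layer down-samples, so every feature map keeps the full image resolution:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def undecimated_scattering(image, sigmas=(1, 2, 4)):
    """First-order scattering-like features without down-sampling:
    band-pass (DoG), modulus nonlinearity, then low-pass averaging."""
    image = np.asarray(image, dtype=float)
    feats = [gaussian_filter(image, sigmas[-1])]   # zeroth-order (low-pass) layer
    for s in sigmas:
        band = gaussian_filter(image, s) - gaussian_filter(image, 2 * s)
        feats.append(gaussian_filter(np.abs(band), sigmas[-1]))  # first-order layer
    return np.stack(feats)                         # (len(sigmas) + 1, H, W)
```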
Multi-scale contrast-based saliency enhancement for salient object detection
- Author(s): Wenhui Zhou ; Teng Song ; Lili Lin ; Andrew Lumsdaine
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 207–215
- DOI: 10.1049/iet-cvi.2013.0118
- Type: Article
To achieve more complete and more uniformly highlighted salient object regions, this study presents a computational saliency-enhancement model that incorporates multi-scale and logarithmic-response properties into the local and global contrasts. A distinct feature of the authors' model is a novel saliency-enhancement operator, which effectively enhances the saliency of object interior regions while reducing the blur on object boundaries caused by multiple scales. The model is general and can make flexible trade-offs between precision and recall. Detailed comparisons with 12 state-of-the-art methods show that it obtains satisfactory salient object regions that are closer to human-labelled results, and that it provides superior precision–recall, F-measure and mean absolute error scores.
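An illustrative centre-surround version of multi-scale contrast with a logarithmic response (the authors' operator is more elaborate; the scales and the `log1p` compression here are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_contrast_saliency(gray, scales=(3, 7, 15)):
    """Simple multi-scale centre-surround contrast with a logarithmic response."""
    gray = np.asarray(gray, dtype=float)
    sal = np.zeros_like(gray)
    for s in scales:
        surround = uniform_filter(gray, size=s)   # local mean as surround estimate
        contrast = (gray - surround) ** 2         # centre-surround contrast
        sal += np.log1p(contrast)                 # compress with the log response
    return sal / len(scales)
```

A small bright blob on a flat background scores higher than the background at every scale larger than the blob.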
Internal-to-internal transition method for consecutive hierarchical template matching
- Author(s): Ho Gi Jung
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 216–223
- DOI: 10.1049/iet-cvi.2013.0125
- Type: Article
This study proposes a method that reduces the tree-traversal cost of consecutive hierarchical template matching by first investigating the most probable node instead of the root node. In particular, the study offers a novel viewpoint: consecutive hierarchical template matching can be regarded as the transition of a hierarchical finite-state machine. It then proposes a method that exploits transitions between the internal nodes of the hierarchical template tree. The proposed method is verified by applying it to pedestrian silhouette detection.
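The internal-to-internal transition idea can be sketched as follows (the tree, scoring function and threshold are hypothetical placeholders, not the paper's pedestrian templates): on each new frame the search starts from the previously matched node's neighbourhood rather than from the root, falling back to a full root-down descent only when nothing there scores well.

```python
class Node:
    def __init__(self, template, children=()):
        self.template = template
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def best_leaf(start, score):
    """Greedy descent: from `start`, repeatedly move to the best-scoring child."""
    node = start
    while node.children:
        node = max(node.children, key=lambda c: score(c.template))
    return node

def consecutive_match(root, prev, score, threshold=0.5):
    """Try the previous node's neighbourhood (itself, children, parent) first;
    only fall back to a full root-down search if no candidate scores well."""
    if prev is not None:
        candidates = [prev] + prev.children + ([prev.parent] if prev.parent else [])
        start = max(candidates, key=lambda n: score(n.template))
        if score(start.template) >= threshold:
            return best_leaf(start, score)
    return best_leaf(root, score)
```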
Face recognition based on the fusion of global and local HOG features of face images
- Author(s): Hengliang Tan ; Bing Yang ; Zhengming Ma
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 224–234
- DOI: 10.1049/iet-cvi.2012.0302
- Type: Article
The histogram of oriented gradients (HOG) descriptor was initially applied to human detection, where it achieved great success. In recent years, the HOG descriptor has also been applied to face recognition. However, compared with other sophisticated feature descriptors such as LBP and Gabor, there is still considerable research space in the application of HOG features to face recognition. This work makes two main contributions. First, the main parameters characterising the HOG descriptor for face recognition are statistically analysed, which does not appear to have been discussed clearly in the literature so far. Secondly, a novel framework for face recognition based on the fusion of global and local HOG features is proposed. Face images are first illumination-normalised by a difference-of-Gaussians (DoG) filter. Global and local HOG features are then extracted and reduced by PCA + LDA or LDA within different frameworks. Finally, at the decision level, global and local classifiers are built with the nearest-neighbour classifier and fused by a weighted-sum rule. Experimental results on two large-scale face databases, FERET and CAS-PEAL-R1, show that, in comparison with 12 state-of-the-art face-recognition approaches, the proposed method achieves the highest average recognition rate.
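Two steps of the pipeline, DoG illumination normalisation and decision-level weighted-sum fusion, can be sketched as follows (the σ values, the max-normalisation of distances and the weight are assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_normalise(face, sigma_inner=1.0, sigma_outer=2.0):
    """Difference-of-Gaussians filtering: suppresses the low-frequency
    illumination component, then rescales to zero mean and unit variance."""
    face = np.asarray(face, dtype=float)
    dog = gaussian_filter(face, sigma_inner) - gaussian_filter(face, sigma_outer)
    return (dog - dog.mean()) / (dog.std() + 1e-8)

def fuse_scores(global_dists, local_dists, w=0.5):
    """Decision-level fusion: weighted sum of normalised nearest-neighbour
    distances per gallery identity; the smallest fused distance wins."""
    g = global_dists / (np.max(global_dists) + 1e-8)
    l = local_dists / (np.max(local_dists) + 1e-8)
    return int(np.argmin(w * g + (1 - w) * l))
```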
Optimal colour-based mean shift algorithm for tracking objects
- Author(s): Xiaowei An ; Jaedo Kim ; Youngjoon Han
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 235–244
- DOI: 10.1049/iet-cvi.2013.0004
- Type: Article
The mean-shift method is widely used to locate a target object quickly in sequential images. The mean-shift algorithm takes advantage of a colour distribution with uniform quantisation. However, this quantisation ignores the close relationships within the colour statistics, and the uniform distribution results in a colour histogram with many empty bins, which introduces additional computational cost in the tracking procedure. To reduce the number of these redundant empty bins, the authors present a new optimal colour-based mean-shift algorithm for tracking objects. In the proposed method, the optimal colours are extracted by histogram agglomeration, which clusters three-dimensional (3D) colour-histogram bins using the frequency ratios of 3D colour values. After the optimal colours are obtained from an RGB colour histogram, the target image is represented by the indices of the optimal colours. The mean-shift algorithm then creates a confidence map in a candidate image based on the optimal colour histogram of the target image, and finds the peak of the confidence map near the previous position of the object area. Comparative experiments with the conventional mean-shift method show that the authors' method decreases processing time and improves tracking accuracy.
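A minimal back-projection-plus-mean-shift loop, assuming the frame's colours have already been quantised to (optimal-)colour indices (the rectangular window and its size are assumptions; the paper's agglomerative quantisation step is omitted):

```python
import numpy as np

def colour_confidence_map(frame_idx, target_hist):
    """Back-project a colour histogram: each pixel's confidence is the target
    histogram weight of that pixel's colour index."""
    return target_hist[frame_idx]

def mean_shift(conf, start, win=8, iters=20):
    """Shift a window to the weighted centroid of confidence until convergence."""
    y, x = start
    h, w = conf.shape
    for _ in range(iters):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = conf[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (y, x):
            break
        y, x = ny, nx
    return y, x
```

Starting near a uniformly confident blob, the window converges onto the blob's centroid.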
Matching corners using the informative arc
- Author(s): Nadia Kanwal ; Erkan Bostanci ; Adrian F. Clark
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 245–253
- DOI: 10.1049/iet-cvi.2013.0104
- Type: Article
Corners are important features in images because they typically delimit the boundaries of regions or objects. For real-time applications, it is essential that corners are detected and matched reliably and rapidly. This study presents two related descriptors which are compatible with standard corner detectors and able to be computed and matched at video rate: one encodes the entire region within a corner, whereas the other describes only the region within an object. The advantage of encoding only the region within an object is demonstrated. The noise stability of the descriptors is assessed and compared with that of the popular binary robust independent elementary feature (BRIEF) descriptor, and the matching performances of the descriptors are compared on video sequences from hand-held cameras and the PETS2012 database. A statistical analysis shows that performance is indistinguishable from BRIEF.
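For reference, a BRIEF-style binary descriptor of the kind the paper compares against can be sketched as follows (the patch size, number of bits and uniform test-pair sampling are assumptions; descriptors are matched by Hamming distance):

```python
import numpy as np

def brief_descriptor(image, keypoint, n_bits=128, patch=24, seed=7):
    """BRIEF-style descriptor: n_bits pairwise intensity comparisons at
    pseudo-random offsets inside a patch centred on the keypoint."""
    rng = np.random.default_rng(seed)  # fixed seed -> identical test pattern everywhere
    offsets = rng.integers(-patch // 2, patch // 2 + 1, size=(n_bits, 4))
    y, x = keypoint
    a = image[y + offsets[:, 0], x + offsets[:, 1]]
    b = image[y + offsets[:, 2], x + offsets[:, 3]]
    return (a < b).astype(np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```

The same keypoint in the same image always matches itself at distance 0, while inverting the image flips most comparison bits.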
Co-segmentation of multiple similar images using saliency detection and region merging
- Author(s): Chongbo Zhou and Chuancai Liu
- Source: IET Computer Vision, Volume 8, Issue 3, pp. 254–261
- DOI: 10.1049/iet-cvi.2012.0266
- Type: Article
The aim of co-segmentation is to simultaneously segment multiple images depicting an identical or similar object. In this study, a co-segmentation method using saliency detection and region merging is proposed. Saliency-detection results obtained with different methods in different colour spaces are combined to produce seed regions for each image in the group. The initial seed regions of all the images are refined by eliminating dissimilar ones, so that the seed regions of each image are as accurate as possible. Region merging is then performed on each image individually, which allows the method to be applied to large image groups. A maximal-similarity measure and a nearest-similarity measure are defined as merging rules: the merging strategy merges two regions under the maximal-similarity rule, and labels two regions as the same class, without merging them, under the nearest-similarity rule. The proposed method has been compared with several state-of-the-art methods on three datasets, and the experimental results show its effectiveness.
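The maximal-similarity merging rule can be illustrated with a greedy histogram-merging sketch (the Bhattacharyya similarity and the stopping threshold are assumptions; the paper additionally applies a nearest-similarity labelling rule and adjacency constraints that are omitted here):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two normalised colour histograms (1 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def merge_regions(hists, counts, threshold=0.9):
    """Greedy maximal-similarity merging: repeatedly merge the most similar
    pair of regions until no pair exceeds `threshold`."""
    hists = [np.asarray(h, dtype=float) for h in hists]
    counts = list(counts)
    while len(hists) > 1:
        best, bi, bj = -1.0, -1, -1
        for i in range(len(hists)):
            for j in range(i + 1, len(hists)):
                s = bhattacharyya(hists[i], hists[j])
                if s > best:
                    best, bi, bj = s, i, j
        if best < threshold:
            break
        # Merged histogram is the pixel-count-weighted average of the pair.
        n = counts[bi] + counts[bj]
        merged = (counts[bi] * hists[bi] + counts[bj] * hists[bj]) / n
        hists = [h for k, h in enumerate(hists) if k not in (bi, bj)] + [merged]
        counts = [c for k, c in enumerate(counts) if k not in (bi, bj)] + [n]
    return hists
```

Two near-identical regions collapse into one, while a dissimilar region survives as its own class.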