CNN-based UGS method using Cartesian-to-polar coordinate transformation

Electronics Letters

The main concern of user-guided segmentation (UGS) is to achieve high segmentation accuracy with minimal user interaction. A novel convolutional neural network (CNN)-based UGS method is proposed, which employs a single click as the user interaction. In the proposed method, the input image in the Cartesian coordinate system is first converted into a polar-transformed image with the user-guided point (UGP) as the origin of the polar coordinate system. The transformed image not only effectively delivers the UGP to the CNN, but also enables a single-scale convolution kernel to act as a multi-scale kernel, whose receptive field in the Cartesian coordinate system varies with distance from the UGP without any extra parameters. In addition, a feature selection module (FSM) is introduced to additionally extract radial and angular features from the polar-transformed image. Experimental results demonstrate that the proposed CNN using the polar-transformed image improves segmentation accuracy (mean intersection over union) by 3.69% on the PASCAL VOC 2012 dataset compared with the CNN using the Cartesian-coordinate image. The FSM yields an additional improvement of 1.32%. Moreover, the proposed method outperforms conventional non-CNN-based UGS methods by 12.61% on average.
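The core idea of the Cartesian-to-polar conversion can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is an assumption-based illustration using numpy with nearest-neighbour sampling, a hypothetical `cartesian_to_polar` helper, and an arbitrary output resolution. It shows why a fixed square kernel on the polar image behaves like a multi-scale kernel in Cartesian space: rows near the top of the output sample a small neighbourhood around the click, while rows near the bottom sample a much larger one.

```python
import numpy as np

def cartesian_to_polar(image, ugp, out_shape=(64, 64)):
    """Resample `image` onto a polar grid centred on the user-guided
    point `ugp` = (row, col). Output rows index radius and columns
    index angle, so a single square convolution kernel applied to the
    result covers a small Cartesian region near the click and a
    progressively larger one far from it."""
    h, w = image.shape[:2]
    n_r, n_a = out_shape
    cy, cx = ugp
    # Maximum radius: distance from the click to the farthest corner,
    # so the whole image stays inside the polar grid.
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]], float)
    r_max = np.sqrt(((corners - [cy, cx]) ** 2).sum(axis=1)).max()
    radii = np.linspace(0.0, r_max, n_r)
    angles = np.linspace(0.0, 2.0 * np.pi, n_a, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Map each (radius, angle) sample back to Cartesian pixel indices
    # (nearest-neighbour, clipped to the image bounds).
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]

# Example: the first output row (radius 0) repeats the clicked pixel
# for every angle, which is how the UGP is delivered to the CNN.
img = np.arange(100, dtype=float).reshape(10, 10)
polar = cartesian_to_polar(img, ugp=(4, 5), out_shape=(8, 16))
```
A practical pipeline would use interpolated sampling (e.g. bilinear) and feed `polar` to the network in place of the original crop; the inverse mapping is applied to bring the predicted mask back to Cartesian coordinates.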
