%0 Electronic Article %A Chenchao Xiang %A Zhou Yu %A Suguo Zhu %A Jun Yu %A Xiaokang Yang %K visual grounding problem %K multimodal factorised bilinear pooling model %K Flickr-30k Entities dataset %K textual query phrase %K end-to-end approach %K region-based visual features %K region proposal networks %K ReferItGame dataset %K visual features %K phrase-based visual grounding %K real-world visual grounding datasets %K off-the-shelf proposal generation model %K RefCOCO dataset %K multimodal features %K object proposals %X Phrase-based visual grounding aims to localise the object in an image referred to by a textual query phrase. Most existing approaches adopt a two-stage mechanism: first, an off-the-shelf proposal generation model extracts region-based visual features, and then a deep model scores the proposals based on the query phrase and the extracted visual features. In contrast, the authors design an end-to-end approach to the visual grounding problem in this study. They use a region proposal network to generate object proposals and the corresponding visual features simultaneously, and a multi-modal factorised bilinear pooling model to fuse the multi-modal features effectively. Two novel losses are then imposed on top of the multi-modal features to rank and refine the proposals, respectively. To verify the effectiveness of the proposed approach, the authors conduct experiments on three real-world visual grounding datasets, namely Flickr-30k Entities, ReferItGame and RefCOCO. The experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art methods.
%@ 1751-9632 %T End-to-end visual grounding via region proposal networks and bilinear pooling %B IET Computer Vision %D March 2019 %V 13 %N 2 %P 131-138 %I Institution of Engineering and Technology %U https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2018.5104 %G EN