Visual navigation method for indoor mobile robot based on extended BoW model
This article proposes a new navigation method for indoor mobile robots based on an extended bag-of-words (BoW) model for general object recognition. Feature vectors in this model are described with a GPU-accelerated scale-invariant feature transform (SIFT) detector. First, to add redundant image information, statistics of the spatial relationships among all the feature points in an image, i.e. their relative distances and angles, are used to extend the feature vectors of the original BoW model. A support vector machine (SVM) classifier is then used to classify objects. In addition, to enable convenient navigation in unknown and dynamic indoor environments, a human–robot interaction method based on a hand-drawn semantic map is adopted. The experimental results show that this navigation method for indoor mobile robots is robust and effective.
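The core idea of the extended BoW model, appending statistics of pairwise keypoint distances and angles to the visual-word histogram, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`spatial_stats`, `extended_bow`) and the choice of mean/standard deviation as the summary statistics are assumptions for the sketch, and the resulting vector could be fed to any SVM implementation.

```python
# Hedged sketch (not the paper's implementation): extend a BoW histogram
# with statistics of pairwise spatial relations between feature points.
import numpy as np

def spatial_stats(points):
    """Mean and std of pairwise distances and angles between keypoints.

    `points` is a list of (x, y) keypoint coordinates, e.g. from a SIFT
    detector. Assumed statistics: mean/std of distances and of angles.
    """
    pts = np.asarray(points, dtype=float)
    dists, angles = [], []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dx, dy = pts[j] - pts[i]
            dists.append(np.hypot(dx, dy))      # relative distance
            angles.append(np.arctan2(dy, dx))   # relative angle
    d, a = np.array(dists), np.array(angles)
    return np.array([d.mean(), d.std(), a.mean(), a.std()])

def extended_bow(histogram, keypoints):
    """Concatenate a normalized BoW histogram with spatial statistics."""
    h = np.asarray(histogram, dtype=float)
    h = h / max(h.sum(), 1.0)                   # L1-normalize the histogram
    return np.concatenate([h, spatial_stats(keypoints)])

# Toy usage: a 3-word histogram and three keypoints on one line.
vec = extended_bow([2, 1, 1], [(0, 0), (3, 4), (6, 8)])
# vec has the 3 histogram bins followed by 4 spatial statistics.
```

The extended vectors would then be used as training samples for an SVM classifier (e.g. `sklearn.svm.SVC`), one vector per image.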