Vision-based UAV pose estimation
As the use of unmanned aerial vehicles (UAVs) has increased, their autonomous flight has become an academic field of great interest. Until recently, most studies based their developments on an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) as the main sensors for calculating and estimating the UAV's pose. These sensors, however, have several limitations that can degrade navigation and, consequently, the fully autonomous operation of the system. Images captured during flight, combined with computer vision algorithms and photogrammetry concepts, have become a core source of data for estimating the UAV's pose in real time, enabling alternative or redundant navigation systems. Several algorithms have been proposed in the scientific community, each performing best in specific situations and using different kinds of imaging sensors (active and passive). This chapter describes the main vision-based pose estimation algorithms and discusses where each applies best and where each fails. Recent results illustrate new strategies aimed at overcoming the remaining challenges of this research field.
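To make the idea of estimating a camera pose from imagery concrete, the sketch below recovers a camera's rotation and translation from known 3D-to-2D point correspondences using the classic Direct Linear Transform (DLT), one of the basic photogrammetric building blocks the chapter surveys. This is a minimal, noise-free illustration, not any specific algorithm from the chapter; the intrinsic matrix, pose, and point values are invented for the example.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3-D world points into the image with a pinhole camera model."""
    cam = (R @ pts3d.T).T + t            # world frame -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]        # perspective division

def dlt_pose(K, pts3d, pts2d):
    """Recover (R, t) from >= 6 non-coplanar 3D-2D correspondences
    with the Direct Linear Transform (noise-free sketch)."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (u, v)) in enumerate(zip(pts3d, pts2d)):
        Xh = np.array([X, Y, Z, 1.0])    # homogeneous world point
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -v * Xh
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)             # projection matrix, up to scale
    M = np.linalg.inv(K) @ P             # strip the known intrinsics
    s = np.cbrt(np.linalg.det(M[:, :3]))
    M /= s                               # fix scale and sign so det(R) = +1
    U, _, Vt = np.linalg.svd(M[:, :3])
    R = U @ Vt                           # project onto the nearest rotation
    return R, M[:, 3]

# Example with synthetic (assumed) values: a camera 2 m from a unit cube.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1                                 # small rotation about the y-axis
R_true = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(th), 0.0, np.cos(th)]])
t_true = np.array([0.1, -0.05, 2.0])
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
pts2d = project(K, R_true, t_true, pts3d)
R_est, t_est = dlt_pose(K, pts3d, pts2d)
```

With noise-free correspondences the recovered pose matches the ground truth almost exactly; real UAV pipelines feed noisy feature matches into robust variants of this idea (e.g., RANSAC-wrapped PnP solvers) instead.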