Indoor Localization by Querying Omnidirectional Visual Maps using Perspective Images

Indoor localization based on distinguishable scene landmarks is closely related to image retrieval, object recognition, and location recognition. A commonly adopted scheme extracts local image features, quantizes their descriptors into visual words, and applies methods adapted from text search engines to accomplish visual recognition. Many authors leverage these techniques, originally designed for perspective images, to perform image-based localization with omnidirectional images. Typically, the images are described by local features (such as SIFT or SURF) or global features (such as GIST) for topological and metric localization within a hierarchical recognition framework.
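
To make the scheme concrete, here is a minimal sketch of such a bag-of-visual-words retrieval pipeline in Python, assuming OpenCV's SIFT and a visual vocabulary learned offline with k-means. The function names and the cosine-score ranking are illustrative choices, not the exact implementation used in the work discussed here.

    import cv2
    import numpy as np

    def bow_histogram(image_path, vocabulary):
        # Quantize local SIFT descriptors to visual words and build a
        # normalized word-frequency histogram (the "document" of the image).
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, desc = cv2.SIFT_create().detectAndCompute(img, None)
        # Assign each descriptor to its nearest visual word.
        dists = np.linalg.norm(desc[:, None, :] - vocabulary[None, :, :], axis=2)
        hist = np.bincount(dists.argmin(axis=1), minlength=len(vocabulary)).astype(float)
        return hist / hist.sum()

    def retrieve(query_hist, database_hists):
        # Rank database images by cosine similarity of their histograms,
        # much as a text search engine ranks documents by term frequency.
        db = np.asarray(database_hists)
        scores = db @ query_hist / (np.linalg.norm(db, axis=1) * np.linalg.norm(query_hist) + 1e-12)
        return np.argsort(scores)[::-1]  # indices of best matches first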

In this work, the goal is to perform appearance-only localization when the query and database images are acquired with different imaging systems (hybrid imaging systems). Taking advantage of omnidirectional images for complete coverage of the environment, we want to retrieve the location of a query image taken with a conventional camera, e.g. a mobile robot equipped with a perspective camera, or a cell-phone image taken by a person who wants to know his or her location. While omnidirectional images speed up the acquisition of thorough visual maps and help tackle the scalability problem of the image database, they also introduce non-linear image distortion that increases the appearance difference between the images, hindering matching and retrieval.


This work was done in close collaboration with Vitor Pedro. Our paper has been accepted for publication at ICRA 2012! Soon I will release the database and MATLAB/C code used in our experiments!

Keypoint Detection and Description in Images with Radial Distortion

In this work, a new method to perform image blurring using an adaptive Gaussian filter is proposed and applied to feature detection in a multi-scale framework using the SIFT detector. We have experimentally shown that the proposed method enables more effective feature detection under radial distortion (RD).
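
As a rough illustration of the idea, the sketch below applies a spatially varying Gaussian blur whose kernel width follows the local radial magnification of the division model for RD. This is only a plausible reading under stated assumptions (a centered distortion, a known division-model parameter xi, and a simple band-blending approximation); it is not the paper's exact filter.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def adaptive_gaussian_blur(img, xi, sigma_u, n_bands=8):
        # Spatially varying Gaussian blur, approximated by blending a small
        # bank of uniformly blurred images, one per radial band.
        # xi: division-model parameter (assumed known from calibration; its
        # sign convention depends on barrel vs. pincushion distortion).
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r2 = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2  # assume centered distortion
        # Local radial magnification of r_u = r_d / (1 + xi * r_d^2):
        # dr_u/dr_d = (1 - xi * r_d^2) / (1 + xi * r_d^2)^2
        mag = np.abs((1 - xi * r2) / (1 + xi * r2) ** 2)
        sigma_map = sigma_u / np.clip(mag, 0.2, 5.0)    # per-pixel kernel width
        sigmas = np.linspace(sigma_map.min(), sigma_map.max(), n_bands)
        bank = np.stack([gaussian_filter(img.astype(float), s) for s in sigmas])
        idx = np.abs(sigma_map[None] - sigmas[:, None, None]).argmin(axis=0)
        return np.take_along_axis(bank, idx[None], axis=0)[0]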


We have also proposed to compute the SIFT descriptor using an implicit gradient correction based on the Jacobian of the division model for RD, which improves matching resilience compared with explicit RD correction.
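
The sketch below illustrates what such an implicit correction can look like: gradients measured in the distorted image are mapped through the inverse-transpose Jacobian of the division model before computing descriptor orientations. The parameter name xi and the helper functions are assumptions for illustration; they do not reproduce the exact sRDSIFT code.

    import numpy as np

    def division_model_jacobian(x, y, xi):
        # Jacobian of f(p) = p / (1 + xi * |p|^2) at point (x, y), with
        # coordinates taken relative to the distortion center.
        s = 1.0 + xi * (x * x + y * y)
        return np.array([[s - 2 * xi * x * x, -2 * xi * x * y],
                         [-2 * xi * x * y,    s - 2 * xi * y * y]]) / (s * s)

    def corrected_gradient(gx, gy, x, y, xi):
        # Map a gradient measured in the distorted image to the gradient it
        # would have in the undistorted image: since I_d(p) = I_u(f(p)),
        # the chain rule gives grad_u = J^{-T} grad_d.
        J = division_model_jacobian(x, y, xi)
        g_u = np.linalg.solve(J.T, np.array([gx, gy]))
        return np.hypot(g_u[0], g_u[1]), np.arctan2(g_u[1], g_u[0])  # magnitude, orientation

The corrected magnitudes and orientations can then be accumulated into the standard SIFT orientation histograms in place of the raw distorted-image gradients, avoiding an explicit image warp.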

This research was performed within the scope of the ArthroNav project, whose main goal is to process endoscopic video in order to improve the physician's perception and navigation skills inside the knee joint. As such, we have also tested our new RD-resilient method for feature detection and description on endoscopic images of a model of the knee bone. The bone images are almost textureless, although they present some structures that can be robustly detected and matched. In this video we show a 3D reconstruction of the bone obtained by integrating our method with the PMVS software. (Many thanks to Francisco Vasconcelos for the integration of sRDSIFT with PMVS2.)



This work was presented at IEEE ICRA (finalist for the Best Student Paper Award). Afterwards, the method was evaluated using the benchmark procedures of Mikolajczyk et al., available here. We have also demonstrated the method's resilience to calibration error, its keypoint localization precision, and its utility for camera motion estimation. An extended version of the conference paper was accepted for publication in the IEEE Transactions on Robotics.