by Arnau Ramisa, Alex Goldhoorn, David Aldavert, Ricardo Toledo and Ramon Lopez de Mantaras

There is significant research in robotic navigation using methods based on animal navigation techniques. For example, some work has drawn inspiration from biological studies of the navigation techniques of the ant species Cataglyphis. The main advantage of such techniques is that they use simple sensors and are also computationally simple, which makes them applicable to inexpensive robots.

The Average Landmark Vector (ALV) has been suggested as a way to model animal navigation techniques. This model assumes that the animal stores an average landmark vector of a place, where landmarks can be (simple) features like edges. The direction to the destination is then given by the difference between the ALV at the destination and the ALV at the current location.
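The idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes landmarks are given only as bearing angles (which matches the paper's use of the cylindrical angle alone), represents each landmark as a unit vector, averages them, and takes the homing direction from the difference of the two ALVs. The sign convention follows the description above (destination ALV minus current ALV).

```python
import numpy as np

def alv(bearings):
    """Average Landmark Vector: the mean of the unit vectors
    pointing towards each landmark bearing (in radians)."""
    units = np.column_stack((np.cos(bearings), np.sin(bearings)))
    return units.mean(axis=0)

def homing_direction(bearings_current, alv_destination):
    """Homing direction (radians) as the difference between the
    stored destination ALV and the ALV at the current location."""
    v = alv_destination - alv(bearings_current)
    return np.arctan2(v[1], v[0])

# Store the ALV at the destination, then estimate the homing
# direction from the landmark bearings seen at the current place.
home_alv = alv(np.radians([10.0, 130.0, 250.0]))
theta = homing_direction(np.radians([30.0, 150.0, 270.0]), home_alv)
```

Note that only the two components of the stored ALV need to be remembered for each goal location, which is what makes the model plausible for animals with limited memory and attractive for inexpensive robots.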

This approach has been investigated in robot homing research, but mainly using artificial landmarks as features. This is a strong limitation, as it requires setting up the environment beforehand. In our work, the goal is instead to create a simple homing method that does not rely on artificial landmarks. To this end, we propose combining the ALV homing technique with invariant visual feature detectors. This project is funded by the Generalitat de Catalunya's support to groups of excellence, referenced under number 2009-SGR-1434.

Local invariant visual features are points or regions of an image that correspond to local extrema of a function computed over it. Their main appeal is that they remain detectable under various geometric transformations and illumination changes, which makes them suitable for matching and recognition. Moreover, representations built from such local features are robust to partial occlusions and background clutter. Finally, extracting local features from an image reduces the dimensionality of the data to be handled and adds robustness against noise, aliasing and acquisition conditions.

Of the many local feature detection techniques available in the literature, we have used the Maximally Stable Extremal Regions (MSER) and Differences of Gaussians (DoG) invariant feature detectors for the homing method. These feature points possess qualities that make them interesting for the ALV: they are fast to compute (and even faster hardware-based implementations are being developed) yet robust, and many higher-level processes build on the information these regions provide.

Figure 1: ALV Homing results using MSER visual features.

Experiments with the ALV homing method were first run in simulation and, since the results were promising, were then conducted with a real robot in an office environment, namely in three different rooms at the IIIA research centre. Additionally, experiments with artificial landmarks were carried out for comparison.

One important prerequisite of the ALV method is that the panoramic images be aligned to an external compass reference before the homing direction is computed. In our work, this constant-orientation requirement was met by acquiring all test panoramas with the robot facing the same direction, as is common practice in similar work. To apply the ALV method in an actual navigation experiment, a magnetic compass, or another source of global orientation, would be required to align the panoramas.

The locations at which the robot acquired the panoramas were measured manually and used to calculate the ground truth homing directions, which were then used to verify the homing method results. The panoramas were created with the camera on a pan-tilt unit that rotates around a fixed point to capture images from all directions; these images were then combined into the final panorama. Feature points were extracted from these images for use by the homing method, and only the horizontal location of each feature point was used (i.e., the cylindrical angle; neither height nor depth).
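The conversion from a detected feature to the cylindrical angle the ALV needs can be sketched as follows. This is an illustrative assumption, not the authors' code: it takes the horizontal pixel positions of features in a 360-degree panorama of known width, maps them to angles, and applies a compass heading offset so that all panoramas share one external reference, as the method requires.

```python
import numpy as np

def feature_bearings(xs, width, heading=0.0):
    """Map horizontal pixel positions in a 360-degree panorama of
    the given pixel width to cylindrical angles (radians), rotated
    by a compass heading so all panoramas share one external
    reference. Height and depth of the features are discarded."""
    theta = 2.0 * np.pi * np.asarray(xs, dtype=float) / width
    return (theta + heading) % (2.0 * np.pi)

# Features at the left edge and the centre of a 1000-pixel-wide
# panorama correspond to bearings of 0 and pi radians.
bearings = feature_bearings([0, 500], width=1000)
```

Discarding height and depth means every landmark is implicitly treated as lying at the same distance, which is the "equal distance assumption" discussed below.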

The ALV homing method was found to work well; however, it performed worse in rooms whose width and length differ greatly. We attribute this to the way the feature points are projected onto the panorama and to the "equal distance assumption".

We have also evaluated the proposed method on the Bielefeld panorama dataset, where omnidirectional images were acquired with a camera pointed at a parabolic mirror. The advantage of creating a panorama this way is the speed of acquisition, in contrast with our initial method, where images from several angles had to be acquired first and then stitched into a high-resolution panorama. Its main drawback is a significantly lower resolution. To compare the two methods of panorama acquisition, additional experiments using the Bielefeld dataset were conducted.

Comparing the results on the IIIA and Bielefeld datasets, we can see that the ALV homing method performs slightly better on the IIIA panoramas, but the difference is not significant. Given these results, it seems favourable to use an omnidirectional camera for ALV, as the speed of acquisition outweighs the slightly better performance.

Regarding feature types, MSER significantly outperformed DoG in our experiments, which is consistent with previous studies reporting MSER to be one of the most robust feature detectors. In the additional experiments with artificial landmarks, the results were, as expected, significantly better than with the invariant feature points, since artificial landmarks are less affected by occlusions and viewpoint changes. However, using the MSER detector added only about seven degrees of orientation error, which seems low enough to justify the presented homing method, as it can be used in unprepared environments.
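An orientation error such as the seven degrees quoted above can be computed as the smallest angular difference between the estimated homing direction and the ground-truth direction. The helper below is a generic sketch of that comparison, not code from the experiments:

```python
import numpy as np

def angular_error_deg(estimated, ground_truth):
    """Smallest absolute difference between two directions given
    in radians, returned in degrees and wrapped to [0, 180]."""
    d = np.degrees(estimated - ground_truth) % 360.0
    return min(d, 360.0 - d)

# A 355-degree estimate against a 5-degree ground truth is only
# 10 degrees off once the wrap-around at 360 is accounted for.
err = angular_error_deg(np.radians(355.0), np.radians(5.0))
```

Wrapping matters here: a naive absolute difference would report 350 degrees for two nearly identical directions on either side of north.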

Links:
Bielefeld panorama database: http://www.ti.uni-bielefeld.de/html/research/databases/
IIIA ALV panorama database: http://www.iiia.csic.es/~aramisa/datasets/iiia_alv.html

Please contact:
Arnau Ramisa
IIIA - CSIC / SpaRCIM, Campus Universitat Autonoma de Barcelona, Spain
Tel: +34 93 580 95 70
