Out of the Box: Embodied Navigation in the Real World. (arXiv:2105.05873v1 [cs.CV])

The research field of Embodied AI has witnessed substantial progress in visual navigation and exploration thanks to powerful simulation platforms and the availability of photorealistic 3D data of indoor environments. These
two factors have opened the doors to a new generation of intelligent agents
capable of achieving nearly perfect PointGoal Navigation. However, such
architectures are commonly trained with millions, if not billions, of frames
and tested in simulation. Alongside the enthusiasm they generate, these results raise a question: how many researchers will actually benefit from these advances? In
this work, we detail how to transfer the knowledge acquired in simulation into
the real world. To that end, we describe the architectural discrepancies that hinder the Sim2Real transfer of models trained on the Habitat simulator and propose a novel solution tailored to deployment in real-world scenarios. We then deploy our models on a LoCoBot, a Low-Cost Robot
equipped with a single Intel RealSense camera. Unlike previous work, our testing scene is unavailable to the agent in simulation, and the environment is also inaccessible beforehand, so the agent cannot rely on scene-specific semantic priors. In this way, we reproduce a setting in which a research group (potentially from other fields) needs to employ the agent's visual navigation capabilities as-a-Service. Our experiments indicate that satisfactory results can be achieved when deploying the trained model in the real world.
Our code and models are available at https://github.com/aimagelab/LoCoNav.
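
To make the setting concrete, below is a minimal sketch of what a real-world PointGoal navigation loop could look like once a Habitat-trained policy is moved onto a robot. It is illustrative only: the `policy` and `robot` interfaces (`get_rgbd_frame`, `odometry_xy`, `odometry_yaw`, `execute`) are hypothetical placeholders, not the authors' actual code; see the repository linked above for their implementation.

```python
import numpy as np

STOP, FORWARD, LEFT, RIGHT = range(4)  # typical discrete PointGoal action space

def navigate(policy, robot, goal_xy, max_steps=500, success_radius=0.2):
    """Drive the robot toward goal_xy (meters, in the robot's start frame)."""
    hidden = policy.init_hidden()            # recurrent state of the trained policy
    goal_xy = np.asarray(goal_xy, dtype=float)
    for _ in range(max_steps):
        rgb, depth = robot.get_rgbd_frame()  # e.g. from the RealSense camera
        # Recompute the relative goal as (distance, heading) from odometry,
        # mirroring the pointgoal-with-GPS-compass input used in simulation.
        dx, dy = goal_xy - robot.odometry_xy()
        rho = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx) - robot.odometry_yaw()
        action, hidden = policy.act(rgb, depth, (rho, phi), hidden)
        if action == STOP:
            return rho <= success_radius     # success iff STOP is called near the goal
        robot.execute(action)                # e.g. step forward or turn in place
    return False                             # step budget exhausted
```

The STOP-based success check mirrors the standard PointNav protocol, in which an episode counts as successful only if the agent itself declares arrival within a fixed radius of the goal; the 0.2 m radius here is an illustrative default, not the paper's evaluation threshold.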

Source: https://arxiv.org/abs/2105.05873
