
VISION-BASED NAVIGATION

Vision is often estimated to account for nearly 80% of human perception and cognition. Endowing autonomous systems with the appropriate visual sensors allows them to geometrically and semantically parse their environments, building navigable representations.
This, in turn, lets autonomous systems operate in both previously seen and unseen environments.
This can be particularly challenging in environments whose appearance changes frequently. Such changes may be artifacts of specular reflections or shifting illumination, of changing weather conditions, or of the physical evolution of the environment's structure; they can be viewed as short-, medium-, and long-term appearance changes, respectively. In such scenarios, a vision system's ability to exploit the geometry and texture of the environment, and thereby remain invariant to these changes, is essential for robust autonomy. Previously, I worked on developing robust visual representations that can be adapted online under rapidly changing appearance [Reference 1] or that cope with appearance changes due to weather [Reference 2].
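As a toy illustration of why raw appearance cues are brittle, the minimal sketch below matches local ORB features between two views of the same place under a lighting change and reports the fraction of keypoints that survive. It assumes OpenCV and two hypothetical images, day.jpg and night.jpg; it is a generic baseline for probing appearance robustness, not the method of [Reference 1].

```python
# Minimal sketch: measuring how well local features survive an
# appearance change (e.g., day vs. night views of the same place).
# Assumes OpenCV; the image file names are hypothetical stand-ins.
import cv2

def match_across_appearance(path_a: str, path_b: str) -> float:
    """Return the fraction of keypoints with a good match across views."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    if img_a is None or img_b is None:
        raise FileNotFoundError("could not read one of the input images")

    orb = cv2.ORB_create(nfeatures=1000)
    kps_a, des_a = orb.detectAndCompute(img_a, None)
    kps_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0

    # Hamming distance is the appropriate metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test rejects ambiguous matches.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good) / max(len(kps_a), 1)

# A low score under a strong lighting change is exactly the failure
# mode that appearance-robust representations aim to avoid.
score = match_across_appearance("day.jpg", "night.jpg")
print(f"surviving match ratio: {score:.2f}")
```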
More recently, I have been interested in lifelong multimodal semantic mapping for slowly evolving environments that are also plagued by distractors and dynamic objects [Reference 3]. Construction environments pose a unique challenge for current mapping algorithms: they are sparse in visual features, full of distractors and dynamic objects, and their appearance evolves over time.
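One common ingredient when mapping among dynamic objects is to discard measurements that fall on semantically dynamic classes before they enter the long-term map. The sketch below illustrates this idea with a hypothetical per-pixel segmentation mask; the class ids in DYNAMIC_IDS and the synthetic mask are assumptions for the example, and this is not the pipeline of [Reference 3].

```python
# Minimal sketch: rejecting features on dynamic objects using a
# per-pixel semantic mask before they are added to a long-term map.
# The mask, class ids, and keypoints here are hypothetical stand-ins
# for the output of a semantic segmentation model.
import numpy as np

DYNAMIC_IDS = {11, 12, 13}  # e.g., person, vehicle, machinery (assumed ids)

def filter_static_keypoints(keypoints: np.ndarray,
                            semantic_mask: np.ndarray) -> np.ndarray:
    """Keep only keypoints whose pixel does not belong to a dynamic class.

    keypoints: (N, 2) array of (x, y) pixel coordinates.
    semantic_mask: (H, W) array of integer class ids per pixel.
    """
    xs = keypoints[:, 0].astype(int).clip(0, semantic_mask.shape[1] - 1)
    ys = keypoints[:, 1].astype(int).clip(0, semantic_mask.shape[0] - 1)
    labels = semantic_mask[ys, xs]
    static = ~np.isin(labels, list(DYNAMIC_IDS))
    return keypoints[static]

# Synthetic example: a 100x100 scene where a "person" (class 11)
# occupies the left half; keypoints falling there are discarded.
mask = np.zeros((100, 100), dtype=int)
mask[:, :50] = 11
kps = np.array([[10, 20], [80, 20], [45, 90], [60, 5]])
print(filter_static_keypoints(kps, mask))  # keeps only right-half points
```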
