“Direct Vision for Navigation”
Wednesday, October 31, 2018
Building 3 Auditorium - 11:00 AM
(Cookies at 10:30 AM)
Recent advances in robotics platforms and autonomous navigation have rekindled the robotics community's interest in direct, active vision solutions to visual navigation tasks. This stands in contrast to current work in motion analysis, which seeks to derive a 3D reconstruction of the scene — a representation of general applicability, but one that requires extensive computation and is challenging under fast motion and changing lighting conditions.
I will first discuss theoretical findings on active vision solutions. Then I will describe our recent work using event-based vision sensors for active vision. The unique properties of this sensor — its high temporal resolution, superior sensitivity to light, low latency, and high compression of the dynamic scene — make it interesting for new applications. We have been studying the space of events for efficient solutions to 3D motion analysis, object segmentation, and tracking; have developed algorithms using both classical and neural network approaches; and hope to explore possible applications for space science.
Cornelia Fermüller is an associate research scientist at UMIACS (University of Maryland Institute for Advanced Computer Studies). Her research is in the areas of computer vision, human vision, and robotics, and she has published more than 40 journal articles and 120 articles in refereed conferences and books. She has studied multiple view geometry and statistics, and her work includes view-invariant texture descriptors, 3D motion and shape estimation, image segmentation, and computational explanations and predictions of optical illusions. Her recent work has focused on two topics: the integration of perception, action, and high-level reasoning to develop cognitive robots that can understand and learn human manipulation actions, and motion processing with event-based cameras.
IS&T Colloquium Committee Host: Nargess Memarsadeghi
Sign language interpreter upon request: 301-286-7348