The DAVIS camera developed by the Sensors group provides sparse, low-latency, high-dynamic-range events that signal brightness changes, as well as global-shutter synchronous active pixel sensor (APS) image frames that can be triggered and captured on demand. It also integrates an inertial measurement unit (IMU) that provides vestibular-like rotation and acceleration sensing. These characteristics make it potentially useful for mobile robotics, but the current USB 2.0 camera is too bulky and offers only limited on-board computing. Smartphone development has made powerful application processors (APs) widely available. In the first part of my PhD project, I will miniaturize the DAVIS camera and integrate it with a powerful AP or other processor (e.g. Adapteva), probably also replacing the existing USB 2.0 interface with a direct connection from the DAVIS to the AP using MIPI (the Mobile Industry Processor Interface).
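To make the event output concrete, the following minimal sketch shows one common way to represent the DAVIS event stream: each event is a (timestamp, x, y, polarity) tuple, and events can be accumulated into a 2D image for visualization or downstream processing. The function name and the DAVIS240 resolution of 240x180 pixels are illustrative assumptions, not a specific API.

```python
# Illustrative sketch of a DVS event stream (not an actual DAVIS API).
# Each event: (timestamp in seconds, x address, y address, polarity),
# where polarity True = brightness increase, False = decrease.
WIDTH, HEIGHT = 240, 180  # assumed DAVIS240 pixel array size

def accumulate(events, width=WIDTH, height=HEIGHT):
    """Accumulate signed event polarities into a 2D count image."""
    img = [[0] * width for _ in range(height)]
    for t, x, y, pol in events:
        img[y][x] += 1 if pol else -1
    return img

# Example: three events, two ON at the same pixel, one OFF elsewhere.
events = [(0.000, 5, 3, True), (0.001, 5, 3, True), (0.002, 7, 2, False)]
img = accumulate(events)
# img[3][5] is now 2 and img[2][7] is -1; all other pixels stay 0.
```

Because only changing pixels emit events, such an accumulation touches far fewer entries than a full frame readout, which is the source of the sparsity advantage mentioned above.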
This miniaturized camera will be an ideal platform for Micro Air Vehicles (MAVs), which are starting to be used in many commercial applications. In collaboration with the Robotics and Perception Group of Davide Scaramuzza, I will next use the new camera to develop and test algorithms for visual odometry (VO) and simultaneous localization and mapping (SLAM). Conventional frame-based approaches are computationally expensive because keypoint extraction and matching must cope with large shifts between successive frames, and they are therefore also limited during high-speed maneuvers. By using the DAVIS, we aim to demonstrate order-of-magnitude improvements in computational cost and speed.
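The advantage over frame-based matching can be sketched as follows: because events arrive continuously, a tracked feature can be nudged by each nearby event instead of being re-matched across a large inter-frame displacement, so the work scales with the event rate rather than the frame resolution. This is an illustrative toy, not the actual tracking algorithm we will develop; the search radius and smoothing factor are assumptions.

```python
# Toy sketch of per-event feature tracking: each event updates only the
# nearest tracked feature within a small radius (assumed parameters).
def update_tracks(tracks, event, radius=3.0, alpha=0.1):
    """Pull the nearest track slightly toward the event's pixel address."""
    t, x, y, pol = event
    best, best_d2 = None, radius * radius
    for tr in tracks:
        d2 = (tr["x"] - x) ** 2 + (tr["y"] - y) ** 2
        if d2 <= best_d2:
            best, best_d2 = tr, d2
    if best is not None:
        best["x"] += alpha * (x - best["x"])  # exponential smoothing
        best["y"] += alpha * (y - best["y"])
        best["t"] = t
    return tracks

# Example: a feature at (10, 10) drifts toward events arriving near (12, 10).
tracks = [{"x": 10.0, "y": 10.0, "t": 0.0}]
for i in range(20):
    update_tracks(tracks, (0.001 * i, 12, 10, True))
# tracks[0]["x"] has moved from 10.0 toward 12.0.
```

Each incoming event costs a constant amount of work per track, whereas frame-based matching must extract and compare keypoints over the whole image every frame regardless of how little has changed.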