Our Approach

Identifying Walkable Surfaces Using Inexpensive Sensors

We developed a working platform for safe indoor mobility using a Kinect and an IMU. We integrate raw Kinect depth frames over short time intervals to maintain a live 3D model of the user's vicinity. From this model we classify the floor and the obstacles in the user's path, and we present this information through stereo audio by spatially mapping sounds to the corresponding obstacles and hazards.
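To make the floor-classification step concrete, here is a minimal sketch, assuming depth frames already converted to a metric point cloud: a RANSAC plane fit finds the dominant ground plane, and points near that plane are labeled walkable. The function names, thresholds, and synthetic data are illustrative, not our production pipeline.

    # Minimal sketch: fit a ground plane with RANSAC, then label points
    # as walkable (near the plane) or obstacle (off the plane).
    # All names and thresholds are illustrative assumptions.
    import numpy as np

    def fit_floor_ransac(points, iters=200, dist_thresh=0.03, rng=None):
        """points: (N, 3) array in meters. Returns (n, d) with n.p + d = 0."""
        rng = rng or np.random.default_rng(0)
        best_count, best_plane = 0, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue  # degenerate (collinear) sample
            n = n / norm
            d = -n.dot(sample[0])
            count = int((np.abs(points @ n + d) < dist_thresh).sum())
            if count > best_count:
                best_count, best_plane = count, (n, d)
        return best_plane

    def classify_walkable(points, plane, floor_tol=0.05):
        """True = walkable (on the floor plane), False = obstacle."""
        n, d = plane
        return np.abs(points @ n + d) < floor_tol

    # Synthetic example: a flat floor plus a box-shaped obstacle.
    floor = np.column_stack([np.random.uniform(-2, 2, 5000),
                             np.random.uniform(0, 4, 5000),
                             np.zeros(5000)])
    box = floor[:500] + np.array([0.0, 0.0, 0.4])
    cloud = np.vstack([floor, box])
    plane = fit_floor_ransac(cloud)
    labels = classify_walkable(cloud, plane)  # green/red regions in the figure

In the live system, the IMU's gravity estimate could additionally constrain which candidate planes are accepted as floor, rather than relying on the fit alone.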



Depiction of our safe mobility and navigation framework based on a Kinect depth camera and an IMU. Left: an RGB image. Middle: a depth image captured by the Kinect RGBD camera. Right: classification of the depth cloud into safe-to-walk (green) and not-safe-to-walk (red) regions.



The video sequence can be downloaded here.

Stereo Sensor for Outdoor Use

Using the same framework, a stereo sensor paired with an IMU can provide the same functionality outdoors. Safe mobility outdoors is more challenging, however: the system must detect safe walking regions, drop-offs, ascents, trip hazards, obstacles, head-level hazards, boundaries, and other pedestrians, bicycles, and vehicles.
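As a sketch of what a stereo front end for this might look like, the snippet below (assuming a rectified grayscale pair and illustrative camera parameters) computes a disparity map with OpenCV's block matcher, converts it to metric depth, and flags candidate drop-offs as sharp depth discontinuities between vertically adjacent pixels:

    # Hedged sketch of a stereo front end; fx, baseline, and thresholds
    # are illustrative assumptions, not calibrated values.
    import cv2
    import numpy as np

    FX = 700.0        # focal length in pixels (assumed)
    BASELINE = 0.12   # stereo baseline in meters (assumed)

    def depth_from_stereo(left_gray, right_gray):
        """Rectified 8-bit grayscale pair -> depth map in meters."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # StereoBM returns fixed-point disparity scaled by 16.
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan   # invalid matches
        return FX * BASELINE / disparity

    def dropoff_mask(depth, jump_thresh=0.5):
        # A drop-off edge appears as a sharp depth discontinuity between
        # vertically adjacent ground pixels.
        dz = np.abs(np.diff(depth, axis=0))
        mask = np.zeros_like(depth, dtype=bool)
        mask[1:, :] = dz > jump_thresh
        return mask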

Assistive Tango

Our team was recently provided with one of Google's Project Tango devices, which we plan to incorporate into the proposed effort. The sensors on a Tango device are highly compatible with our prototype system, and we are enthusiastic about bringing our work into everyday life for millions of blind and visually impaired (BVI) users, for example by providing precise navigation cues inside buildings and subway stations.


Details of Google's Project Tango can be found here.

Private, Timely Information Delivery

We are exploring methods for private, timely, and dense information delivery. For this purpose we will use a rapidly refreshable tactile device: it has 360 binary, piezoelectrically actuated pins arranged in a 24 × 15 grid and can be refreshed at 30 Hz.

This matrix of active elements can be treated as a bit-mapped display onto which environmental information is rendered for the user's consumption. Safe walkable terrain can be rendered as a homogeneous region, interrupted by a distinct texture for ascents, descents, or obstacles. An approaching person can be rendered as a moving patch or sprite with a distinctive texture. Decoded environmental text could be rendered as Braille symbols (for users who read Braille), directly as letterforms, and/or voiced through an audio channel.

Our COTS refreshable Braille display also serves as a multi-touch input surface, enabling interpretation of selection, panning, and zooming gestures. To support real-time delivery of complex information, we propose to convey information multimodally, through both tactile and aural channels.
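The sketch below illustrates this bit-mapped rendering idea on a boolean frame buffer with one bit per pin. The 15-row by 24-column orientation and the device refresh call are assumptions for illustration, not the real driver API:

    # Illustrative sketch of bit-mapped tactile rendering: flat terrain,
    # a raised obstacle patch, and a moving textured sprite for a person.
    import numpy as np

    ROWS, COLS, REFRESH_HZ = 15, 24, 30   # 360 pins, 30 Hz (per the device spec)

    def blank_frame():
        return np.zeros((ROWS, COLS), dtype=bool)  # all pins down = safe terrain

    def draw_obstacle(frame, r, c, h, w):
        frame[r:r + h, c:c + w] = True             # solid raised patch = hazard

    def draw_person_sprite(frame, r, c):
        # Checkerboard texture distinguishes a moving person from static hazards.
        sprite = (np.indices((3, 3)).sum(axis=0) % 2).astype(bool)
        frame[r:r + 3, c:c + 3] = sprite

    base = blank_frame()
    draw_obstacle(base, 5, 10, 4, 3)
    for step in range(5):                  # person approaching across frames
        frame = base.copy()
        draw_person_sprite(frame, 2, 2 + step)
        # device.refresh(frame)  # hypothetical driver call, one bitmap per 1/30 s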