sensory substitution; visual prosthesis for blind people; space perception; sound spatialisation; stereo vision; multi-modal interaction; computer vision; image processing; audio processing; sonification
(2012), Real-time image registration of RGB webcams and colorless 3D time-of-flight cameras, in Proc. of the European Conference on Computer Vision, Florence, Italy, October 2012.
(2012), Spatial awareness and intelligibility for the blind: audio-touch interfaces, in Proceedings of the International Conference on Human-Computer Interaction, Austin, Texas, U.S., 5-10 May 2012.
(2011), 3D scene accessibility for the blind via auditory-multitouch interfaces, in AEGIS Workshop and International Conference, Brussels, Belgium, 28-30 November 2011.
(2011), A virtual ceiling-mounted depth-camera using orthographic Kinect, in Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6-13 November 2011.
(2011), Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired, in Proceedings of ASSETS '11, the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, Scotland, 24-26 October 2011.
(2011), Toward local and global perception modules for vision substitution, Neurocomputing, 74(8), 1182-1190.
(2010), Color-audio encoding interface for visual substitution: See ColOr Matlab-based demo, in Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, Florida, U.S., 25-27 October 2010.
(2010), Detecting objects and obstacles for visually impaired individuals using visual saliency, in Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, Florida, U.S., 25-27 October 2010.
(2010), Sonification of color and depth in a mobility aid for blind people, in Proceedings of the 16th International Conference on Auditory Display, Washington, DC, 9-15 June 2010.
(2011), Multisource sonification for visual substitution in an auditory memory game: one, or two fingers?, in Proceedings of the International Conference on Auditory Display, Budapest, Hungary, 20-24 June 2011.
Mobility is more than simple obstacle detection: it requires perception and understanding of the whole nearby environment. Mobility therefore encompasses three main tasks. The first is to understand the global geometry of the near space; the second is simply to walk and avoid obstacles; and the third is to walk toward a specific goal, for instance a particular door or a shop entrance. The purpose of this project is to create a mobility assistance device for visually impaired people, as a step toward their independent mobility. The future prototype will be based on inexpensive components such as webcams and portable touchpads.

This project belongs to the more general context of mobility for visually impaired individuals and builds on the previous See ColOr project. Together with the “ABAplan” project of Prof. Lazeyras and the “Interface” project of Prof. Malandain, it will allow us to develop a solid centre of competence in rehabilitation through vision substitution. A drawback of the current See ColOr prototype is that the user's attention focuses on only a small portion of the captured scene, leading to a tunnel-vision phenomenon. Building on our previous results, the new system aims at overcoming this shortcoming: it will provide information on near-space geometry and will assist in the detection of static obstacles that a cane cannot detect, such as overhanging or protruding objects. The new system architecture will be based on so-called local and global modules.

The local module corresponds to the already existing perceptual mode of the previous See ColOr project. Basically, it provides the user with an auditory representation of a row of 25 points of a captured image. The key idea is to represent a pixel of the image as a sound source located at a particular azimuth angle.
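As an illustration, the pixel-to-azimuth idea of the local module can be sketched as follows. The 90° field of view, the linear spacing, and the function name are assumptions made for the sketch, not parameters of the actual prototype:

```python
def row_to_azimuths(n_points=25, fov_deg=90.0):
    """Map each of n_points sonified pixels in a row to an azimuth angle.

    Hypothetical geometry: the captured row spans fov_deg degrees
    horizontally, centred straight ahead (0 degrees); negative angles
    lie to the listener's left, positive angles to the right.
    """
    step = fov_deg / (n_points - 1)          # angular gap between points
    return [-fov_deg / 2 + i * step for i in range(n_points)]

azimuths = row_to_azimuths()
# the leftmost pixel maps to -45 degrees, the centre pixel to 0 degrees,
# and the rightmost pixel to +45 degrees
```

Each angle would then be fed to a spatialisation engine so that the corresponding sound appears to come from that direction.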
Moreover, each emitted sound is assigned to a musical instrument depending on the colour of the pixel, since colour helps group the pixels of a mono-coloured object into a coherent entity. Several new functionalities will be added to this module. For instance, in addition to the sonification of colour, depth will be sonified at the user's request through sound duration and volume. Because of the limited capacity of the auditory channel, when depth is provided only a few dominant colours of the sonified row will be taken into account. On the basis of this principle, the current prototype already allows blindfolded users to follow lines painted on the ground (see the video at http://www.youtube.com/guidobologna).

The global module will encompass several sub-modules. The first is an alerting system for the detection of obstacles in the trajectory of the user, based on saliency maps that incorporate the depth gradient as a feature. The sonification can be performed by a voice warning or by an earcon. The second sub-module is the sonification of the depth map of the captured image. It will allow the user to inspect the depth map, rendered on a touchpad, in order to understand the geometry of the current scene and its obstacles. Typically, the finger contact point will play the role of the eye: it will trigger a particular sound depending on the presence or absence of an obstacle at that point. By means of this map, a visually impaired person will be able to walk toward a specific location.

Our system for visual substitution puts forward several novel aspects. First, through its architecture of local and global modules, See ColOr_II will imitate the visual system by providing the user with essential cues of vision (local and global) through the auditory channel. Secondly, we will introduce a multi-touchpad in the global module, so that a video image can be examined with several fingers.
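A minimal sketch of two of the mappings described above: depth rendered as sound duration and volume, and a finger contact on the touchpad triggering a sound that depends on obstacle presence. All ranges, thresholds, and names (max_depth_m, obstacle_thresh_m, the sound labels) are illustrative assumptions, not the prototype's actual values:

```python
def depth_to_sound(depth_m, max_depth_m=5.0, min_dur_s=0.1, max_dur_s=0.5):
    """Map a depth value in metres to (duration, volume).

    Nearer points yield longer, louder sounds; depth is clipped to
    [0, max_depth_m]. All ranges here are hypothetical.
    """
    d = max(0.0, min(depth_m, max_depth_m))
    closeness = 1.0 - d / max_depth_m       # 1.0 = touching, 0.0 = far away
    duration = min_dur_s + closeness * (max_dur_s - min_dur_s)
    volume = closeness                      # linear gain in [0, 1]
    return duration, volume


def touch_to_sound(depth_map, x, y, obstacle_thresh_m=1.5):
    """Pick the sound for a finger contact at touchpad coordinate (x, y).

    depth_map is a row-major 2D list of depths in metres; a point
    closer than obstacle_thresh_m is treated as an obstacle. The
    threshold and the sound labels are illustrative choices.
    """
    return ("obstacle_earcon" if depth_map[y][x] < obstacle_thresh_m
            else "free_space_tone")
```

In such a scheme, touching an obstacle region on the pad would play a short earcon, while free space would play a neutral tone, letting the user sweep the scene geometry with one or several fingers.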
Finally, another important aspect is the use of inexpensive, non-invasive components: wireless webcams will be integrated into sunglasses, a touchpad will fit in a pocket, and the whole prototype will run on an ultra-portable computer. Inexpensive components and non-invasive devices are likely to be adopted by the visually impaired community.

See ColOr_II will be conducted in partnership with the “Association for the wellness of blind and partially-sighted people”, Geneva, which will in particular provide assistance in establishing precise user requirements, identifying volunteers for testing, and evaluating the system.