Project

See ColOr_II

Applicant Bologna Guido
Number 127334
Funding instrument Project funding (Div. I-III)
Research institution Centre Universitaire d'Informatique Université de Genève
Higher education institution University of Geneva - GE
Main discipline Computer Science
Start/End 01.06.2010 - 31.05.2012
Approved amount 117'140.00

Keywords (12)

sensory substitution; visual prosthesis for blind people; space perception; sound spatialisation; stereo-vision; multi-modal interaction; computer vision; image processing; audio processing; Sonification; Visual prosthesis; Blind people

Lay Summary (French)

Short summary. The See ColOr_II project represents the colours of images with musical instruments. The goal is to provide blind people with a better representation of their environment.

Summary. This project aims to create an auditory representation of the environment for blind people. Concretely, the person wears a camera with two lenses that makes it possible to determine depth. A horizontal line of 25 points in the image is transformed into a sound representation, following a code that associates sounds with colours. Hue, for example, is represented by an instrument (piano for blue, flute for green, pizzicato violin for yellow, etc.), while luminosity is represented by the pitch of the note. With training, one thus learns to associate a given sound with a given colour. Videos illustrating some experiments are available at: http://www.youtube.com/guidobologna. The system transmits these sounds to the user as a succession of very brief notes (at most 3/10 of a second). The notes are spatialised, meaning that the sound associated with an object located to the left is perceived as coming from that direction. The distance of the coloured points is conveyed by the length of the corresponding sounds.

Goal. The goal of the See ColOr_II project is to give blind people a better perception of their environment so that they can move around. The project starts from a mode of perception that blind people use spontaneously: many of them orient themselves by sounds and their echoes. See ColOr_II aims to let them locate typical coloured objects (signs, letter boxes, bus stops, cars, trees, buildings, coloured lines, etc.) through a sound encoding. This device should give visually impaired people greater autonomy, allowing them to move more safely indoors or outdoors by locating, for example, doors, pedestrian crossings or traffic lights.

Significance. The device to be developed is a mobility aid. It is not meant to replace devices such as the cane, but rather to complement them. One can thus imagine a blind person walking a route by following a coloured line with the device, or using it to identify landmarks such as coloured walls, trees or doors. In the longer term, the current device could be miniaturised onto glasses and a smartphone, which would represent a very inexpensive alternative to an invasive approach such as retinal implants.
Last update: 21.02.2013

Responsible applicant and further applicants

Employees

Publications

Publication
Real-time image registration of RGB webcams and colorless 3D time-of-flight cameras
(2012), Real-time image registration of RGB webcams and colorless 3D time-of-flight cameras, in Proc. of the European Conference on Computer Vision, Florence, Italy, October 2012.
Spatial awareness and intelligibility for the blind: audio-touch interfaces
(2012), Spatial awareness and intelligibility for the blind: audio-touch interfaces, in Proceedings of the International Conference on Human-Computer Interaction, Austin, Texas, U.S., 5-10 May 2012.
3D Scene accessibility for the blind via auditory-multitouch interfaces
(2011), 3D Scene accessibility for the blind via auditory-multitouch interfaces, in AEGIS Workshop and International Conference, Brussels, Belgium, 28-30 November 2011.
A virtual ceiling mounted depth-camera using orthographic Kinect
(2011), A virtual ceiling mounted depth-camera using orthographic Kinect, in Proceedings of the International Conference on Computer Vision, Barcelona, 6-13 November 2011.
Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired
(2011), Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired, in ASSETS '11: Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, Scotland, October 24-26, 2011.
Toward local and global perception modules for vision substitution
(2011), Toward local and global perception modules for vision substitution, in Neurocomputing, 74(8), 1182-1190.
Color-audio encoding interface for visual substitution: See ColOr Matlab-based demo
(2010), Color-audio encoding interface for visual substitution: See ColOr Matlab-based demo, in Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, Florida, US, October 25-27, 2010.
Detecting objects and obstacles for visually impaired individuals using visual saliency
(2010), Detecting objects and obstacles for visually impaired individuals using visual saliency, in Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, Florida, US, October 25-27, 2010.
Sonification of color and depth in a mobility aid for blind people
(2010), Sonification of color and depth in a mobility aid for blind people, in Proceedings of the 16th International Conference on Auditory Display, Washington DC, June 9-15, 2010.
Multisource sonification for visual substitution in an auditory memory game: one, or two fingers?
(2011), Multisource sonification for visual substitution in an auditory memory game: one, or two fingers?, in Proceedings of the International Conference on Auditory Display, Budapest, Hungary, June 20-24, 2011.

Collaboration

Group / Person Country
Forms of collaboration
Europe - NOE Similar Stateless ()
- in-depth/continued exchange of approaches, methods or results
ABA Switzerland (Europe)
- in-depth/continued exchange of approaches, methods or results
ISIR/UPMC France (Europe)
- in-depth/continued exchange of approaches, methods or results

Associated projects

Number Title Start Funding instrument
140392 See ColOr_II (Seeing Colours with an Orchestra II) 01.06.2012 Projektförderung (Abt. I-III)

Abstract

Mobility is more than simple obstacle detection: it requires perception and understanding of the whole nearby environment. Mobility therefore encompasses three main tasks. The first is to understand the global geometry of the near space; the second is simply to walk and avoid obstacles; and the third is to walk toward a specific goal, for instance a particular door or a shop entrance. The purpose of this project is to create a mobility assistance device for visually impaired people, in order to take a step further toward their independent mobility. The future prototype will be based on cheap components such as webcams and portable touchpads.

This project belongs to a more general context related to the mobility of visually impaired individuals and builds on the previous See ColOr project. Together with the “ABAplan” project of Prof. Lazeyras and the “Interface” project of Prof. Malandain, it will allow us to develop a solid centre of competence in the context of rehabilitation for vision substitution. A drawback of the current See ColOr prototype is that the user's attention focuses on only a small portion of the captured scene, leading to a tunnel-vision phenomenon. Based on our previous results, the new system aims at overcoming this shortcoming. It will provide information about the near-space geometry and will assist in the detection of static obstacles that cannot be detected by a cane, such as overhanging or protruding ones. The new system architecture will be based on so-called local and global modules.

The local module corresponds to the already existing perceptual mode of the previous See ColOr project. Basically, it provides the user with the auditory representation of a row containing 25 points of a captured image. The key idea is to represent a pixel of the image as a sound source located at a particular azimuth angle. Moreover, each emitted sound is assigned to a musical instrument depending on the colour of the pixel, as colour helps to group the pixels of a mono-coloured object into a coherent entity. Several new functionalities will be added to this module. For instance, in addition to the sonification of colour, depth will be sonified at the user's request by sound duration and volume. Because of the limited capacity of the auditory channel, only a few dominant colours of the sonified row will be taken into account when depth is provided. On the basis of this principle, the current prototype already allows blindfolded users to follow lines painted on the ground (see the videos at http://www.youtube.com/guidobologna).

The global module will encompass several sub-modules. The first is an alerting system for the detection of obstacles in the trajectory of the user, based on saliency maps that incorporate the depth gradient as a feature; the alert can be rendered as a voice warning or as an earcon. The second sub-module is the sonification of the depth map of the captured image. It will allow the user to inspect this map on a touchpad in order to understand the geometry of the current scene and its obstacles. Typically, the finger contact point will play the role of the eye and will trigger a particular sound, depending on the presence or absence of an obstacle at that coordinate. By means of this map, a visually impaired person will be able to walk toward a specific location.
Our system for visual substitution puts forward several novel aspects. First, through its architecture of local and global modules, See ColOr_II will imitate the visual system by providing the user with essential visual cues (local and global) through the auditory channel. Second, we will introduce a multi-touch touchpad in the global module, in order to examine a video image with several fingers. Finally, another important aspect is the use of cheap, non-invasive components: wireless webcams will be integrated into sunglasses, a touchpad will lie in a pocket, and the whole prototype will run on an ultra-portable computer. Cheap components and non-invasive devices are more likely to be adopted by the visually impaired community.

See ColOr_II will be conducted in partnership with the “Association for the wellness of blind and partially-sighted people” in Geneva, which will in particular provide assistance with establishing precise user requirements, identifying volunteers for testing, and with the evaluation.