
Situated Vision to Perceive Object Shape and Affordances

English title Situated Vision to Perceive Object Shape and Affordances
Applicant Caputo Barbara
Number 131187
Funding scheme Project funding (Div. I-III)
Research institution IDIAP Institut de Recherche
Institution of higher education Idiap Research Institute - IDIAP
Main discipline Information Technology
Start/End 01.11.2011 - 30.11.2015
Approved amount 174'738.00

Lay Summary (English)

The objective is to provide models and methods to detect, recognise, and categorise the 3D shape of everyday objects and their affordances in homes. The planned innovations are: (1) We propose the Situated Vision paradigm and develop 3D visual perception capabilities from the view of a robot, its task, and the environment it operates in. (2) We show the generality of the Situated Vision approach by evaluating the performance on different robots at the project partners and in different environments. The Situated Vision approach is inspired by recent work in cognitive science, neuroscience and interdisciplinary work in EU projects: it fuses qualitative and quantitative cues to extract and group 3D shape elements and relate them to affordance categories.
Cognitive mechanisms such as situation-based visual attention and task-oriented visual search let the robot execute primitive actions to exploit the perceived affordances. Perception integrates quantitative and qualitative shape information from multiple 2D and 3D measurements. The analysis of the shapes is used to find instances of semantic 3D concepts, such as providing support to objects or enclosing space, which in turn serve to identify semantic entities such as table surfaces, cupboards, closets and drawers, and to learn which perceived affordances belong to which object category. The system will be tested in three typical home scenarios, with clutter in the form of five or more objects around the target object. Four renowned research teams combine their experience to show that combining attention (Uni Bonn), categorisation (RWTH Aachen), shape perception (TU Wien) and learning (IDIAP) will bring about a major step forward in cognitive systems for future deployment in service and personal robots.
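The mapping from grouped 3D shape elements to affordance concepts described above can be illustrated with a toy sketch. Everything here (the `ShapeElement` type, the field names, the threshold values, and the two rules) is a hypothetical illustration of the general idea, not the project's actual method:

```python
# Illustrative sketch only: a toy mapping from perceived 3D shape elements
# to qualitative affordance concepts (e.g. "supports objects", "encloses
# space"). All names and rules are hypothetical assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ShapeElement:
    kind: str          # e.g. "horizontal_plane", "vertical_plane", "box_cavity"
    height_m: float    # height of the element above the floor, in metres
    area_m2: float     # surface area of the element, in square metres

def affordances(elem: ShapeElement) -> set:
    """Relate one grouped 3D shape element to affordance concepts."""
    found = set()
    # Rule 1: a sufficiently large horizontal surface at a reachable height
    # (roughly table height) can provide support to objects.
    if (elem.kind == "horizontal_plane"
            and 0.4 <= elem.height_m <= 1.2
            and elem.area_m2 > 0.1):
        found.add("supports_objects")
    # Rule 2: a box-like cavity (e.g. inside a cupboard or drawer)
    # encloses space.
    if elem.kind == "box_cavity":
        found.add("encloses_space")
    return found

table_top = ShapeElement(kind="horizontal_plane", height_m=0.75, area_m2=1.2)
print(affordances(table_top))  # {'supports_objects'}
```

Affordance labels derived this way could then be aggregated per object hypothesis to learn which affordances co-occur with which object category.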
This joint proposal and cooperation is made possible by the D-A-CH agreement between the German, Austrian and Swiss science foundations.
Last update: 21.02.2013

Publications

A deeper look at dataset bias
Tommasi T, Patricia N, Caputo B, Tuytelaars T (2015), A deeper look at dataset bias, in German conference on Pattern Recognition.
Leveraging over Prior Knowledge for Online Learning of Visual Categories
Tommasi T, Orabona F, Kaboli M, Caputo B (2012), Leveraging over Prior Knowledge for Online Learning of Visual Categories, in British Machine Vision Conference 2012, Surrey.
Learning to Learn, from Transfer Learning to Domain Adaptation: A Unifying Perspective
Patricia N, Caputo B, Learning to Learn, from Transfer Learning to Domain Adaptation: A Unifying Perspective, in IEEE Conference on Computer Vision and Pattern Recognition.
Multi-Source Adaptive Learning for Fast Control of Prosthetics Hand
Patricia N, Tommasi T, Caputo B, Multi-Source Adaptive Learning for Fast Control of Prosthetics Hand, in International Conference on Pattern Recognition.

Associated projects

Number Title Start Funding scheme
146411 Interactive Cognitive Systems, Indoor Scene Recognition for Intelligent Systems 01.04.2013 Project funding (Div. I-III)
