Motion Models for Monocular People Tracking
English title
Motion Models for Monocular People Tracking
Applicant
Fua Pascal
Number
129495
Funding scheme
Project funding (Div. I-III)
Research institution
Laboratoire de vision par ordinateur EPFL - IC - ISIM - CVLAB
Institution of higher education
EPF Lausanne - EPFL
Main discipline
Information Technology
Start/End
01.10.2010 - 30.09.2012
Approved amount
CHF 110'110.00
Keywords (5)
Motion Tracking; Video; Computer Vision; Human Motion; Appearance Models
Lay Summary (English)
Modeling the human body and its movements is one of the most difficult and challenging problems in Computer Vision. Today, there is great interest in capturing complex motions solely by analyzing video sequences, both because cameras are becoming ever cheaper and more prevalent and because there are many potential applications, including athletic training, surveillance, entertainment, and electronic publishing.

Existing techniques remain fairly brittle for several reasons. Humans have a complex articulated geometry overlaid with deformable tissues, skin, and loosely attached clothing. They move constantly, and their motion is often rapid, complex, and self-occluding. Furthermore, the 3D body pose is only partially recoverable from its projection in a single image. Reliable 3D motion analysis therefore requires reliable tracking across frames, which is difficult because of poor image quality and frequent occlusions.

Introducing motion models is an effective means to constrain the search for the correct pose and to increase robustness. Furthermore, instead of a separate pose in each frame, the output becomes the parameters of the motion model, which allows for further analysis and is therefore potentially extremely useful.

In this project, we will develop and incorporate such motion models into a working system. We will also develop sophisticated appearance models and take into account the dependencies between body pose and global motion to increase the accuracy of our 3D reconstructions. Integrating these enhanced detection and refinement methods into a consistent whole will result in a truly automated system that can handle real-world environments and videos acquired in potentially adverse conditions, as opposed to benign laboratory settings.
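The central idea above, that a low-dimensional motion model constrains the otherwise ambiguous monocular pose search, can be illustrated with a small, purely hypothetical sketch. This is not the project's actual method: it assumes a toy PCA-style linear motion model, an orthographic camera, and invented dimensions, and merely shows that fitting a handful of motion parameters to 2D observations replaces estimating a full 3D pose independently in every frame.

# Illustrative sketch only: a toy, PCA-style linear motion model for a short
# pose sequence, fitted to 2D observations by least squares. All names and
# dimensions are hypothetical; this is not the project's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_joints = 10, 15          # toy sequence length and skeleton size
pose_dim = n_frames * n_joints * 3   # a whole motion, stacked into one vector
n_modes = 4                          # low-dimensional motion-model subspace

# Hypothetical motion model learned offline from training motions:
# motion ~ mean_motion + basis @ params, with far fewer params than poses.
mean_motion = rng.normal(size=pose_dim)
basis = np.linalg.qr(rng.normal(size=(pose_dim, n_modes)))[0]

def project_to_2d(motion_vec):
    """Toy orthographic camera: drop the depth coordinate of every joint."""
    joints_3d = motion_vec.reshape(n_frames, n_joints, 3)
    return joints_3d[..., :2].reshape(-1)        # (n_frames * n_joints * 2,)

# Simulated noisy 2D detections of an unknown motion in the model's span.
true_params = rng.normal(size=n_modes)
observed_2d = project_to_2d(mean_motion + basis @ true_params)
observed_2d = observed_2d + 0.01 * rng.normal(size=observed_2d.shape)

# Because the camera and the motion model are both linear here, recovering the
# few motion parameters from 2D data is a small least-squares problem, rather
# than estimating n_frames * n_joints * 3 unknowns frame by frame.
A = np.stack([project_to_2d(basis[:, k]) for k in range(n_modes)], axis=1)
b = observed_2d - project_to_2d(mean_motion)
est_params, *_ = np.linalg.lstsq(A, b, rcond=None)

print("true params:", np.round(true_params, 3))
print("estimated  :", np.round(est_params, 3))

With small observation noise the recovered parameters closely match the true ones, which is the sense in which a motion model regularizes the monocular problem; the output is a few motion parameters rather than an independent pose per frame.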
Last update: 21.02.2013
Responsible applicant and co-applicants
Name: Fua Pascal
Institute: Laboratoire de vision par ordinateur EPFL - IC - ISIM - CVLAB
Employees
Name: Raca Mirko
Publications
Publication
Multi-Commodity Network Flow for Tracking Multiple People
Ben Shitrit Horesh, Berclaz Jérôme, Fleuret François, Fua Pascal, "Multi-Commodity Network Flow for Tracking Multiple People", in IEEE Transactions on Pattern Analysis and Machine Intelligence.
Associated projects
Number: 144318
Title: Motion Models for Monocular People Tracking
Start: 01.09.2013
Funding scheme: Project funding (Div. I-III)

Number: 119754
Title: Motion Models for Monocular People Tracking
Start: 01.05.2008
Funding scheme: Project funding (Div. I-III)