
Motion Models for Monocular People Tracking

English title: Motion Models for Monocular People Tracking
Applicant: Fua Pascal
Number: 119754
Funding scheme: Project funding (Div. I-III)
Research institution: Laboratoire de vision par ordinateur EPFL - IC - ISIM - CVLAB
Institution of higher education: EPF Lausanne - EPFL
Main discipline: Information Technology
Start/End: 01.05.2008 - 30.04.2010
Approved amount: 99'275.00

Keywords (4)

Motion Tracking; Video; Computer Vision; Motion Capture

Lay Summary (English)

Modeling the human body and its movements is one of the most difficult and challenging problems in Computer Vision. Today, there is great interest in capturing complex motions solely by analyzing video sequences, both because cameras are becoming ever cheaper and more prevalent and because there are so many potential applications. These include athletic training, surveillance, entertainment, and electronic publishing.

Existing techniques remain fairly brittle for many reasons: humans have a complex articulated geometry overlaid with deformable tissues, skin and loosely-attached clothing. They move constantly, and their motion is often rapid, complex and self-occluding. Furthermore, the 3D body pose is only partially recoverable from its projection in a single image. Reliable 3D motion analysis therefore requires reliable tracking across frames, which is difficult because of the poor quality of image data and frequent occlusions. Introducing motion models is an effective means to constrain the search for the correct pose and to increase robustness. Moreover, instead of a separate pose in each frame, the output becomes the parameters of the motion model, which allows for further analysis and is therefore potentially extremely useful.
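Purely as an illustration of what "the output becomes the parameters of the motion model" can mean, and not as a description of the project's actual formulation, the sketch below assumes a linear motion model learned with PCA over joint-angle trajectories: a whole trajectory is represented by a few coefficients, so the tracker can search over those coefficients instead of an independent pose per frame. All function names are hypothetical.

```python
# Minimal sketch (assumed, not the project's model): a linear motion model
# learned by PCA over training pose trajectories.
import numpy as np

def learn_motion_model(trajectories, n_components=5):
    """trajectories: array of shape (n_sequences, n_frames * n_joint_angles)."""
    mean = trajectories.mean(axis=0)
    # Principal directions of variation of the flattened trajectories.
    _, _, vt = np.linalg.svd(trajectories - mean, full_matrices=False)
    basis = vt[:n_components]   # (n_components, n_frames * n_joint_angles)
    return mean, basis

def trajectory_from_params(params, mean, basis):
    """Reconstruct a full pose trajectory from a few model parameters."""
    return mean + params @ basis
```

Tracking then amounts to fitting the few parameters that best explain the image observations, and the recovered motion can be read directly from those parameters.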

In this project, we will develop and incorporate such models in a working system. To this end, we will treat image sequences as cubes of data in which we will look for volume elements that are characteristic of the poses we are looking for. This kind of information has been used successfully to characterize global motion in video sequences but, to the best of our knowledge, not to detect specific poses and orientations. This will result in a generic detector of canonical poses that can be trained by giving it short video sequences of people seen in the relevant poses.
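As a rough illustration of this space-time-cube idea, the sketch below trains a detector for a canonical pose from short clips. The gradient-based descriptor, the linear SVM, and all function names are placeholder assumptions chosen for brevity, not the detector the project will actually develop.

```python
# Illustrative sketch only: treat a short clip as a space-time cube and
# train a classifier to recognize a canonical pose.
import numpy as np
from sklearn.svm import LinearSVC

def cube_features(clip):
    """clip: (n_frames, height, width) grayscale volume.
    Crude descriptor: per-frame statistics of temporal and spatial gradients."""
    gt, gy, gx = np.gradient(clip.astype(float))
    return np.concatenate([np.abs(g).mean(axis=(1, 2)) for g in (gt, gy, gx)])

def train_pose_detector(positive_clips, negative_clips):
    """Train on short clips of people in the relevant pose vs. background clips."""
    X = np.array([cube_features(c) for c in positive_clips + negative_clips])
    y = np.array([1] * len(positive_clips) + [0] * len(negative_clips))
    return LinearSVC().fit(X, y)

def detect(classifier, clip):
    """Return True if the clip is classified as containing the canonical pose."""
    return classifier.predict([cube_features(clip)])[0] == 1
```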

This being done, we will focus on developing more sophisticated appearance models than the ones we have used so far during the interpolation step, and on taking into account the dependencies between body pose and global motion to increase the accuracy of our 3D reconstructions.

Integrating these enhanced detection and refinement methods into a consistent whole will result in a truly automated system that can handle real-world environments and videos acquired in potentially adverse conditions, as opposed to benign laboratory settings.
Last update: 21.02.2013


Associated projects

Number Title Start Funding scheme
129495 Motion Models for Monocular People Tracking 01.10.2010 Project funding (Div. I-III)
111676 Motion Models for Monocular People Tracking 01.05.2006 Project funding (Div. I-III)
