Crowd simulation; crowd analysis; population generation; Augmented Reality; virtual crowd; immersive experience; aerial vision; multiple cameras; mixed real-virtual crowd
M. Ben Moussa and N. Magnenat-Thalmann (2013), “Toward socially responsible agents: integrating attachment and learning in emotional decision-making,” in Computer Animation and Virtual Worlds, 24(3-4), 327-334.
Junghyun Ahn, Nan Wang, Daniel Thalmann and Ronan Boulic (2012), “Within-crowd immersive evaluation of collision avoidance behaviors,” in ACM VRCAI.
N. A. Nijdam, B. Kevelham, S. Han and N. Magnenat-Thalmann (2012), “An application framework for adaptive distributed simulation and 3D rendering services,” in ACM VRCAI.
B. Kevelham and N. Magnenat-Thalmann (2012), “Fast and accurate GPU-based simulation of virtual garments,” in ACM VRCAI.
M. Ben Moussa, N. Magnenat-Thalmann and D. Konstantas (2012), “Facial Affect Recognition for Cognitive-behavioural Therapy,” in EHST 2012.
K. Zawieska, M. Ben Moussa, B. R. Duffy and N. Magnenat-Thalmann (2012), “The Role of Imagination in Human-Robot Interaction,” in the Autonomous Social Robots and Virtual Humans workshop at the 25th Annual Conference CASA.
B. Kevelham and N. Magnenat-Thalmann (2012), “Virtual try on: an application in need of GPU optimization,” in Proceedings of the ATIP/A*CRC Workshop on Accelerator Technologies for High-Performance Computing.
Stefano Pellegrini, Jürgen Gall, Leonid Sigal and Luc J. Van Gool (2012), “Destination Flow for Crowd Simulation,” in ECCV 2012.
L. Carozza, D. Tingdahl, F. Bosche and L. Van Gool (2012), “Markerless vision-based Augmented Reality for enhanced project visualization,” in Gerontechnology, 11(2), 69.
Z. Kasap and N. Magnenat-Thalmann (2012), “Building long-term relationships with virtual and robotic characters: the role of remembering,” in The Visual Computer, 28(1), 87-97.
Ralf Dragon, Bodo Rosenhahn and Jörn Ostermann (2012), “Multi-Scale Clustering of Frame-to-Frame Correspondences for Motion Segmentation,” in ECCV 2012.
Pascal Bach, Quentin Silvestre and Ronan Boulic (2011), “The Elusive Stepping Pattern,” in Proceedings of the International Skills Conference, BIO Web of Conferences, Montpellier.
Junghyun Ahn, Stephane Gobron, Quentin Silvestre, Horesh Ben Shitrit, Mirko Raca, Julien Pettre, Daniel Thalmann, Pascal Fua and Ronan Boulic (2011), “Long Term Real Trajectory Reuse through Region Goal Satisfaction,” in Proc. of MIG 2011, Edinburgh, Springer-Verlag, Berlin Heidelberg.
Gemma Roig, Xavier Boix, Horesh Ben Shitrit and Pascal Fua (2011), “Conditional Random Fields for Multi-Camera Object Detection,” in International Conference on Computer Vision.
Horesh Ben Shitrit, Jérôme Berclaz, François Fleuret and Pascal Fua (2011), “Tracking Multiple People under Global Appearance Constraints,” in International Conference on Computer Vision.
Stefano Pellegrini, Andreas Ess and Luc J. Van Gool (2011), “Predicting Pedestrian Trajectories,” Springer, www.springer.com.
Mustafa Kasap, Sylvain Chague and Nadia Magnenat-Thalmann (2011), “Virtual face implant for visual character variations,” in Proceedings of the 12th International Workshop on Image Analysis for Multimedia Interactive Services, Delft, Springer-Verlag, Secaucus, NJ, USA.
Jerome Berclaz, Francois Fleuret, Engin Turetken and Pascal Fua (2011), “Multiple Object Tracking using K-Shortest Paths Optimization,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9), 1806-1819.
Mustafa Kasap and Nadia Magnenat-Thalmann (2011), “Skeleton-aware size variations in digital mannequins,” in The Visual Computer, 27, 263-274.
The objective of the 2nd phase of the AerialCrowds project is to move beyond phase 1 as follows:

1. Increase the flexibility and level of coverage by considering multiple simultaneous aerial engines for the acquisition and tracking of pedestrians, possibly combined with ground-level imagery.
2. Enhance the naturalness of generated crowds by mixing virtual and real pedestrians, where the latter lead the former, and where social walking models and group tracking algorithms iteratively reinforce each other.
3. Offer unprecedented editing power for virtual crowds by providing a suite of tools for adding and removing virtual and real pedestrians, allowing user-friendly modification of crowd densities.
4. Add a valuable first-person experience by offering users the ability to validate the mixed real-virtual crowds in a full-body immersive CAVE and to experience the scene as a member of the crowd would.

The first two objectives concern the acquisition and analysis of data on crowds of pedestrians over a potentially large site. Tracking real pedestrians will benefit from strong models of social walking, and vice versa. The last two objectives stress the key contribution of combining real and virtual pedestrians to intensify the behavioral dimension. The Augmented Reality presentation will allow planners to experience either the social interactions of a typical user of a given site, or the long-range influence one individual exerts on others at a site scale as a function of local densities and flows. Indeed, we plan to control crowd density, e.g. tripling or halving the number of people, while keeping the result looking natural. Increasing the density will call for the addition of virtual pedestrians; decreasing it will require the removal of real pedestrians, which poses challenging problems as well. The capture of the urban landscape will still use a network of static cameras, but also multiple simultaneous cameras mounted on aerial engines.
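The density-control arithmetic above becomes simple bookkeeping once pedestrians can be counted per region; a minimal sketch, where the function name, return format, and target values are hypothetical illustrations rather than project code:

```python
def density_edit(real_count, area_m2, target_density):
    """Decide how many virtual pedestrians to add, or how many real
    ones to remove, to reach a target density (people per m^2)."""
    target_count = round(target_density * area_m2)
    delta = target_count - real_count
    if delta >= 0:
        # densify: spawn virtual pedestrians alongside the real ones
        return {"add_virtual": delta, "remove_real": 0}
    # sparsify: real pedestrians must be removed from the video,
    # which requires inpainting the background behind them
    return {"add_virtual": 0, "remove_real": -delta}

# Tripling the crowd on a 200 m^2 plaza holding 30 people (0.15 p/m^2):
print(density_edit(30, 200.0, 0.45))  # -> {'add_virtual': 60, 'remove_real': 0}
```

As the comment notes, the "remove real pedestrians" branch is the hard one: each removed person leaves a hole in the footage that must be filled plausibly, which is one of the challenging problems the proposal refers to.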
The network will allow for real-time video streaming. It will also provide positioning data that can be used in conjunction with Computer Vision techniques to register the multiple mobile cameras with respect to each other and to the static ones. Ideally, the moving aerial cameras should have partially overlapping fields of view. With this conjunction of means, we envision a synergistic progress of the social walking model and the group tracking algorithms. This aspect will be complemented with fine-grained social models of pedestrians interacting with one another and with semantically enriched buildings. Overall, the system will provide urban planners and animators with an invaluable set of training and simulation resources, allowing them to interactively overlay crowds of controllable densities on real video sequences. To the best of our knowledge, no similar research has been undertaken yet. In terms of rendering, we will strike a balance between efficiency and realism, especially at close distances, where the quality of the interactions (trajectory prediction and adjustment, social interactions, collision avoidance) will be the focus of the present project. Special care will be taken to seamlessly blend virtual and real characters. To ensure real-time capability, we will manage crowds using dynamic meshes, static meshes, and impostors, freeing the computing resources needed for the new behavioral models. Multi-party dialogue management will build upon the detection of people's affective states and their goal-based decision-making capability. This dimension of the interactions will be completed with expressive behavior generation. To summarize, the present proposal moves from the know-how accumulated for long-range macroscopic crowd models (mostly through planning and potential fields) to a finer understanding and representation of microscopic (individual) behavioral interactions.
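The three-tier crowd representation (dynamic meshes, static meshes, impostors) is typically driven by a distance-based level-of-detail policy; a sketch under assumed threshold values, since the actual cut-offs and budgets are project-specific and not stated here:

```python
import math

# Hypothetical distance thresholds in metres (assumptions, not project values).
DYNAMIC_MESH_RANGE = 15.0   # full skeletal animation near the camera
STATIC_MESH_RANGE = 60.0    # rigid mesh with a baked pose farther out

def lod_for(camera_pos, pedestrian_pos):
    """Pick a representation tier from the camera-to-pedestrian distance."""
    d = math.dist(camera_pos, pedestrian_pos)
    if d < DYNAMIC_MESH_RANGE:
        return "dynamic_mesh"   # expensive: deformable, fully animated
    if d < STATIC_MESH_RANGE:
        return "static_mesh"    # cheaper: no per-frame skinning
    return "impostor"           # cheapest: camera-facing billboard

print(lod_for((0, 0, 0), (0, 0, 40)))  # -> static_mesh
```

The point of such a policy is exactly what the text states: distant pedestrians cost almost nothing to draw, so the CPU and GPU budget can be spent on the behavioral models of the nearby ones.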
These will be learned and validated using Computer Vision techniques to analyze real behaviors and paths in real environments. The most basic behaviors relate to locomotion with collision avoidance. We will handle them by enhancing a model of social walking grounded in trajectories extracted from the real video data. Knowing the typical behavior of people in crowds makes the linking of detections across temporal frames more robust. Hence the social walking model will improve the tracking, which will in turn improve the social model, as improved tracking yields larger amounts of good training material for the modeling. The resulting knowledge will allow our system to explore the affordance of a given space for walking by manipulating crowd density, and by allowing planners as well as movie and game animators to get immersed in the crowd. The tools that we develop will support multiple applications. Planners may want to model crowds where all participants are virtual, since in that case each pedestrian is 'aware' of all others in his vicinity, or at least in his field of view, as in actual crowds. Movie and game production can also benefit, through virtual crowd generation with natural walking interactions, or by increasing or decreasing the density of real crowds: by adding virtual pedestrians, by removing real ones, or by a combination of both. A couple of actors can lead a huge virtual crowd along, much like a few tracked points in a motion capture system drive a detailed character animation.
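One standard formulation of locomotion with collision avoidance is the Helbing-Molnár social force model, in which each pedestrian relaxes toward a desired velocity aimed at its goal while being exponentially repelled by nearby agents. A minimal sketch of one simulation step follows; this is an assumption about the kind of model meant by "social walking", and all parameter values are illustrative:

```python
import math

def social_force_step(pos, vel, goals, dt=0.1, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a simplified social force model.
    pos, vel, goals: lists of (x, y) tuples. v0: desired speed (m/s),
    tau: relaxation time, A/B: repulsion strength and range
    (all parameter values are illustrative, not from the project)."""
    new_pos, new_vel = [], []
    for i, (px, py) in enumerate(pos):
        # driving force: relax toward the desired velocity aimed at the goal
        gx, gy = goals[i]
        dx, dy = gx - px, gy - py
        dist = math.hypot(dx, dy)
        dvx, dvy = (v0 * dx / dist, v0 * dy / dist) if dist > 1e-9 else (0.0, 0.0)
        fx = (dvx - vel[i][0]) / tau
        fy = (dvy - vel[i][1]) / tau
        # pairwise exponential repulsion from every other pedestrian
        for j, (qx, qy) in enumerate(pos):
            if j == i:
                continue
            rx, ry = px - qx, py - qy
            d = math.hypot(rx, ry)
            if d > 1e-9:
                mag = A * math.exp(-d / B) / d
                fx += mag * rx
                fy += mag * ry
        vx = vel[i][0] + dt * fx
        vy = vel[i][1] + dt * fy
        new_vel.append((vx, vy))
        new_pos.append((px + dt * vx, py + dt * vy))
    return new_pos, new_vel

# Two stationary pedestrians 1 m apart drift away from each other:
pos, vel = social_force_step([(0.0, 0.0), (1.0, 0.0)],
                             [(0.0, 0.0), (0.0, 0.0)],
                             [(0.0, 0.0), (1.0, 0.0)])
```

In the proposed pipeline, the parameters of such a model would be fitted to the trajectories extracted from the video data, closing the loop in which better tracking yields a better social model and vice versa.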