speech-based user interfaces; sketch-based user interfaces; video motion descriptors; index structures; motion queries; video retrieval
(2017), Enhanced Retrieval and Browsing in the IMOTION System, in Proceedings of the 23rd International Conference on Multimedia Modeling, Reykjavik, Iceland.
(2016), ADAMpro: Database Support for Big Multimedia Retrieval, in Datenbank-Spektrum, 16(1), 17-26.
(2016), Dealing with Ambiguous Queries in Multimodal Video Retrieval, in Proceedings of the 22nd International Conference on Multimedia Modeling (MMM 2016), Miami, FL, USA.
(2016), iAutoMotion - an Autonomous Content-based Video Retrieval Engine, in Proceedings of the 22nd International Conference on Multimedia Modeling (MMM 2016), Miami, FL, USA.
(2016), IMOTION - Searching for Video Sequences using Multi-Shot Sketch Queries, in Proceedings of the 22nd International Conference on Multimedia Modeling (MMM 2016), Miami, FL, USA.
(2016), Interactive Video Search Tools: a Detailed Analysis of the Video Browser Showdown 2015, in Multimedia Tools and Applications.
(2016), Searching in Video Collections using Sketches and Sample Images - The Cineast System, in Proceedings of the 22nd International Conference on Multimedia Modeling (MMM 2016), Miami, FL, USA.
(2016), Semantic Sketch-Based Video Retrieval with Autocompletion, in Proceedings of the 21st ACM International Conference on Intelligent User Interfaces (IUI'16), Sonoma, CA, USA.
(2016), The IMOTION System at TRECVID 2016: The Ad-Hoc Video Search Task, in Proceedings of the 2016 TRECVID Ad-Hoc Video Search Task, Gaithersburg, MD, USA.
(2016), The vitrivr System at TRECVID 2016: The Ad-Hoc Video Search Task, in Proceedings of the 2016 TRECVID Ad-Hoc Video Search Task.
(2016), VideoSketcher: Innovative Query Modes for Searching Videos through Sketches, Motion and Sound
(2016), vitrivr - A Flexible Retrieval Stack Supporting Multiple Query Modes for Searching in Multimedia Collections, in Proceedings of the 2016 ACM on Multimedia Conference, Amsterdam, NL.
(2015), IMOTION - a Content-based Video Retrieval Engine, in Proceedings of the 21st MultiMedia Modelling Conference (MMM2015) - Video Search Showcase Track, Sydney, Australia.
(2015), OSVC - Open Short Video Collection 1.0.
(2014), ADAM - A Database and Information Retrieval System for Big Multimedia Collections, in Proceedings of the 3rd International Congress on Big Data, Anchorage, AK, USA.
(2014), ADAM - A System for Jointly Providing IR and Database Queries in Large-Scale Multimedia Retrieval, in Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, Gold Coast, Australia.
(2014), Cineast: A Multi-Feature Sketch-Based Video Retrieval Engine, in Proceedings of the 16th IEEE International Symposium on Multimedia (ISM2014), Taichung, Taiwan.
(2014), Crowd-based Semantic Event Detection and Video Annotation for Sports Videos, in Proceedings of the 3rd International ACM Workshop on Crowdsourcing for Multimedia, Orlando, FL, USA.
(2017), Hey, vitrivr! - A Multimodal UI for Video Retrieval, in Proceedings of the 39th European Conference on Information Retrieval (ECIR 2017), Aberdeen, Scotland, UK.
Video is increasingly gaining importance as a medium to capture and disseminate information, not only for personal use but also, most importantly, for professional and educational applications. With the enormous growth of video collections, effective yet efficient content-based retrieval of (parts of) videos is becoming essential. Conventionally, video retrieval relies on metadata such as manual annotations, or on inherent features extracted from the video. However, the most decisive information that distinguishes video content from static content, the movement of individual objects across subsequent frames, has so far been largely ignored. This is particularly the case for so-called augmented video, where additional spatio-temporal data on the movement of objects (e.g., captured by dedicated sensor systems) is available alongside the actual video content.

The IMOTION project will develop and evaluate innovative multi-modal user interfaces for interacting with augmented videos. Starting from extensions of existing query paradigms (keyword search in manual annotations, and query-by-example image search in key frames), IMOTION will explore novel sketch- and speech-based user interfaces. In particular, it will support novel types of motion queries in which users specify the motion paths of objects via sketches, gestures, natural language interfaces, or combinations thereof. Several types of user interfaces (voice, tablets, multi-touch tables, interactive paper) will be supported and seamlessly combined, so that a session can smoothly migrate from one type of user interface to another while a query is being specified and refined. This will build on novel approaches to representation learning and to the extraction of high-level motion descriptors from augmented videos, based on a motion ontology. In addition, IMOTION will develop novel index structures that jointly support traditional video features and the additional motion metadata.
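To make the idea of sketch-based motion queries concrete, the snippet below shows one minimal way a sketched motion path could be matched against stored object trajectories: both polylines are resampled to a fixed number of points, normalized for position and scale, and compared point-wise. This is an illustrative baseline under our own assumptions (all function names are ours); it is not the IMOTION implementation, which relies on learned motion descriptors and dedicated index structures.

```python
import math

def resample(path, n=32):
    """Resample a polyline (list of (x, y) points) to n points evenly
    spaced along its arc length."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    if total == 0:
        return [path[0]] * n
    out, i = [], 0
    step = total / (n - 1)
    for k in range(n):
        target = k * step
        # advance to the segment containing the target arc length
        while i < len(path) - 2 and dists[i + 1] < target:
            i += 1
        seg = dists[i + 1] - dists[i]
        t = 0.0 if seg == 0 else (target - dists[i]) / seg
        out.append((path[i][0] + t * (path[i + 1][0] - path[i][0]),
                    path[i][1] + t * (path[i + 1][1] - path[i][1])))
    return out

def normalize(path):
    """Center on the centroid and scale to unit size, so matching is
    invariant to where and how large the sketch was drawn."""
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in path]

def path_distance(sketch, trajectory, n=32):
    """Mean point-wise distance between the normalized, resampled paths;
    lower means a better match."""
    a = normalize(resample(sketch, n))
    b = normalize(resample(trajectory, n))
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a, b)) / n

# A left-to-right sketch matches a left-to-right object trajectory
# better than a top-to-bottom one.
sketch = [(0, 0), (10, 0)]
t_right = [(2, 5), (3, 5), (9, 5)]
t_down = [(5, 1), (5, 8)]
assert path_distance(sketch, t_right) < path_distance(sketch, t_down)
```

A production system would additionally account for timing (e.g., via dynamic time warping) and index the trajectory representations rather than scanning them linearly.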
A major contribution will be the quantitative and qualitative evaluation, including user studies, of the intelligent multi-modal interfaces and query paradigms developed in two concrete use cases. Sample applications from which the project will select include, but are not limited to, augmented sports videos where users search on the basis of trajectories of player or ball movements, educational videos from the natural sciences where users search for the movements of animals within a herd or swarm, and sketch-based searches for sea currents captured by sensors integrated into buoys. The IMOTION consortium will openly publish the augmented video collections and the motion metadata created in the course of the project’s evaluation activities.