Project Details
TA2: Together Anywhere, Together Anytime
Project Period: 15 March 2010 - 31 December 2012
Project Type: grant
Code: 214793
Agency: Information and Communication Technologies (ICT), 7th Framework Programme
Program: Seventh Research Framework Programme
Keywords: social interaction, multimedia processing
TA2 (Together Anywhere, Together Anytime), pronounced "tattoo", aims to define end-to-end systems for the development and delivery of new, creative forms of interactive, immersive, high-quality media experiences for groups of users such as households and families. The overall vision of TA2 can be summarised as "making communications and engagement easier among groups of people separated in space and time".
One of the key components of TA2 is a set of generic, reliable tools for audio, video, and multimodal integration and recognition, including the automatic extraction of cues from raw data streams. The running TA2 project stresses low-level, "instantaneous" cues; it does not deal with the semantic-aware integration of contextual information, which could significantly improve the quality of the cues.
The proposed TA2 project extension focuses on medium-level (context-aware) cues that take into account not only low-level analysis outputs but also contextual information, e.g. about the currently activated scenario. The resulting semantic cues will be used by the TA2 system to orchestrate (i.e. frame, crop and represent) the audio-visual elements of the interaction between people.
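As a purely illustrative sketch of this idea (the Python types, thresholds, and framing rule below are hypothetical and not taken from the TA2 specification), a medium-level cue might be derived by weighting a low-level voice-activity cue with scenario context before an orchestration engine picks a framing:

```python
from dataclasses import dataclass

@dataclass
class LowLevelCue:
    """Instantaneous cue extracted from a raw A/V stream."""
    speaker_id: str
    voice_active: bool
    confidence: float  # detector confidence in [0, 1]

@dataclass
class Context:
    """Contextual information, e.g. the currently activated scenario."""
    scenario: str     # e.g. "board_game", "storytelling"
    turn_holder: str  # participant the scenario expects to act next

def semantic_cue(cue: LowLevelCue, ctx: Context) -> dict:
    """Fuse a low-level cue with context into a medium-level cue
    that an orchestration engine could act on (hypothetical rule)."""
    # Boost confidence when the detected speaker matches the scenario's
    # expected turn holder; damp it otherwise.
    weight = 1.2 if cue.speaker_id == ctx.turn_holder else 0.6
    score = min(1.0, cue.confidence * weight)
    framing = "close_up" if cue.voice_active and score > 0.7 else "group_shot"
    return {"speaker": cue.speaker_id, "score": score, "framing": framing}

# Example: a confident voice-activity cue for the expected turn holder
# yields a "close_up" framing decision for that speaker.
print(semantic_cue(LowLevelCue("alice", True, 0.8),
                   Context("board_game", "alice")))
```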
The addition of BUT to the consortium will allow the semantic relevance of the metadata extracted by the analysis to be interpreted within the particular contexts described in the project. This will make the subsequent orchestration of the video more effective and more efficient, and hence improve the end-user experience. The extension will enable better applications that help families interact easily and openly through games, through improved semi-automatic production and publication of user-generated content, and through enhanced ambient connectedness between families.
Zemčík Pavel, prof. Dr. Ing. (UPGM FIT VUT), team leader
Publications
2012
- POLÁČEK Ondřej, KLÍMA Martin, SPORKA Adam J., ŽÁK Pavel, HRADIŠ Michal, ZEMČÍK Pavel and PROCHÁZKA Václav. A comparative study on distant free-hand pointing. In: EuroITV '12 Proceedings of the 10th European Conference on Interactive TV and Video. Berlin, Germany: Association for Computing Machinery, 2012, pp. 139-142. ISBN 978-1-4503-1107-6.
- MOTLÍČEK Petr, VALENTE Fabio and SZŐKE Igor. Improving Acoustic Based Keyword Spotting Using LVCSR Lattices. In: Proc. International Conference on Acoustics, Speech, and Signal Processing 2012. Kyoto: IEEE Signal Processing Society, 2012, pp. 4413-4416. ISBN 978-1-4673-0044-5.
- KRÁL Jiří and HRADIŠ Michal. Restricted Boltzmann Machines for Image Tag Suggestion. In: Proceedings of the 19th Conference STUDENT EEICT 2012. Brno: Brno University of Technology, 2012, p. 5.
- HRADIŠ Michal, ŘEZNÍČEK Ivo and BEHÚŇ Kamil. Semantic Class Detectors in Video Genre Recognition. In: Proceedings of VISAPP 2012. Rome: SciTePress - Science and Technology Publications, 2012, pp. 640-646. ISBN 978-989-8565-03-7.
- HRADIŠ Michal, EIVAZI Shahram and BEDNAŘÍK Roman. Voice activity detection in video mediated communication from gaze. In: ETRA '12 Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 329-332. ISBN 978-1-4503-1221-9.
- BEDNAŘÍK Roman, VRZÁKOVÁ Hana and HRADIŠ Michal. What you want to do next: A novel approach for intent prediction in gaze-based interaction. In: ETRA '12 Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 83-90. ISBN 978-1-4503-1221-9.
2011
- HRADIŠ Michal, ŘEZNÍČEK Ivo and BEHÚŇ Kamil. Brno University of Technology at MediaEval 2011 Genre Tagging Task. In: Working Notes Proceedings of the MediaEval 2011 Workshop. Pisa, Italy: CEUR-WS.org, 2011, pp. 1-2. ISSN 1613-0073.
- ŘEZNÍČEK Ivo and ZEMČÍK Pavel. On-line human action detection using space-time interest points. In: Proceedings of the ITAT 2011 conference, September 2011. Prague: Faculty of Mathematics and Physics, Charles University, 2011, pp. 39-45. ISBN 978-80-89557-01-1.
2010
- HRADIŠ Michal, BERAN Vítězslav, ŘEZNÍČEK Ivo, HEROUT Adam, BAŘINA David, VLČEK Adam and ZEMČÍK Pavel. Brno University of Technology at TRECVid 2010. In: 2010 TREC Video Retrieval Evaluation Notebook Papers. Gaithersburg, MD: National Institute of Standards and Technology, 2010, pp. 1-10.
- ŘEZNÍČEK Ivo and BAŘINA David. Classifier creation framework for diverse classification tasks. In: Proceedings of the DT workshop. Žilina: Brno University of Technology, 2010, p. 3. ISBN 978-80-554-0304-5.
- ŽÁK Pavel, BARTOŇ Radek and ZEMČÍK Pavel. Vision based user interface framework. In: Proceedings of the DT workshop. Žilina: Brno University of Technology, 2010, p. 3. ISBN 978-80-554-0304-5.
Products
2010
- Classifier creation framework for diverse classification tasks, software, 2010
  Authors: Bařina David, Hradiš Michal, Řezníček Ivo, Zemčík Pavel
- Online human action recognition framework, software, 2010
  Authors: Řezníček Ivo, Hradiš Michal, Zemčík Pavel
- Shared Image Preprocessing, software, 2010
  Authors: Žák Pavel, Hradiš Michal, Smrž Pavel, Zemčík Pavel