Project Details
Multilingual and Cross-cultural interactions for context-aware, and bias-controlled dialogue systems for safety-critical applications
Project Period: 1. 1. 2024 - 31. 12. 2026
Project Type: grant
Code: 101135916
Agency: European Commission (EU)
Program: HORIZON EUROPE
Keywords: human-computer interaction and interfaces, visualization and natural language, artificial intelligence, intelligent systems, multi-agent systems, natural language processing, data protection and privacy, machine learning, statistical data processing and applications, formal, cognitive, functional and computational linguistics, distributed and federated adaptation of Large Language Models, multilinguality, multimodality, human-in-the-loop, bias mitigation, grounding.
ELOQUENCE aims to research and develop new technologies supporting collaborative voice/chat bots for both low-security (low-risk) and high-security (high-risk) applications. Dialogue engines powering voice assistants are already present in various commercial and governmental applications of lower or higher complexity. In both cases, this complexity can be translated into the problem of analysing unstructured dialogues. The key objective of ELOQUENCE is to understand unstructured dialogues and conduct them in an explainable, safe, knowledge-grounded, trustworthy and unbiased way, while considering and building on top of prior achievements in this domain (e.g., the recently launched ChatGPT Large Language Models (LLMs)). With key European industrial enterprises in the consortium (e.g., Omilia, Telefonica, ...), the project will approach safety with a human in the loop for safety-critical applications (e.g., emergency services) and via information retrieval and fact-checking against an online knowledge base for less critical autonomous systems (e.g., home assistants). ELOQUENCE will target the R&D of these novel conversational AI technologies in multilingual and multimodal environments. Both basic research and its direct deployment will be targeted through two pilots: 1) emergency call contact centres and 2) smart assistants trained in a decentralised way in smart homes.
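For illustration only, the following minimal Python sketch shows the FedAvg-style idea behind the decentralised training mentioned for the smart-home pilot: each client adapts a shared model on its private data and only the resulting parameters are averaged centrally, so raw dialogue data never leaves the home. The toy linear model, the synthetic client data and all function names are hypothetical and are not part of the project's actual methods or deliverables.

    # Hypothetical sketch of decentralised (federated) adaptation; not project code.
    import numpy as np

    def local_update(global_weights, client_data, lr=0.1):
        """One local adaptation step on a client's private data (toy linear model)."""
        x, y = client_data
        pred = x @ global_weights
        grad = x.T @ (pred - y) / len(y)   # gradient of mean squared error
        return global_weights - lr * grad  # adapted weights; raw data stays local

    def federated_average(global_weights, clients):
        """Server step: average the locally adapted weights (FedAvg-style)."""
        updates = [local_update(global_weights, c) for c in clients]
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    # Five simulated "homes", each with its own private (features, targets) pair.
    clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(5)]
    weights = np.zeros(4)
    for _ in range(10):                    # a few communication rounds
        weights = federated_average(weights, clients)
    print(weights)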
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT), team leader
Schwarz Petr, Ing., Ph.D. (DCGM FIT BUT), team leader
Beneš Karel, Ing. (DCGM FIT BUT)
Fajčík Martin, Ing., Ph.D. (DCGM FIT BUT)
Heřmanský Hynek, prof. Ing., Dr.Eng. (DCGM FIT BUT)
Kesiraju Santosh (DCGM FIT BUT)
Peng Junyi, Msc. Eng. (DCGM FIT BUT)
Pešán Jan, Ing. (DCGM FIT BUT)
Rohdin Johan A., Dr. (DCGM FIT BUT)
Sarvaš Marek, Bc. (DCGM FIT BUT)
Sedláček Šimon, Ing. (DCGM FIT BUT)
Yusuf Bolaji (DCGM FIT BUT)
2024
- ROHDIN Johan A., ZHANG Lin, PLCHOT Oldřich, STANĚK Vojtěch, MIHOLA David, PENG Junyi, STAFYLAKIS Themos, BEVERAKI Dmitriy, SILNOVA Anna, BRUKNER Jan and BURGET Lukáš. BUT systems and analyses for the ASVspoof 5 Challenge. In: Proceedings of the ASVspoof 2024 Workshop. Kos Island: International Speech Communication Association, 2024, pp. 24-31.
- POLOK Alexander, KLEMENT Dominik, HAN Jiangyu, SEDLÁČEK Šimon, YUSUF Bolaji, MACIEJEWSKI Matthew, WIESNER Matthew and BURGET Lukáš. BUT/JHU System Description for CHiME-8 NOTSOFAR-1 Challenge. In: Proceedings of CHiME 2024 Workshop. Kos Island: International Speech Communication Association, 2024, pp. 18-22.
- YUSUF Bolaji, ČERNOCKÝ Jan and SARAÇLAR Murat. Pretraining End-to-End Keyword Search with Automatically Discovered Acoustic Units. In: Proceedings of Interspeech 2024. Kos: International Speech Communication Association, 2024, pp. 5068-5072. ISSN 1990-9772.
- PEŠÁN Jan, JUŘÍK Vojtěch, RŮŽIČKOVÁ Alexandra, SVOBODA Vojtěch, JANOUŠEK Oto, NĚMCOVÁ Andrea, BOJANOVSKÁ Hana, ALDABAGHOVÁ Jasmína, KYSLÍK Filip, VODIČKOVÁ Kateřina, SODOMOVÁ Adéla, BARTYS Patrik, CHUDÝ Peter and ČERNOCKÝ Jan. Speech production under stress for machine learning: multimodal dataset of 79 cases and 8 signals. Nature Scientific Data, vol. 11, no. 1, 2024, pp. 1-9. ISSN 2052-4463.