The Chair of Business Informatics at Saarland University and the research department Smart Service Engineering at the German Research Center for Artificial Intelligence (DFKI), led by Prof. Dr.-Ing. Wolfgang Maaß, investigate how artificial intelligence can be used for adaptive service designs and innovative business solutions. In collaboration with research and industrial partners, results are applied in domains such as industrial manufacturing, crisis management, healthcare, wellness, and sports.
Latest News
BMWK Project ESCADE at Hannover Messe 2025
Strong media coverage of the presentation of the ESCADE project at Hannover Messe 2025:

- https://mit-blog.de/nachhaltige-rechenzentren-wie-ki-um-bis-zu-90-prozent-energieeffizienter-wird/
- https://www.industr.com/de/ki-soll-vom-energiefresser-zum-energiesparer-werden-2784677
- https://www.sr.de/sr/home/nachrichten/politik_wirtschaft/hannover_messe_aussteller_saarland_100.html
Published on: 2025-04-17
Master’s Lecture on Data Science, Summer Semester 2025
The lecture on Data Science at Universität des Saarlandes starts this week. Don't miss it. The lecture targets students in business informatics, computer science, bioinformatics, computational linguistics, and beyond. After last year's collaboration with Bosch and the HYDAC Group, this year's projects will be conducted in collaboration with the Boston Consulting Group (BCG).
Published on: 2025-04-08
NAACL 2025 Paper Acceptance
*Streamlining LLMs: Adaptive Knowledge Distillation for Tailored Language Models*
Prajvi Saxena, Sabine Janzen, Wolfgang Maass

Large language models (LLMs) like GPT-4 and LLaMA-3 offer transformative potential across industries, e.g., enhancing customer service, revolutionizing medical diagnostics, or identifying crises in news articles. However, deploying LLMs faces challenges such as limited training data, high computational costs, and issues with transparency and explainability. Our research focuses on distilling compact, parameter-efficient tailored language models (TLMs) from LLMs for domain-specific tasks with comparable performance. Current approaches like knowledge distillation, fine-tuning, and model parallelism address computational efficiency but lack hybrid strategies that balance efficiency, adaptability, and accuracy. We present ANON, an adaptive knowledge distillation framework integrating knowledge distillation with adapters to generate computationally efficient TLMs without relying on labeled datasets. ANON uses cross-entropy loss to transfer knowledge from the teacher's outputs and internal representations, while employing adaptive prompt engineering and a progressive distillation strategy for phased knowledge transfer. We evaluated ANON's performance in the crisis domain, where accuracy is critical and labeled data is scarce. Experiments showed that ANON outperforms recent knowledge distillation approaches, both in the resulting TLM's performance and in reducing the computational costs of training, while maintaining accuracy comparable to LLMs for domain-specific applications.
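The core mechanism the abstract mentions, transferring knowledge from the teacher's output distribution via a cross-entropy loss, can be sketched as follows. This is a generic illustration of soft-label knowledge distillation in plain NumPy, not the ANON implementation; all function names and the temperature parameter are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened output distribution
    and the student's (soft-label distillation), averaged over the batch."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

# Toy check: a student whose logits are close to the teacher's
# incurs a lower distillation loss than one that disagrees.
teacher = np.array([[2.0, 0.5, -1.0]])
far_student = np.array([[-1.0, 0.5, 2.0]])
close_student = np.array([[1.9, 0.6, -0.9]])
assert distillation_loss(close_student, teacher) < distillation_loss(far_student, teacher)
```

A higher temperature flattens the teacher's distribution, exposing the relative probabilities of non-target classes ("dark knowledge") that a hard one-hot label would discard; this is what lets distillation work without labeled data.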
Published on: 2025-03-25