News Article March 25, 2025

NAACL 2025 paper acceptance

Streamlining LLMs: Adaptive Knowledge Distillation for Tailored Language Models

Prajvi Saxena, Sabine Janzen, Wolfgang Maass:

Large language models (LLMs) like GPT-4 and LLaMA-3 offer transformative potential across industries, e.g., enhancing customer service, revolutionizing medical diagnostics, or identifying crises in news articles. However, deploying LLMs faces challenges such as limited training data, high computational costs, and issues with transparency and explainability. Our research focuses on distilling compact, parameter-efficient tailored language models (TLMs) from LLMs for domain-specific tasks with comparable performance. Current approaches like knowledge distillation, fine-tuning, and model parallelism address computational efficiency but lack hybrid strategies to balance efficiency, adaptability, and accuracy. We present ANON - an adaptive knowledge distillation framework integrating knowledge distillation with adapters to generate computationally efficient TLMs without relying on labeled datasets. ANON uses cross-entropy loss to transfer knowledge from the teacher's outputs and internal representations while employing adaptive prompt engineering and a progressive distillation strategy for phased knowledge transfer. We evaluated ANON's performance in the crisis domain, where accuracy is critical and labeled data is scarce. Experiments showed that ANON outperforms recent knowledge distillation approaches, both in the performance of the resulting TLMs and in the computational cost of training, while maintaining accuracy comparable to LLMs in domain-specific applications.
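The core knowledge-transfer step the abstract describes, matching the student's output distribution to the teacher's via a cross-entropy loss on the teacher's soft outputs, can be sketched as follows. This is a minimal, framework-free illustration of that general technique, not the actual ANON implementation; the function names and the temperature parameter `T` are assumptions for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened output distribution
    and the student's, averaged over the batch. No labels are needed:
    the teacher's outputs serve as the (soft) targets."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

# A student whose logits match the teacher's incurs a lower loss
# than one that disagrees with the teacher.
teacher = [[2.0, 0.5, 0.1]]
loss_matched = distillation_loss([[2.0, 0.5, 0.1]], teacher)
loss_mismatched = distillation_loss([[0.1, 0.5, 2.0]], teacher)
```

In a full training loop, this loss (often combined with terms matching the teacher's internal representations, as in the framework above) would be minimized over the student's parameters while the teacher stays frozen.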

Additional Resources

Other News

What a Day! Tag der Deutschen Einheit 2025 💪🤖

At the Tag der Deutschen Einheit (Germany's Unity Day) celebrations in Saarbrücken, our department showcased Mentalytics, an AI system based on Large Language Models (LLMs) that predicts participants' mental states and detects actual pain and the effort required to perform exercises. Over 220 participants tested the privacy-first technology, which runs entirely on local devices, in an interactive demo that proved both popular and successful.

Research Seminar by Prof. Ingmar Weber

Prof. Ingmar Weber presented his group's work on applying AI and data science methods to societal challenges using innovative data sources like satellite imagery and exploring applications of LLMs.

Key competency courses "Project Management" and "Generative AI for Students" in semester break 2025

The ISS chair is offering two courses on the key competencies "Project Management" and "Generative AI for Students" in the late summer semester 2025.
