Publications
Conceptual Modeling: 43rd International Conference, ER 2024, Pittsburgh, PA, USA, October 28-31, 2024, Proceedings. Vol. 15238
Maass, Wolfgang, et al.
Springer Nature
Proceedings of the 43rd International Conference on Conceptual Modeling, presenting the latest advancements in the field.
Digital Resilience in Flux: A Comparative Analysis in Manufacturing Pre- and Post-Crisis
Stein, H., Janzen, S., Haida, B., Maass, W.
HICSS 57/24. Hawaii International Conference on System Sciences (HICSS-2024)
Analyzes digital resilience in manufacturing industries during crises.
Digitale Gesundheitsanwendungen (DiGA) im Spannungsfeld von Fortschritt und Kritik
Schlieter, H., Kählig, M., Hickmann, E., Fürstenau, D., Sunyaev, A., Richter, P., Breitschwerdt, R., Thielscher, C., Gersch, M., Maaß, W., Reuter-Oppermann, M., Wiese, L.
Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz
In December 2019, digital health applications (Digitale Gesundheitsanwendungen, DiGA) were admitted to standard care in Germany and can therefore be reimbursed by the statutory health insurance funds to support patients in the treatment of diseases or impairments. There are now 48 DiGA (as of October 2023) in the directory of the Federal Institute for Drugs and Medical Devices (BfArM), used primarily in the areas of mental health, hormones and metabolism, and muscles, bones and joints. In this article, the "Digital Health" section of the Gesellschaft für Informatik e. V. (GI) describes current developments around DiGA as well as the current sentiment on topics such as user-centeredness, acceptance by patients and practitioners, and innovation potential. In summary, DiGA have developed positively over the last three years, with a slowly growing range of DiGA and covered service areas. Nevertheless, substantial regulatory decisions are still needed in some areas to establish DiGA in standard care in the long term. Central challenges include user-centeredness and the sustained use of the applications.
ESCADE - Energy-Efficient Large-Scale Artificial Intelligence for Sustainable Data Centers
Vogginger, B., Kappel, D., Faltings, U., Schäfer, M., Hantsch, A., Gawron, S., Dokic, D.
ISC High Performance
The power consumption of data centers has doubled in the last ten years and is projected to account for 13% of global energy consumption by 2030. AI is one of the biggest drivers of data center power consumption: training a natural language processing model such as GPT-3 consumes 936,000 kWh and generates approximately 284 t of CO2. ESCADE's goal is to significantly reduce the energy requirements of data centers by using world-leading hardware and software technologies to improve the environmental footprint of AI applications. The focus is on neuromorphic chip technologies, as these promise efficiency gains of up to 50% in training and up to 80% in inference of AI models. The SpiNNcloud data center at TU Dresden, based on neuromorphic SpiNNaker2 chips, will serve as a prototype for the evaluation of two use cases: visual computing for the steel industry and efficient training of language models for the digital industry. An AI sustainability framework will be developed to monitor the sustainability of AI systems, going far beyond mere power measurement. Concepts for integrating neuromorphic chips into classic GPU-based data centers will support the planning of more sustainable AI data centers. This makes a concrete contribution to decoupling economic growth and prosperity from resource consumption.
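As a quick sanity check of the figures quoted in the abstract, the implied grid carbon intensity and a hypothetical training saving can be derived directly. The short sketch below uses only the numbers stated above; the 50% saving is the project's upper-bound projection, not a measured ESCADE result.

```python
# Back-of-the-envelope check of the figures quoted in the abstract; the
# derived carbon intensity and the savings are illustrative, not ESCADE results.
gpt3_training_kwh = 936_000          # from the abstract
gpt3_training_tco2 = 284             # from the abstract

carbon_intensity = gpt3_training_tco2 * 1000 / gpt3_training_kwh   # kg CO2 per kWh
print(f"implied grid carbon intensity ≈ {carbon_intensity:.2f} kg CO2/kWh")

# Hypothetical saving if the projected neuromorphic efficiency gain held.
training_saving_kwh = gpt3_training_kwh * 0.50    # "up to 50% in training"
print(f"potential training energy saved ≈ {training_saving_kwh:,.0f} kWh")
```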
FAIR privacy-preserving operation of large genomic variant calling format (VCF) data without download or installation
Martins, Y.C., Bhawsar, P.M.S., Balasubramanian, J.B., Russ, D., Wong, W.S.W., Maaß, W., Almeida, J.S.
American Medical Informatics Association
Motivation: The proliferation of genetic testing and consumer genomics represents a logistic challenge to the personalized use of GWAS data in VCF format, specifically the challenge of retrieving target genetic variation from large compressed files filled with unrelated variation information. Compounding the data traversal challenge, privacy-sensitive VCF files are typically managed as large stand-alone single files (no companion index file) composed of variable-sized compressed chunks, hosted in consumer-facing environments with no native support for hosted execution. Results: A portable JavaScript module was developed to support in-browser fetching of partial content using byte-range requests. This includes on-the-fly decompression of irregularly positioned compressed chunks, coupled with a binary search algorithm that iteratively identifies chromosome-position ranges. The in-browser zero-footprint solution (no downloads, no installations) enables the interoperability, reusability, and user-facing governance advanced by the FAIR principles for stewardship of scientific data. Availability: https://episphere.github.io/vcf, including supplementary material.
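A rough Python analog of the in-browser approach described above is sketched below: it fetches one byte range of a remote bgzip-compressed VCF via an HTTP Range header and decompresses the complete BGZF blocks it received. The URL is a placeholder, the paper's actual module is JavaScript, and the binary search over chromosome-position ranges is omitted.

```python
# Rough analog (not the paper's JavaScript module): fetch a byte range of a
# remote bgzip-compressed VCF and decompress the complete BGZF blocks in it.
import zlib
import requests

VCF_URL = "https://example.org/cohort.vcf.gz"  # placeholder, not the paper's data


def fetch_range(url: str, start: int, end: int) -> bytes:
    """Request only bytes [start, end] via an HTTP Range header."""
    resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
    resp.raise_for_status()
    return resp.content


def decompress_blocks(chunk: bytes) -> str:
    """Decompress consecutive BGZF blocks (each block is a standalone gzip member)."""
    out, data = [], chunk
    while data:
        d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # gzip wrapper
        try:
            out.append(d.decompress(data))
        except zlib.error:       # corrupt trailing bytes: stop here
            break
        if not d.eof:            # last block was truncated by the range end
            break
        data = d.unused_data     # remaining bytes belong to the next block
    return b"".join(out).decode("utf-8", errors="replace")


# Example: pull the first 64 KiB and print the VCF header lines it contains.
text = decompress_blocks(fetch_range(VCF_URL, 0, 65_535))
for line in text.splitlines():
    if line.startswith("#"):
        print(line)
```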
From Data to Action: A Graph-Based Approach for Decision Support in Civil Protection Operations Planning
Janzen, S., Gdanitz, N., Kirchhöfer, M., Spanke, T., Maaß, W.
ISCRAM
In the face of increasing frequency and severity of crises such as natural disasters, pandemics, and geopolitical conflicts, civil protection organizations are crucial for recovery and support for affected populations. However, the efficiency and effectiveness of these organizations in crisis response are often hindered by the manual generation of operation plans, characterized by cognitive overload and limited analytic overview. Existing systems focus on detecting crises or post-crisis analysis, overlooking proactive planning. We present GRETA, a graph-based operational planning approach utilizing semantic historical data for better decision support. GRETA uses Operational Scenario Patterns to model operations, mapping them onto a knowledge graph in JSON-LD format, thus creating a structured representation of past data to improve future crisis response planning. We tested GRETA with Germany's Federal Agency for Technical Relief, analyzing over 157,450 historic operations from 2012-2022. Results show GRETA enhances plan efficiency, accuracy, and comprehensiveness, aiding especially inexperienced planners.
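For illustration only, the snippet below shows what a single historic operation might look like as a JSON-LD node; the "osp" vocabulary and every property name are invented placeholders and do not reproduce the paper's actual Operational Scenario Pattern schema.

```python
# Illustrative only: one historic operation as a JSON-LD node with a
# made-up vocabulary; not the schema used by GRETA.
import json

operation = {
    "@context": {
        "osp": "https://example.org/osp#",                 # placeholder namespace
        "xsd": "http://www.w3.org/2001/XMLSchema#",
    },
    "@id": "osp:operation/2021-0042",
    "@type": "osp:Operation",
    "osp:scenario": "flood",
    "osp:location": {"@type": "osp:Place", "osp:name": "Example County"},
    "osp:start": {"@value": "2021-07-14", "@type": "xsd:date"},
    "osp:deployedUnits": ["osp:unit/pumping", "osp:unit/bridge-building"],
    "osp:personnelCount": 120,
}

print(json.dumps(operation, indent=2, ensure_ascii=False))
```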
From Stateless to Adaptive: Dynamic Personalization for Conversational Language Models
Agnes, C.K. & Maass, W.
19th Women in Machine Learning workshop (WiML 2024) at NeurIPS
Explores dynamic personalization techniques to adapt conversational language models to user needs.
GRASPER: Leveraging Knowledge Graphs for Predictive Supply Chain Analytics
Janzen, S., Stein, H., Baer, S.
Springer
Supply chain disruptions in manufacturing are increasingly prevalent due to the complexity and lack of transparency beyond direct suppliers. Early-stage disruptions often remain undetected, propagate in the network and pose significant challenges to industries reliant on component-based products such as sensors, engines, and electronics. We introduce GRASPER, an AI-based approach for detecting hidden problems in supply chains through graph-theoretic analysis of component shortages. Unlike traditional top-down market analyses, GRASPER converts Bill-of-Material (BOM) data into a semantically enriched knowledge graph in JSON-LD, incorporating historical and current market data on prices, availability, and lead times. By applying and combining graph-theoretical measures, GRASPER identifies critical components, manufacturers, and suppliers that could jeopardize production. The model's effectiveness was validated using a prototype in the sensor manufacturing industry, leveraging an open dataset from social network analysis to assess its performance in pinpointing critical nodes in the knowledge graph. This approach enhances supply chain transparency and resilience, offering significant support for manufacturers in mitigating risks and making informed decisions.
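The kind of graph-theoretic screening described above can be sketched on a toy Bill-of-Material graph. The example below assumes networkx (the paper does not name a graph library), and all node names, edges, and measures chosen are invented for illustration.

```python
# Sketch under assumptions: a toy BOM graph screened with standard centrality
# measures, in the spirit of GRASPER's analysis; not the paper's data or model.
import networkx as nx

bom = nx.DiGraph()
# supplier -> manufacturer -> component -> product edges (illustrative)
bom.add_edges_from([
    ("SupplierA", "ChipMakerX"), ("SupplierB", "ChipMakerX"),
    ("ChipMakerX", "PressureSensorASIC"), ("ChipMakerY", "TempSensorASIC"),
    ("PressureSensorASIC", "SensorModule"), ("TempSensorASIC", "SensorModule"),
    ("SensorModule", "EngineController"),
])

# Betweenness highlights nodes whose failure cuts many supply paths;
# an in-degree of 1 flags single-source dependencies in this toy orientation.
betweenness = nx.betweenness_centrality(bom)
single_sourced = [n for n in bom if bom.in_degree(n) == 1]

for node, score in sorted(betweenness.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: betweenness={score:.3f}")
print("single-sourced nodes:", single_sourced)
```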
Generative AI in Anti-Doping Analysis in Sports
Rahman, M.R., Maaß, W.
Springer
Doping in sports involves the abuse of prohibited substances to enhance performance in sporting events. Blood doping, a prevalent method, increases red blood cell count to improve aerobic capacity, often through blood transfusions or synthetic erythropoietin (rhEPO). Current indirect detection methods require large amounts of data for analysis. In this paper, we study the use of generative modelling to generate synthetic blood sample data and improve anti-doping analysis in sports. We performed experiments on blood samples collected during a clinical trial. The dataset comprised haematological parameters from real blood samples, which were analyzed to understand the baseline characteristics. A Generative Adversarial Network (GAN) is used to capture the complexity and variability of real blood sample data. Results demonstrated that the model could successfully generate synthetic samples that closely resembled real samples, indicating its potential for augmenting datasets used in doping detection. This approach not only enhances the robustness of indirect doping detection methods by providing a larger dataset for analysis but also addresses ethical concerns related to privacy and consent in using athletes' biological data.
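A minimal sketch of the general technique, not the paper's model or data: a small GAN over a handful of tabular haematological features, assuming PyTorch and a placeholder tensor standing in for normalized real samples.

```python
# Minimal GAN sketch for tabular haematological features; architecture,
# feature count, and training data are illustrative placeholders.
import torch
import torch.nn as nn

N_FEATURES, LATENT = 4, 16

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, N_FEATURES)  # stand-in for normalized blood samples

for step in range(1000):
    batch = real_data[torch.randint(0, len(real_data), (64,))]
    fake = generator(torch.randn(64, LATENT))

    # Discriminator: push real samples toward 1, generated samples toward 0.
    d_loss = bce(discriminator(batch), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_samples = generator(torch.randn(100, LATENT)).detach()
```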
Geschäftsmodelle
Maaß, W.
Big Data: Grundlagen, Rechtsfragen, Vertragspraxis: Rechtshandbuch
Big Data is a foundation of the data-driven economy. Accordingly, the legal questions associated with the analysis of large volumes of data are of high practical relevance. A handbook that brings together the discussion of the essential legal questions of Big Data in a single work had, however, been missing so far. One reason may be that both the subject under consideration and its legal environment are developing highly dynamically in the still emerging fields of data law and artificial intelligence law. Neither the final direction nor the end point of this development can currently be foreseen even approximately. This handbook aims to close that gap: its goal is to meet the needs of legal practice for accessible, high-quality information on questions of practical relevance.
Improving Cardiovascular Health through AI-based Analysis of Genetic Risk Factors
Maaß, W., Stegle, O., Janzen, S., Hickmann, A.-M., Agnes, C., Rahman, M.R.
KonKIS
Cardiovascular diseases (CVDs) represent a global health challenge, causing approximately 17.9 million deaths worldwide in 2019, including 158,359 deaths in Germany. Traditional risk stratification methods focusing on age, blood pressure, and cholesterol levels often result in inaccurate risk assessments. The goal of stratification is to enable more precise and individualized diagnoses, prognoses, and therapies. By identifying specific subgroups within a broader patient population, medical research can develop targeted and effective treatment strategies tailored to the unique needs of these subgroups in the context of personalized medicine. The landscape of personalized medicine, particularly concerning CVDs, has advanced rapidly due to progress in genomics, data analysis, and artificial intelligence (AI). Incorporating non-traditional risk factors, such as genetic information, into clinical practice has the potential to enhance risk prediction accuracy and treatment stratification, significantly aiding in the prevention of complex diseases. For instance, coronary artery disease (CAD), a prevalent CVD, is influenced by multiple genetic variants (alleles) across the genome. The ApoE gene, which encodes the apolipoprotein E protein, plays a critical role in lipid metabolism and is linked to both neurodegenerative and cardiovascular diseases. The ApoE ε4 allele significantly increases the risk of Alzheimer's disease and CVDs, including CAD. Early identification of individuals with these genetic markers can lead to proactive cholesterol management, substantially reducing the risk of developing CAD. Polygenic Risk Scores (PRS), which aggregate the effects of numerous small genetic variations, have shown promise for personalized risk prediction. PRS can be particularly useful for individuals with a family history or genetic predisposition to CVDs. Although PRS are still in the research phase for CVDs, a growing body of studies demonstrates their utility in risk prediction, sparking interest in their clinical application. However, challenges related to data privacy and access to sensitive data persist. The transfer of genomic data from patients to physicians is heavily regulated by privacy laws such as the GDPR and the EU's AI Act. This research proposes a patient-centered approach that ensures the confidentiality of individual genomic data. Genetic risk factors will be analyzed using AI to enhance risk prediction and treatment stratification, supporting personalized cardiovascular medicine. The goal is to implement personalized risk prediction and treatment stratification for CVDs through AI-based analysis of genetic risk factors, embedded in a patient-centered approach that guarantees data confidentiality. The envisioned approach involves two modules. The patient module, client-side software on the patient's smartphone, enables local storage of genomic data and local calculation of CVD risk without transmitting genomic data to cloud providers. Patients can select body organs and send risk analysis results to a chosen physician. The analysis identifies potential disease traits and calculates individual PRS, with results condensed by an AI algorithm. Medical results are not displayed to patients but are interpreted by physicians. The physician module allows physicians to access and visualize results, along with summaries of relevant scientific publications. A language model derived from existing open-source large language models will generate controlled summaries, aiding physicians in personalized diagnoses.
This approach paves the way for tailored cardiovascular medicine, where data privacy and ethics play central roles. The resulting decentralized, AI-based platform for personalized genomic health services, exemplified by heart diseases, contrasts with traditional cloud-based methods by empowering patients with data control while enabling personalized medicine based on genomic data, which is expected to enhance patient trust in personalized medical services.
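To make the PRS idea concrete, the toy example below computes a polygenic risk score as a weighted sum of allele dosages and places it in a simulated reference population; the SNP identifiers, effect sizes, and population parameters are invented and carry no clinical meaning.

```python
# Toy polygenic risk score: a weighted sum of allele dosages, standardized
# against a simulated reference population. All values are placeholders.
import numpy as np

# Effect sizes (log odds ratios) from a hypothetical GWAS summary file.
effect_sizes = {"rs0000001": 0.12, "rs0000002": -0.05, "rs0000003": 0.30}

# One individual's allele dosages (0, 1, or 2 copies of the risk allele).
dosages = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}

prs = sum(effect_sizes[snp] * dosages[snp] for snp in effect_sizes)

# Report a percentile against a simulated reference distribution.
population = np.random.normal(loc=0.15, scale=0.2, size=10_000)
percentile = (population < prs).mean() * 100
print(f"raw PRS = {prs:.3f}, population percentile ≈ {percentile:.1f}")
```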
Incorporating Metabolic Information into LLMs for Anomaly Detection in Clinical Time-Series
Rahman, M.R., Liu, R., & Maass, W.
NeurIPS 2024 Workshop on Time Series in the Age of Large Models
Proposes incorporating metabolic data into large language models to detect anomalies in clinical time-series data.
Investigation of Artificial Mental Models for Healthcare AI Systems
Janzen, S., Maaß, W., Saxena, P.
DESRIST
In the evolving landscape of healthcare, personalized Artificial Intelligence (AI) systems are vital for patient-centered care. However, patients facing health challenges often struggle with cognitive limitations, leading to incomplete or biased data that hinders their decision-making abilities. To address this issue, this research in progress explores the concept of Artificial Mental Models (AMM) within healthcare AI systems. AMMs are meta-representations of patient mental models, capturing their understanding and assumptions about therapy and rehabilitation processes. We present a research design for investigating AMMs in healthcare AI systems that adopts a Design Science Research (DSR) approach consisting of four iterative phases: elicitation, individualization, action, and transfer. In the elicitation phase, discrimination-free basis models are generated through web scraping and synthetic patient data. The individualization phase fine-tunes AMMs for individual patients by incorporating diverse data sources. The action phase integrates AMMs into AI systems and evaluates their real-world impact. The transfer phase applies the resulting framework to support therapy decisions for patients with compromised decision-making abilities. This research aims to enhance therapy outcomes and patient care while advancing the understanding of mental models in healthcare.
Listening In: Social Signal Detection for Crisis Prediction
Janzen, S., Saxena, P., Baer, S., Maass, W.
HICSS 57/24. Hawaii International Conference on System Sciences (HICSS-2024)
Crises send out early warning signals, mostly weak and difficult to detect amidst the noise of everyday life. Signal detection based on social media enables early identification of such signals, supporting proactive organizational responses before a crisis occurs. Nonetheless, social signal detection based on Twitter data is not applied in crisis management in practice, as it is challenging due to the high volume of noise. With OSOS, we introduce a method for open-domain social signal detection of crisis-related indicators in tweets. OSOS works with multi-lingual Twitter data and combines multiple state-of-the-art models for data pre-processing (SoMaJo) and data filtration (GPT-3). It excels in crisis domains by leveraging a fine-tuned GPT-3FT (Curie) model and achieves benchmark results on the CrisisBench dataset. The method was exemplified within a signaling service for crisis management. We evaluated the proposed approach on a data set obtained from Twitter (X), measuring its performance in identifying potential social signals for energy-related crisis events.
Neuromorphic hardware for sustainable AI data centers
Vogginger, B., Rostami, A., Jain, V., Arfa, S., Hantsch, A., Kappel, D., Schäfer, M., Faltings, U., Gonzalez, H.A., Lui, C., Mayr, C., Maaß, W.
arXiv
As humans advance toward higher levels of artificial intelligence, it always comes at the cost of escalating computational resource consumption, which requires developing novel solutions to meet the exponential growth of AI computing demand. Neuromorphic hardware takes inspiration from how the brain processes information and promises energy-efficient computing of AI workloads. Despite its potential, neuromorphic hardware has not found its way into commercial AI data centers. In this article, we analyze the underlying reasons for this and derive requirements and guidelines to promote neuromorphic systems for efficient and sustainable cloud computing: We first review currently available neuromorphic hardware systems and collect examples where neuromorphic solutions outperform conventional AI processing on CPUs and GPUs. Next, we identify applications, models and algorithms which are commonly deployed in AI data centers as further directions for neuromorphic algorithms research. Last, we derive requirements and best practices for the hardware and software integration of neuromorphic systems into data centers. With this article, we hope to increase awareness of the challenges of integrating neuromorphic hardware into data centers and to guide the community toward sustainable and energy-efficient AI at scale.