5th Virtual Meetup: March 7th, 2024, 16:00 CET

The fifth virtual meetup will take place on March 7th, 2024, at 16:00 CET. This time we will have three talks, one from academia and two from industry:

Unveiling Adaptive AI: A Comprehensive Classification for AI-Based System Adaptivity
by Doreen Engelhardt, Audi AG

Abstract: With the increasing technical feasibility of artificial intelligence (AI), many companies are incorporating AI-based adaptivity and personalization into consumer products. These products are often marketed with buzzwords like “smart,” “intelligent,” “adaptive,” or “personalized.” However, these terms may represent significantly different levels of adaptivity, and the absence of a standardized taxonomy can lead to misunderstandings. To address this issue, Doreen Engelhardt is leading an ISO group developing a standard for a straightforward classification of adaptive systems. She will present the taxonomy for classifying the level of adaptivity, empowering you to categorize adaptive systems, decode marketing catchphrases, and communicate more clearly and transparently about adaptive systems with stakeholders such as manufacturers, suppliers, and researchers. This, in turn, will help you improve the user experience of adaptive systems.

About the speaker: Doreen Engelhardt is a User Experience Developer at AUDI AG, where she has worked in the ‘Innovations Interior User Experience’ department since 2017. Her projects range from human-AI interaction and cultural adaptivity to innovative interior cockpit designs. She is also the project leader for AI-Interaction in the federally funded project KARLI. Prior to this, she was a project leader at AUDI AG focusing on integrating artificial intelligence into the voice user interface. Her academic background includes a diploma in Applied Media and Communication Studies from the Technical University of Ilmenau.

Escaping the Trap of Distraction: AI-Supported Multitasking in Human-Computer Interaction
by Philipp Wintersberger, University of Applied Sciences Upper Austria (Campus Hagenberg)

Abstract: In the future, humans will cooperate with a wide range of AI-based systems in both work (e.g., decision and recommender systems, language models, or industrial robots) and private (e.g., fully or semi-automated vehicles, smart home applications, or ubiquitous computing systems) environments. Cooperation with these systems involves shared (i.e., concurrent multitasking) and traded (i.e., task switching) interaction. Since frequent attention switching is known to decrease performance and increase error rates and stress, future systems must treat human attention as a limited resource in order to be perceived as valuable and trustworthy. This talk addresses the problems that emerge when users frequently switch their attention between multiple systems or activities and proposes a new class of AI-based interactive systems that integrally manage user attention. To this end, we designed a software architecture that uses reinforcement learning and principles of computational rationality to optimize task switching. Computational rationality allows the system to simulate and adapt to different types of users, while reinforcement learning requires no labeled training data, so the concept can be applied to a wide range of tasks. The architecture has demonstrated its potential in laboratory studies and is currently being extended to support a variety of multitasking situations. The talk concludes with a critical assessment of the underlying concepts and a research agenda for improving cooperation with computer systems.

About the speaker: Philipp Wintersberger is a Professor of Interactive Systems at the University of Applied Sciences Upper Austria (Campus Hagenberg) and a lecturer at TU Wien. He leads an interdisciplinary team of scientists on FWF-, FFG-, and industry-funded research projects focusing on human-machine cooperation in safety-critical AI-based systems. He has (co)authored numerous works published in major journals and at major conferences (such as ACM CHI, IUI, AutomotiveUI, and Human Factors), and his contributions have won several awards. Further, he is a member of the ACM AutomotiveUI steering committee, has served HCI conferences in various roles (Technical Program Chair AutomotiveUI’21, Workshop Chair MuM’23, Diversity and Inclusion Chair MuC’22), and is one of the main organizers of the CHI workshop on Explainable Artificial Intelligence (XAI).

Ein Blick in die Zukunft von großen Sprachmodellen in der Industrie (A Look into the Future of Large Language Models in Industry)
by Tobias Grosse-Puppendahl, Porsche AG

Abstract: Large language models such as ChatGPT have fundamentally changed how industry views AI. Where large amounts of training data were once required, tasks today are simply described. Pre-trained models enable a quick start even for people without a professional background in AI development. AI thus becomes an integration task rather than a development task. This talk addresses the question of which developments we can expect in the future and how industry must adapt to this new challenge.

About the speaker: Tobias Grosse-Puppendahl is an AI architect at Porsche with a focus on machine-learning infrastructure. He contributes to academic research in ubiquitous computing and human-computer interaction. He is particularly interested in new ways of sensing and interacting with the environment using data and AI. Previously, he was a researcher in the Sensors & Devices group at Microsoft Research Cambridge and a Morgan Fellow at Downing College, University of Cambridge. He conducted his doctoral research at the Fraunhofer Institute for Computer Graphics Research IGD.