UCAI ’20: Workshop on User-Centered Artificial Intelligence
Magdeburg, Germany, 9th September 2020
On Human-Centered AI in Medicine
by Andreas Holzinger (Medical University Graz, Austria, and University of Alberta, Edmonton, Canada)
Data-driven machine learning is currently extremely successful; deep learning, for example, exceeds human performance in certain tasks, even in the medical domain. One challenge lies in the need for large amounts of top-quality data and the lack of contextual knowledge, which limits generalization and causal understanding. The most successful approaches are becoming increasingly complex and thus opaque. With growing legal requirements for AI in medicine, explainability is attracting increasing interest. While explainable AI deals with implementing transparency and traceability for statistical black-box machine learning methods, causability addresses quality questions and human factors, e.g. “what is a good explanation?”. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by explainable AI methods. The future could lie in a synergistic hybrid approach that integrates existing a priori knowledge and human experience into statistical learning in order to exploit the full benefit of data-driven methods. The basis for such an approach is interactive machine learning with the human-in-the-loop, because a human domain expert complements AI with implicit knowledge. Humans are robust, can generalize from few examples, understand relevant representations and concepts, and are able to explain causal links between them. Such interaction between human and AI will help to enhance robustness, reliability, accountability, fairness and trust in AI, and foster ethically responsible machine learning with the human-in-control.
About the Speaker
Andreas Holzinger works on data-driven Artificial Intelligence (AI) and machine learning (ML), motivated by efforts to improve human health. Andreas pioneered interactive ML with the human-in-the-loop, paving the way towards explainability and causability. Andreas promotes a synergistic approach of Human-Centered AI (HCAI) to put the human in control of AI and align it with human values, privacy, security and safety. Andreas’ contributions to the international research community are reflected in 13k+ citations, an h-index of 55 and a Scopus h-index of 37. Andreas leads the Human-Centered AI Lab (Holzinger Group) at the Medical University Graz and is Visiting Professor for explainable AI at the University of Alberta, Edmonton, Canada. Since 2016 Andreas has been teaching machine learning in health informatics at Vienna University of Technology. Andreas serves as a consultant for the Canadian, US, UK, Swiss, French, Italian and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission. He is on the advisory board of the Artificial Intelligence Strategy “AI Made in Germany 2030” of the German Federal Government. Andreas’ goal is to augment human intelligence with artificial intelligence to help solve problems in health informatics. Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second Ph.D. in Computer Science from TU Graz in 2003. He serves as Austrian Representative for Artificial Intelligence in IFIP TC 12, in which capacity he organizes the IFIP Cross-Domain Conference “Machine Learning & Knowledge Extraction (CD-MAKE)”, and is a member of IFIP WG 12.9 Computational Intelligence, the ACM, IEEE, GI, the Austrian Computer Society and the Association for the Advancement of Artificial Intelligence (AAAI). Andreas Holzinger was elected a full member of the Academia Europaea – the European Academy of Sciences – in 2019 in the Informatics section.