[at] – Exploring the Evolving Landscape of XAI: The Current State of AI Explainability and Its Technological Applications

June 13th, 5:15 – 5:45 pm

XAI is playing an increasingly important role in state-of-the-art AI applications. This trend is driven, on the one hand, by legislation and regulation that require AI models in many domains to produce traceable decisions. On the other hand, it is driven by the widespread realization that the efficient use and user acceptance of AI require an understanding of the decision processes of complex models. It is therefore essential to develop an understanding of common XAI methods and how they can be integrated to increase the benefits generated by AI systems. In our talk we aim to discuss the current state of explainable AI in industry and the role it will play in data processing pipelines in the future. Additionally, we will shed light on how XAI can be integrated into current MLOps workflows and how it can be used to monitor AI systems online.
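As a flavor of the kind of integration the talk addresses, the sketch below shows one way an XAI method could serve as a monitoring signal in an MLOps workflow. It uses permutation feature importance from scikit-learn; the library choice, dataset, and drift threshold logic are illustrative assumptions, not the speakers' method.

```python
# Illustrative sketch (not the talk's implementation): use permutation
# feature importance as a monitoring signal for a deployed model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data standing in for a production dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Reference explanation: which features drive predictions at release time.
ref = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
baseline = ref.importances_mean

# Later, recompute on a fresh batch and compare: a large shift in the
# importance profile can flag data drift or a degraded model online.
new = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
drift = np.abs(new.importances_mean - baseline).max()
print(f"max importance shift: {drift:.3f}")
```

In a real pipeline the second batch would come from live traffic, and the shift metric would feed the same alerting used for accuracy or latency monitoring.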


Dr. Johannes Nagele, Practice Lead Data Lab, Data Science & AI | [at] Alexander Thamm GmbH

picture of Dr. Johannes Nagele

With a science background in biophysics and brain research, Johannes has over 10 years of experience in statistics, data science, machine learning, and artificial intelligence. He combines his many years of hands-on experience with conceptual approaches to the analysis of complex systems.

At [at], Johannes heads the Data Lab practice and supports his team as an expert and team lead in the implementation of numerous cross-industry projects.



Dr. Luca-René Bruder, Senior Data Scientist | [at] Alexander Thamm GmbH

picture of Dr. Luca Bruder

Luca has a science background in Cognitive Neuroscience and combines this with expertise in the fields of Reinforcement Learning and Bayesian Modelling. At [at], Luca leads a large research project on Explainable AI in autonomous driving and computer vision, as well as the Excellence Cluster on Explainable AI.