Machine learning (ML) approaches such as deep learning are often considered “black boxes” because their internal decision-making processes are difficult for humans to understand. For some use cases, however, safety and trust are as important as high accuracy. In this talk, we will consider the following aspects of dependable and trustworthy ML: explainability, verification, and uncertainty quantification. The talk presents solution approaches, and their benefits are demonstrated with practical use cases.
Prof. Dr. Marco Huber, Professor for Cognitive Production Systems | Fraunhofer IPA
Prof. Huber received his diploma, Ph.D., and habilitation degrees in computer science from the Karlsruhe Institute of Technology (KIT). From 2009 to 2011, he headed the research group “Variable Image Acquisition and Processing” at Fraunhofer IOSB in Karlsruhe. After several years in industry, he was appointed professor for cognitive production systems at the University of Stuttgart in October 2018. He also directs two departments at Fraunhofer IPA in Stuttgart. His research interests include machine learning, planning and decision making, image processing, and robotics in the manufacturing domain.