T15: AI life cycle in industry: case-studies

Monday, 24 July 2023, 13:30 - 17:30 CEST (Copenhagen, Denmark)

 

Dr. Christof Budnik

Senior Key Expert for Verification & Validation; Siemens Corporation, Princeton, NJ, USA
Research topics: robustness of AI, adversarial attacks, and verification of safety-critical systems.

 

Dr. Helmut Degen

Senior Key Expert for User Experience; Siemens Corporation, Princeton, NJ, USA
Research topics: explainable AI and efficiency. Helmut drives explainable AI (XAI) research at Siemens with a focus on human-computer interaction.

 

Ralf Gross (Master of Science)

Innovation Manager for Industrial AI; Siemens AG, Nuremberg, Germany
Research topics: industrial-grade AI.

 

Dr. Michael Lebacher

Research Scientist - Applied Data Science; Siemens AG, Munich, Germany
Research topics: explainable AI, statistical modelling.

 

Stefan Hagen Weber (Master of Business Informatics)

Senior Key Expert for Visual Analytics; Siemens AG, Munich, Germany
Research topics: visual-based explainable AI.

 

Objectives:

Engage with the tutorial audience to present and discuss the following topics:

  • What interfaces and relationships between the disciplines of business, user experience, explainability algorithms, and validation are needed for effective explainability?
  • What are the needs and user tasks of the different personas using AI/ML-enabled systems when it comes to explainability?
  • What are the core elements of explainability needed to achieve adoption of AI/ML technology in industry?
  • How does explainability increase the value of AI/ML?
  • What are the boundaries and technical limitations of explainability?
  • How can these insights be generalized and applied to other use cases and industries?

We expect the tutorial audience to discuss the topics and share their own experiences and knowledge.

 

Content:

  • Business: How companies in the ecosystem of production industries (potentially) benefit from AI/ML along the AI system lifecycle, and what we learned from 30+ market research interviews (with data scientists, project managers, salespeople, CTOs, etc.) regarding business, user experience, explainability algorithms, and validation.
  • User experience: Presents human-computer interaction research results for explainable AI: several industrial user roles, role-specific interaction concepts (wireframes), and a comprehensive explainability model. Designing for and with AI often leads to new interaction patterns, which means the future user interaction may not map to a corresponding, known user task. Without an established reference usage, designing for innovation can be a challenge. In a hands-on exercise, we practice the technique of “task questions,” which supports design for innovative products and user involvement concepts.
  • Explainability algorithms: Presents the current state of XAI algorithms in practice and academia, including the required model design as well as their current limitations in generating the explanations needed by different user roles.
  • Validation: Explains the requirements for and application of explainable machine learning models. Two validation problems are addressed: ensuring the reliability of an ML model, and ensuring that the explainability content it provides to end users is valid.

 

Benefits:

  • Shows the relationship between explainability and AI across disciplines, including business, user experience, explainability algorithms, and validation.
  • The presented example is an industrial case study with applicability to other industries.
  • The facilitators reserve time during the tutorial to discuss topics of interest and their applicability to other industrial use cases.

 

Target Audience:

  • Practitioners: R&D, technical teams, ML teams, product management, product owners.
  • Academia: Professors, PhD students, and master's students.

Bio Sketch of Presenters:

Dr. Christof J. Budnik is a Senior Key Expert Engineer for Model-based Testing and Verification of Intelligent Systems at Siemens Technology in Princeton, NJ. He leads research and business projects in several industrial domains, striving for innovative test technologies to solve real-world problems. Before joining Siemens, he was head of the software quality department for a German company in the global smart card market. Dr. Budnik obtained his Ph.D. in Electrical Engineering in 2006 from the University of Paderborn, Germany, which received an award from the faculty. He is the author of more than sixty published contributions in international journals and conferences, some of which received best paper awards. Dr. Budnik regularly serves as a program chair for several software engineering conferences and workshops and as an associate guest editor and reviewer for selected journals.

 

Helmut Degen is co-chair of the "AI in HCI" Conference (Artificial Intelligence in Human-Computer Interaction, affiliated with the HCI International Conference) that brings together researchers from industry and academia to share advances in the research area of AI in HCI. He received a PhD (Dr. phil.) from the Freie Universität Berlin and a master's in Computer Science (Diplom-Informatiker) from the Karlsruhe Institute of Technology (both in Germany).

 

Ralf Gross graduated as Master of Science in Electrical Engineering, Electronics and Information Technology at the Friedrich-Alexander-University Erlangen-Nuremberg in 2019. After graduation, he joined Siemens Digital Industries as an AI Engineer in pre-development, developing AI-based solutions for shopfloor environments and leading research projects on AI technology. Today, Ralf works as Innovation Manager and Group Lead for an innovation team focusing on industrial AI, aspiring to combine cutting-edge technology with customer needs in production environments.

 

Michael Lebacher graduated as Master of Science in Statistics at the Humboldt-Universität zu Berlin in 2016. This was followed by doctoral studies at the Department of Statistics at LMU Munich. In 2019 he obtained his PhD with the thesis "New Approaches for Statistical Network Data Analysis," centered around intrinsically interpretable stochastic models for network modelling. Today, Michael works as a Research Scientist for applied data science at Siemens AG, dealing with topics that include machine learning, explainable AI (XAI), and statistical modelling.

 

Stefan Hagen Weber graduated as Master of Business Informatics at the University of Mannheim before joining Siemens AG in 1999. Within Siemens AG, he drives Visual Analytics technology for decision support and business intelligence in combination with machine learning algorithms. As a Senior Key Expert for Visual Analytics, he has years of experience in strategic roadmap and portfolio planning, implementation, project management, and technical consultancy for Visual Analytics, data integration, and data science.