Talk

Model Governance and Explainable AI as tools for legal compliance and risk management

On their journey towards machine learning (ML) in production, organizations often focus solely on MLOps, building the technical infrastructure and processes for model training, serving, and monitoring. However, as ML-based systems are increasingly employed in business-critical applications, ensuring their trustworthiness and legal compliance becomes paramount. Here, highly complex “black box” AI systems pose a particular challenge.

Using the example of ML-based recruiting tools, we show how even seemingly innocuous applications can carry significant risks. We then demonstrate how organizations can use Model Governance and Explainable AI to manage these risks by enabling stakeholders such as management, non-technical employees, and auditors to assess the performance of AI systems and their compliance with both business and regulatory requirements.
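As a rough illustration of the kind of Explainable AI technique the talk refers to, the sketch below computes permutation feature importance for a toy recruiting model. Everything here (the model, the feature names, the synthetic data) is a hypothetical assumption for demonstration purposes, not the tooling presented in the talk: shuffling one feature column and measuring the resulting accuracy drop reveals how much the model actually relies on that feature, so an auditor can check, for example, that a sensitive attribute has no influence on hiring decisions.

```python
# Hypothetical sketch: permutation feature importance for a toy recruiting
# model. Model, features, and data are illustrative assumptions only.
import random

random.seed(0)

FEATURES = ["years_experience", "skill_score", "zip_code"]

def model_predict(row):
    # Toy "hiring" model: recommends a candidate when experience and
    # skill are high enough; it ignores zip_code entirely.
    years, skill, _zip_code = row
    return 1 if years >= 3 and skill >= 0.5 else 0

def accuracy(rows, labels):
    # Fraction of rows where the model's prediction matches the label.
    return sum(model_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, n_repeats=10):
    # Average accuracy drop when one feature column is randomly shuffled:
    # a feature the model never uses shows an importance of zero.
    baseline = accuracy(rows, labels)
    drops = []
    for _ in range(n_repeats):
        column = [r[feature_idx] for r in rows]
        random.shuffle(column)
        shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                    for r, v in zip(rows, column)]
        drops.append(baseline - accuracy(shuffled, labels))
    return sum(drops) / n_repeats

# Synthetic candidates: (years_experience, skill_score, zip_code)
rows = [(random.randint(0, 10), random.random(), random.randint(10000, 99999))
        for _ in range(200)]
labels = [model_predict(r) for r in rows]  # labels taken from the model itself

for i, name in enumerate(FEATURES):
    print(f"{name}: importance = {permutation_importance(rows, labels, i):.3f}")
```

In this sketch `zip_code` comes out with an importance of exactly zero, because the toy model never reads it; in a real audit, a nonzero importance on a protected attribute would be a compliance red flag. Production systems would typically use library implementations such as scikit-learn's `permutation_importance` or SHAP values instead of hand-rolled code.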


Date: 2022-06-15
Time: 10:45 – 11:15
Conference / Event: WeAreDevelopers World Congress 2022
Venue: CityCube Berlin, Messedamm 26, Berlin

