12-01, 12:00–12:30 (UTC), Talk Track II
Numerous tools generate “explanations” for the outputs of machine-learning models and similarly complex AI systems. However, such “explanations” are prone to misinterpretation and often fail to enable data scientists or end-users to assess and scrutinize “an AI.” We share best practices for implementing “explanations” that their human recipients understand.
Methods and techniques from the realm of artificial intelligence (AI), such as machine learning, find their way into ever more software and devices. As more people interact with these highly complex and opaque systems in their private and professional lives, there is a rising need to communicate AI-based decisions, predictions, and recommendations to their users.
So-called “interpretability” or “explainability” methods claim to allow insights into the proverbial “black boxes.” Many data scientists use tools like SHAP, LIME, or partial dependence plots in their day-to-day work to analyze and debug models.
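For illustration, a typical exploratory workflow with one of these tools might look like the following sketch (assuming scikit-learn and the shap package; the dataset and model are placeholders, not anything specific to the talk):

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder data and model, just to have something to explain
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)              # model-specific explainer for tree ensembles
    shap_values = explainer.shap_values(X.iloc[:100])  # one attribution per feature and row
    shap.summary_plot(shap_values, X.iloc[:100])       # the kind of plot inspected while debugging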
However, as numerous studies have shown, even experienced data scientists are prone to interpret the “explanations” generated by these tools in ways that support their pre-existing beliefs. This problem becomes even more severe when “explanations” are presented to end-users in hopes of allowing them to assess and scrutinize an AI system’s output.
In this talk, we’ll explore the problem space using the example of counterfactual explanations for price estimates. Participants will learn how to employ user studies and principles from human-centric design to implement “explanations” that fulfill their purpose.
No prior data science knowledge is required to follow the talk, but a basic familiarity with the concept of minimizing an objective function will be helpful.
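To make the counterfactual idea concrete, here is a deliberately simplified toy sketch that frames the search as minimizing an objective function, as mentioned above; the price model, synthetic data, and distance penalty are illustrative assumptions, not the talk’s actual setup:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.linear_model import LinearRegression

    # Toy "price estimate" model on synthetic listing features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                    # e.g. size, rooms, distance to centre
    y = X @ np.array([30.0, 10.0, -5.0]) + 100.0     # synthetic price in k EUR
    price_model = LinearRegression().fit(X, y)

    x_orig = X[0]
    target_price = price_model.predict([x_orig])[0] + 20.0  # "what would raise the estimate by 20k?"

    def objective(x):
        # stay close to the original listing while hitting the target estimate
        prediction_gap = (price_model.predict([x])[0] - target_price) ** 2
        distance = np.abs(x - x_orig).sum()
        return prediction_gap + 1.0 * distance

    counterfactual = minimize(objective, x_orig, method="Nelder-Mead").x
    print(counterfactual - x_orig)  # the feature changes that would be shown as the explanation

The resulting feature differences (roughly: “if the flat were this much larger, the estimate would rise by 20k”) are the kind of counterfactual statement a user would see; how to present them so they are not misread is what the talk focuses on.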
No previous knowledge expected
My journey into Python started in a physics research lab, where I discovered the merits of loose coupling and adherence to standards the hard way. I like automated testing, concise documentation, and hunting complex bugs.
I recently completed my PhD on the design of human-AI interactions and now work on using Explainable AI to open up new areas of application for AI systems.