PAPIs 2018 has ended
Machine Learning Interpretability in the GDPR Era

Despite breakthroughs in statistical performance, the widespread adoption of algorithmic decision making has also led to a rise in “black box” machine learning with unintended negative consequences. In response, attention to methods that improve the interpretability and understanding of machine learning has increased. These methods are useful not only for explaining how decisions are made, but also for improving models and, ultimately, for building the trust needed to adopt machine learning systems. While interpretable machine learning methods have innate advantages, an explicit requirement of interpretability was only recently formalized in the General Data Protection Regulation (GDPR), in force in the EU since 25 May 2018. The regulation, which protects the privacy and governs the use of EU citizens’ data, specifically outlines a “right to explanation” with regard to algorithmic decision making. This talk explores the definition of interpretability in machine learning and its trade-offs with complexity and performance, and surveys the major methods used to interpret and explain machine learning models in the context of GDPR compliance.
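As an illustrative sketch (not material from the talk itself), one widely used model-agnostic explanation technique of the kind such surveys cover is permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. The model, data, and helper names below are hypothetical, chosen only for the example.

```python
# Sketch of permutation feature importance on synthetic data.
# y depends strongly on feature 0, weakly on feature 1, not at all on feature 2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A fitted "black box" stand-in: ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

def permutation_importance(X, y, predict, n_repeats=10):
    """Mean increase in MSE when one column is shuffled; larger = more important."""
    importances = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature/target link
            deltas.append(mse(y, predict(Xp)) - baseline)
        importances.append(float(np.mean(deltas)))
    return importances

imp = permutation_importance(X, y, predict)
```

Because the technique needs only predictions, not model internals, it applies equally to an opaque model, which is why it appears in many interpretability toolkits.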

Gregory Antell

Product Manager & Machine Learning Scientist, BigML

Tuesday October 16, 2018 12:00pm - 12:20pm EDT
Horace Mann