Authors: Damon Crockett, Joe Walsh, Klaus Ackermann, Andrea Navarrete, Rayid Ghani
Abstract: The recent spread of machine learning methods into critical decision-making, especially in public policy domains, has necessitated a focus on their intelligibility and transparency. The literature on intelligibility in machine learning offers a range of methods for identifying the model variables most important for making predictions, but measures of predictor importance may be poorly understood by human users, leaving the crucial matter unexplained: why the predictor in question is important. There is a critical need for tools that can interpret predictor importances in a way that helps users understand, trust, and act on model predictions. We describe a prototype system for achieving these goals and discuss a particular use case: early intervention systems for police departments, which model officers' risk of having "adverse incidents" with the public.