Authors: Thilo Spinner, Udo Schlegel, Hanna Schaefer, Mennatallah El-Assady
Abstract: Interactive and explainable machine learning have become invaluable for promoting trust and accountability in automated decision-making. In this paper, we propose a framework for interactive and explainable machine learning. Its core is the XAI pipeline, comprising three phases: (1) the understanding of machine learning models, (2) the diagnosis of model limitations using different explainable AI methods, and (3) the refinement and optimization of models. The pipeline is embedded in a larger framework of eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for explainable and interactive machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user study with six participants across different expertise levels to examine how our system influences their workflow and decision-making processes. The results confirm that our tightly integrated system leads to an interactive and more informed machine learning process.