Toward A Normative Account of Machine Learning Explanation Via Levels of Abstraction

Document Type

Master Thesis

License

CC-BY-NC-ND

Abstract

The aim of this thesis is to bridge the domain of explainable artificial intelligence (XAI) and the normative criteria required for the responsible deployment of AI models. The innate difficulty of understanding complex information-processing systems, such as those constituting the field of artificial intelligence, motivates the need for methods to untangle their inner workings. To this end, I argue for the use of a fundamental epistemological method - that of Levels of Abstraction (LoAs) - to clarify the workings of such systems. I begin by articulating Kareem Khalifa's predominant account of scientific understanding, arguing that opacity, as the main obstacle to understanding, is a phenomenon relative to those seeking an explanation (Section 2). After describing the Method of LoAs, I motivate a transition from Marr's levels of analysis to LoAs in the domain of AI in order to ground normative criteria for comparing explanations (Section 3). I then provide further examples of the usefulness of LoAs in AI for conceptualizing the responsibility gap and for understanding advanced properties of AI models (Section 4).

Keywords

Levels of Abstraction; Machine Learning; Understanding; Explanation; Epistemology; Normativity; Marr's Levels