New Mechanism for complexity: How to enable understanding of emergent phenomena through the lens of Machine-Learning

Document Type

Master Thesis

License

CC-BY-NC-ND

Abstract

In this thesis, I address a topic in the epistemology of Machine Learning (ML). With its outstanding predictive accuracy and its ability to handle large amounts of data, ML is increasingly applied in complex-systems science. However, ML models are often opaque and are sometimes described as “ruthless correlation extractors”, which makes them ineffective for understanding at the process level. I seek to improve upon the concept of “link uncertainty”, introduced by Emily Sullivan, who addressed the question of how we can gain understanding through ML. In the picture she draws, mechanistic knowledge is merely a passive precondition for an abstract level of understanding that is not further specified. Instead, I focus on mechanisms as the desired target of understanding, grounding my analytical terminology in the recent movement of “New Mechanism”. Against the backdrop of a symbiotic (statistical/mechanistic) modelling framework, I first draw on case studies that apply ML in climate science, and then centre my ideas on an ML model called AgentNet, which deals with agent-based complex systems in a physically transparent way. Based on my analysis, I introduce a novel concept that I label the “Correspondence Principle for Mechanistic Interpretability”, or “CPMint” for short. It features a threefold correspondence scheme between an ML model and its target system: first on the ontological level, second on the functional level, and third on the predictive, phenomenological level, thus serving as a recipe for establishing “mechanistic interpretability”. In contrast to Sullivan’s “link uncertainty”, CPMint capitalises on introducing physical transparency into the ML model itself, which makes it a guide for setting up ML models that aim to contribute to procedural knowledge of complex systems.

Keywords

Mechanism; Understanding; Machine-learning; Complexity