Operationalizing Consent as an Abstraction for Responsible Autonomy in Sociotechnical Systems
Document Type
Master Thesis
License
CC-BY-NC-ND
Abstract
As intelligent agents increasingly operate in complex sociotechnical systems, aligning their behavior with human norms becomes critical. Consent, a fundamental concept in human social interaction, has recently been proposed as a formal mechanism for achieving responsible autonomy in sociotechnical systems. This thesis investigates the effectiveness of the consent model introduced by Apeiron et al. through comprehensive agent-based simulations. Our central goal is to empirically evaluate how well this model, as a framework for the responsible resource-sharing problem in sociotechnical environments, enables norm-compliant, responsible behavior.
After exploring the proposed consent model, we investigate system-level and persona-level consequences of varying degrees of consent sensitivity through extensive simulations with three different agent personas.
Finally, because LLM-based agents are gaining popularity in fields where they frequently interact with other human and computational agents, we conduct a feasibility study demonstrating that large language models can internalize and reason about the consent model.
Ultimately, this work aims to aid sociotechnical system governance efforts by providing the first systematic operationalization of consent for the responsible resource-sharing problem in sociotechnical systems.
Keywords
Responsible Autonomy; MAS; Intelligent Agents; Agent-Based Simulation