Why AI Safety Requires Uncertainty, Incomplete Preferences, and Non-Archimedean Utilities

By: Alessio Benavoli, Alessandro Facchini, Marco Zaffalon

Published: 2025-12-30

#cs.AI

Abstract

This theoretical paper argues that AI safety frameworks must incorporate uncertainty, incomplete preferences, and non-Archimedean utilities. It contends that current approaches to AI alignment are likely insufficient unless these complexities are modeled explicitly, since they are essential for building safe, robust AI systems that can handle real-world ambiguity and ethical dilemmas.
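Two of the abstract's concepts can be illustrated concretely. Non-Archimedean utilities are often formalized as lexicographic orderings, in which a higher-priority value (e.g. safety) dominates any finite amount of a lower-priority one (e.g. task reward); incomplete preferences allow two options to be simply incomparable. The sketch below is our own minimal illustration under those standard formalizations, not code or notation from the paper:

```python
# Illustrative sketch (hypothetical names, not from the paper).
# Non-Archimedean utilities via lexicographic tuples: Python compares
# tuples lexicographically, so no finite reward offsets lower safety.

def lex_prefers(u, v):
    """True iff utility tuple u = (safety, reward) is strictly preferred to v."""
    return u > v  # lexicographic comparison

safe_low_reward = (1, 0.1)     # (safety level, task reward)
unsafe_high_reward = (0, 1e9)  # huge reward cannot buy back a safety violation
assert lex_prefers(safe_low_reward, unsafe_high_reward)

# Incomplete preferences via interval-valued utilities: overlapping
# intervals are incomparable, so the preference relation is partial.
def interval_prefers(a, b):
    """Strict preference between utility intervals (lo, hi); None = incomparable."""
    if a[0] > b[1]:
        return True
    if b[0] > a[1]:
        return False
    return None  # overlap: no preference either way

assert interval_prefers((0.6, 0.9), (0.1, 0.5)) is True
assert interval_prefers((0.2, 0.8), (0.4, 0.6)) is None
```

Tuple comparison gives the lexicographic order for free; the interval rule is one common way to induce a deliberately incomplete (partial) preference relation.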
