EURO 2024 Copenhagen
Abstract Submission

2630. Risk, Uncertainty and AI: non-probabilistic methods for anticipating and preventing AI risks

Invited abstract in session MB-11: Behavioral Decision Analysis I, stream Behavioural OR.

Monday, 10:30-12:00
Room: 12 (building: 116)

Authors (first author is the speaker)

1. Vicki Bier
University of Wisconsin-Madison
2. Alexander Gutfraind
Department of Medicine, Loyola University of Chicago

Abstract

The rapid advance of AI has created risks that are difficult to foresee, let alone quantify, making AI risks an area of deep uncertainty. Despite this, researchers have begun applying probabilistic methods to AI risks at a coarse level (e.g., the likelihood of existential catastrophe). This type of analysis may be useful for supporting strategic policy-level considerations, but it is not particularly useful for design or operational decisions.

We argue that many practical AI problems could be addressed by drawing on a toolkit of non-probabilistic risk-management methods. A large class of such strategies, drawn from fields such as safety engineering, product management, and military planning, could be used both to anticipate AI risks and to mitigate them even when they are imperfectly understood or cannot be quantified. These methods range from qualitative fault trees and event trees to simple but effective solutions like checklists, what-if thinking, and pre-deployment testing (even by non-expert users).

We distinguish between safe design and rapid reaction to undesired behaviors. Reactive solutions include contingency planning, monitoring, and anomaly detection. These strategies can also operate in parallel, creating defense in depth. Drawing on these non-probabilistic methods should make it possible to develop safer AI applications while allowing the field to advance.

Status: accepted
