Paper ID: 2311.00710

Interactive AI Alignment: Specification, Process, and Evaluation Alignment

Michael Terry, Chinmay Kulkarni, Martin Wattenberg, Lucas Dixon, Meredith Ringel Morris

Modern AI enables a high-level, declarative form of interaction: users describe the intended outcome they wish an AI to produce, but do not create the outcome themselves. In contrast, in traditional user interfaces, users invoke specific operations to create the desired outcome. This paper revisits the basic input-output interaction cycle in light of this declarative style of interaction and connects concepts in AI alignment to define three objectives for the interactive alignment of AI: specification alignment (aligning on what to do), process alignment (aligning on how to do it), and evaluation alignment (supporting users in verifying and understanding what was produced). Using existing systems as examples, we show how these user-centered views of AI alignment can be used descriptively, prescriptively, and as an evaluative aid.

Submitted: Oct 23, 2023