Ideally, artificial intelligence agents should help humans, but what does that mean when humans want conflicting things? My colleagues and I[1] have developed a way to measure how well the goals of a group of humans and AI agents are aligned.
The alignment problem[2] – making sure that AI systems act according to human values – has...