How humans judge machines

As AI diffuses through everything and is deployed at a micro scale, across many small interactions, understanding how humans judge machine decisions before systems are designed becomes an important skill and an essential step in the process.

Humans judge machines according to a simple principle:

  • People judge humans by their intentions
  • People judge machines by their consequences

The implications of this principle are far-reaching. It means that people using AI need to be able to intuit the human intent behind it. It means that humans are forgiven for mistakes as long as their intentions are good, while machines are not forgiven when a decision produces an unfair outcome, even if that outcome was accepted as a possibility from the outset.

Because humans attribute agency (and therefore a degree of intent) to anything that is “active,” mental models of how people understand the intent of the machine, and of the humans behind it, are just as important as anticipating consequences.

For AI designers, this insight means it’s especially important to understand intent and consequences from both perspectives and to match them with communication, onboarding and mental models.

In the early stages of design, two canvases can be useful for defining how humans judge machines:

  • Step 1: use the intentions and consequences canvas to understand how the roles of humans and machines affect what should and shouldn’t be decided by an algorithm.
  • Step 2: use the how-humans-judge-machines canvas to develop ideas for mental models and for how success and failure will be judged.
