UX is now AX

Mental models are one of the most important concepts in UX design. A mental model is based on belief, not fact: it captures what users know (or think they know) about a system. Each user constructs their own mental model, internal to their own brain, so different users may build different models of the same user interface. And because mental models are embedded in brains rather than fixed in an external medium, they are always in flux.

In the age of AI, mental models become far more complex and dynamic, because the system itself is also in flux, often in ways that are not easily explained or predicted, especially when content is personalized.

New research from Penn State offers a framework for thinking about algorithmic experience (AX): a new way of considering UX when learning algorithms and autonomous decision-making are at play. The framework identifies two paths, cues and actions, that AI developers can focus on to gain trust and improve the user experience. For an ideal AX, users ought to be aware of how the algorithm functions and what it tracks in order to provide personalized services.

Cues are signals that trigger a range of mental and emotional responses; they are surface-level indicators of what the AI looks like or does. Cues can be obvious, modeled on intuitive, human-like features such as a human face on a robot or the voices of virtual assistants like Siri and Alexa. Or they can be more subtle, such as the way Netflix explains why it is recommending a certain movie. Each cue can trigger a distinct mental shortcut, or heuristic. The most important function of a cue is to build trust in the AI.
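
To make the Netflix-style explanatory cue concrete, here is a minimal sketch in TypeScript; the interface and names are hypothetical illustrations, not Netflix's actual API:

```typescript
// A recommendation paired with the signal that produced it, so the UI
// can render an explanatory cue ("Because you watched ...") next to it.
interface Recommendation {
  title: string;
  // The user behavior the algorithm keyed on; surfacing it is the cue.
  basedOn: { kind: "watched" | "liked" | "trending"; source?: string };
}

// Turn the algorithm's internal reason into a human-readable cue.
function explanationCue(rec: Recommendation): string {
  switch (rec.basedOn.kind) {
    case "watched":
      return `Because you watched ${rec.basedOn.source}`;
    case "liked":
      return `Because you liked ${rec.basedOn.source}`;
    case "trending":
      return "Popular with viewers like you";
  }
}

const pick: Recommendation = {
  title: "The Lighthouse",
  basedOn: { kind: "watched", source: "The Witch" },
};
console.log(`${pick.title}: ${explanationCue(pick)}`);
// "The Lighthouse: Because you watched The Witch"
```

The design point is that the reason travels with the recommendation, so the interface can always disclose what the algorithm keyed on rather than leaving the user to guess.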

Two biases come into play here: automation bias, where humans place too much trust in machines, and algorithmic aversion, where people distrust machines, perhaps because a machine has fooled them in the past. Designers should assess their cues with both of these biases potentially in play.

“If you provide clear cues on the interface, you can help shape how the users respond, but if you don’t provide good cues, you will let the user’s prior experience and folk theories, or naive notions, about algorithms take over.”

S. Shyam Sundar, James P. Jimirro Professor of Media Effects, per Penn State News

And because AI is interactive, the other side of the AX equation is action: how the AI interacts with people to shape the user experience. Action is about collaboration, how an AI works with a user on a mutual task. Virtual assistants rely heavily on the action route; for the user, it is all about the interaction and the relative costs versus benefits of collaborating.

The trick with both cues and actions is to strike the right balance. A cue that does not transparently tell the user that AI is at work in the device can trigger a negative response, but if the cue provides too much detail, people may try to corrupt, or “game”, the interaction with the AI.
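
One way a design team might operationalize that balance is to treat transparency as a tunable disclosure level rather than an on/off switch. This is a sketch with hypothetical names and levels; the research does not prescribe an implementation:

```typescript
// Disclosure levels for an algorithmic cue, from opaque to fully detailed.
// None risks algorithmic aversion; Full risks users gaming the system.
enum Disclosure {
  None,    // no mention that AI is involved
  Summary, // "Recommended for you by our algorithm"
  Factors, // names the main signals, e.g. "based on your watch history"
  Full,    // exposes weights and rules: the easiest level to game
}

// Default to the middle of the range: enough detail to build trust,
// not enough to make the ranking trivially exploitable.
function defaultDisclosure(userSignalsDistrust: boolean): Disclosure {
  // If a user signals distrust (an aversion cue), step up the detail.
  return userSignalsDistrust ? Disclosure.Factors : Disclosure.Summary;
}

console.log(defaultDisclosure(true) === Disclosure.Factors); // true
```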

“If your smart speaker asks you too many questions, or interacts with you too much, that could be a problem, too. People want collaboration. But they also want to minimize their costs. If the AI is constantly asking you questions, then the whole point of AI, namely convenience, is gone.”

S. Shyam Sundar, James P. Jimirro Professor of Media Effects, per Penn State News
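
That trade-off can be framed as a simple cost/benefit gate. Below is a minimal sketch, with hypothetical thresholds not drawn from the research, of a clarifying-question budget for a virtual assistant:

```typescript
// Decide whether a voice assistant should ask a clarifying question
// or act on its best guess, trading accuracy against annoyance.
interface PendingRequest {
  confidence: number;     // assistant's confidence in its best guess, 0..1
  questionsAsked: number; // questions already asked this session
}

const MAX_QUESTIONS_PER_SESSION = 2; // hypothetical interaction budget
const CONFIDENCE_FLOOR = 0.6;        // below this, a question may pay off

function shouldAskClarifyingQuestion(req: PendingRequest): boolean {
  // Asking costs the user time; only do it when the guess is shaky
  // and the session's interaction budget is not yet exhausted.
  return (
    req.confidence < CONFIDENCE_FLOOR &&
    req.questionsAsked < MAX_QUESTIONS_PER_SESSION
  );
}

// Low confidence, first question of the session: ask.
console.log(shouldAskClarifyingQuestion({ confidence: 0.4, questionsAsked: 0 })); // true
// Budget spent: act on the best guess instead of asking again.
console.log(shouldAskClarifyingQuestion({ confidence: 0.4, questionsAsked: 2 })); // false
```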

If we want AI to increase human agency, add convenience, preserve privacy, develop trust, and be more personalized, designers who understand the new rules of AX will have a huge advantage: they will have developed the skills required to cater to each person’s unique circumstances. AX is the art of factoring those circumstances into the inferences the AI system makes.

