Designing for Trust

“Don’t let your UI write a check your AI can’t cash.” – Eytan Adar

The hottest takes on AI swing between doomsday prophecies and techno-utopian bliss. Meanwhile, most people are just trying to decide how to invite AI into their lives, and a lot of that decision comes down to trust.

And trust is a design problem. When an interface over-promises, users assume the model is a mind-reader. When it under-communicates, they miss out on game-changing capabilities.

As I’ve watched AI seep into everyday tools, I keep seeing the same misfires, particularly around ghost promises: shiny capabilities that hint at magic the model can’t consistently deliver.

In my own practice, when I’m stung by an AI failure, it takes me months before I’ll give the same task another try, if I try again at all.

Getting this calibration wrong isn’t just annoying; it erodes credibility.

So what does good look like?

Friction-light transparency: In UX, the prevailing trend is to reduce friction and make things easy. With AI, when should friction be introduced to maintain trust and encourage users to keep their hands on the wheel?

Progressive autonomy: How can we let users control how much autonomy the system has? And how can those controls be integrated seamlessly into the experience, so users are reminded that they are in the driver’s seat?

Our job as designers and researchers is to tune that trust dial and build interfaces that keep users engaged and informed, skeptical and empowered.

If your UI keeps cashing checks your AI can’t cover, the bank of user trust will bounce you out the door.
