Chatbots! After floating around the edges of the useful web for a few years now, the right underlying technology is finally making it plausible for companies and organisations to build and run their own online bots. That’s cool, and it’s brought to mind a lesson that I first learned in 2009, when we launched an experimental (now dead) chatbot at NAB.
For writers, that lesson is:
Let your bot be a bot. Make it sound like a bot, let it behave like a bot and, most of all, tell people it’s a bot.
Your bot is here to help people learn stuff and complete tasks. You’re not here to beat the Turing Test.
Over-humanised bots are just another version of skeuomorphism gone bad.
We didn’t get this right in 2009. We used a photo of a woman in a headset, I wrote in as chatty a voice as I could, and although we used terms like “online assistant” (too vague) and later “virtual assistant”, we weren’t clear enough about who or what this thing actually was.
The photo didn’t last long, but even the image we replaced it with was probably too human:
I read a lot of chat logs, and one of the biggest frustrations people had was noticing partway through a conversation that the nice call centre lady they thought they were dealing with was actually artificial. Things would start off okay, but then one of the bot’s answers wouldn’t feel quite right, or it would say the same thing twice. Cue angry questions, LOTS OF ALL CAPS, and more exclamation marks than I ever needed to see. A couple of people composed full letters of complaint directly into the tiny little chat window.
People didn’t care about talking to a machine. They cared that the machine had tried to trick them.
But when people realised from the start that they were dealing with a machine, they were happier to do things like adjust their language if a question didn’t work the first time. If you know you’re talking to a computer, it’s not a big deal to ask things twice. If you think you’re talking to a human, it’s one of life’s major frustrations. Getting the language right puts people in the right frame of mind, and it gives your bot the space it needs to be itself.
On top of that, when people treat a bot like a bot, things work better. They do things like use simpler grammar (no double negatives, for example) and stick to a single topic.
The Turing Test is a moonshot
Robotic stuff is exciting. It’s cool, and over time it’s taking a larger and larger place in the general geekworld landscape. That’s all awesome. But.
But one of the things a lot of us have heard about, and want to see achieved, is defined by the Turing Test. Without getting into what exactly a successful Turing Test would look like, let’s just say that it boils down to someone not realising that their interlocutor is a bot rather than a human.
This sounds like a great aim, and it is. There are news stories and books and blogs and podcasts all about it. There’s serious research happening right now that will get us closer and closer to winning. But this is a moonshot goal – more a scientific milestone than an actual test of a good chatbot.
A good chatbot is an interface. It tells you stuff. It answers questions, and guides you through processes. You don’t need to fake full human intelligence and wit to do any of that. Machines don’t need to replicate what humans can do to be useful. We ought to just build the best machines that we can. The Turing Test, as the mythical highest standard that a bot could reach, is distracting.
So unless you’re trying to win a Nobel Prize, keep it simple. If your bot can deduce what people want to know or do, then pass on information or facilitate the right interactions, that’s pretty damn great.
We’ve already learned this design lesson
Remember a few years ago when we all finally got over skeuomorphic design and agreed that flat design – elements created to exist and work best on a 2D screen – made more sense? My take on chatbots is like that. We don’t want a computerised attempt at something that looks and feels human. We want an interface that makes the most of its own reality – essentially, a pre-programmed call-and-response machine – and works well within it.
Doing it bot style
For writers, bots introduce some new things to think about.
- The voice (and, to a lesser degree, the tone) in which you write needs to match the medium. That will probably mean discovering a new voice that fits your brand.
- You’ll probably be writing chunks with even less control over their order than you’re used to. Test your chunks individually, and in lots of different orders, to see how they sound. (Once you give up trying to fake humanity and let your bot be a bot, this gets much easier.)
- Using the same words and terms as your users is even more important in bot world than in most others. Luckily, your chat logs can help you learn fast. Don’t be surprised if you see an input style that hovers somewhere between “search term” and “normal chat”. Adapt to it.
- The faster you can discover the right direction for a chat to go in, the better. It’s okay, and useful, to have an ordered conversation ready to go once you know what someone wants to do. Ask direct questions and set funnels up behind them.
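To make that last point concrete, here’s a minimal sketch of the “pre-programmed call-and-response machine” idea: match a message against known trigger words, then walk a scripted funnel of direct questions. Everything here (the intent names, trigger phrases, and wording) is invented for illustration, not taken from any real bot.

```python
# Illustrative sketch only: a bot that openly sounds like a bot,
# matches input against simple keyword triggers, and answers with
# a pre-written funnel of direct questions. All names are made up.

INTENTS = {
    "card": ["card", "lost card", "stolen card"],
    "balance": ["balance", "how much", "account balance"],
}

FUNNELS = {
    "card": [
        "Is your card lost, or stolen?",
        "Which card is it: credit, or debit?",
    ],
    "balance": [
        "Which account: everyday, or savings?",
    ],
}

def match_intent(message: str):
    """Return the first intent whose trigger phrase appears in the message."""
    text = message.lower()
    for intent, triggers in INTENTS.items():
        if any(trigger in text for trigger in triggers):
            return intent
    return None

def respond(message: str) -> list:
    """Reply with a funnel of direct questions, or a bot-voiced rephrase prompt."""
    intent = match_intent(message)
    if intent is None:
        # Because the bot admits it's a bot, asking people to rephrase is fine.
        return ["I'm a bot, and I didn't catch that. Try a few plain keywords?"]
    return FUNNELS[intent]
```

So `respond("I lost my card")` hands back the card funnel’s questions, and anything unrecognised gets an honest, bot-voiced request to try again — no faked humanity required.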
And thanks to @amythibodeau for the nudge I needed to write this post
I’d been kicking the ideas in this post around for a while now. When I saw this tweet of Amy Thibodeau’s this morning, I finally got around to writing it all out. Thanks, Amy, for unknowingly getting me motivated. 🙂
— Amy Thibodeau (@amythibodeau) October 11, 2016
This post is 983 words long with an average reading grade of 8.2.