
Reflections on Conversational Design (1)

What is a voice user interface? And what artifacts allow designers to express their intentions and share them with others? I’ve been mulling over something Rebecca Evanhoe said in a Botmock AMA from earlier this year about these very questions. She said a conversational designer needs to be able to design these three things:

  1. The things the computer says: the prompts I write as a conversational designer
  2. The flow of the conversation–the “conversational pathways”–arising from the things the computer says (and the expectations those prompts set)
  3. The interaction model behind it all, the “grammar” that anticipates what a user might say and maps those utterances to intents (sketched just below)

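To make that third piece a little more concrete, here is a minimal sketch of an interaction model in Python. The intent names, sample utterances, and the crude word-overlap matcher are all illustrative assumptions of mine, not any particular platform’s schema:

```python
# A toy interaction model: intent names mapped to the sample utterances we
# anticipate for them. Everything here (intents, phrasings, the matching
# heuristic) is an illustrative assumption, not any platform's real schema.
INTERACTION_MODEL = {
    "OrderDrinkIntent": [
        "i'd like a latte",
        "can i get a coffee",
        "a cappuccino please",
    ],
    "CancelOrderIntent": [
        "cancel my order",
        "never mind",
        "forget it",
    ],
}

def resolve_intent(utterance: str) -> str | None:
    """Crude matcher: pick the intent whose sample utterance shares the most
    words with what the user said; return None below two shared words."""
    words = set(utterance.lower().split())
    best_intent, best_overlap = None, 1  # require at least two shared words
    for intent, samples in INTERACTION_MODEL.items():
        for sample in samples:
            overlap = len(words & set(sample.split()))
            if overlap > best_overlap:
                best_intent, best_overlap = intent, overlap
    return best_intent

print(resolve_intent("could I get a coffee"))   # -> OrderDrinkIntent
print(resolve_intent("talk to a human being"))  # -> None (never anticipated)
```
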
I like this way of thinking about it. First, it highlights that the pathways (2) and interaction model (3) derive from the prompts we write (1). Those prompts are the beating heart and soul of conversational design. The syntax, grammar, and diction; the prosody, volume, and emphasis; the personality conveyed; the sounds used; all of this emerges from how we write the prompts.

And second, it made me realize something. I was going to argue that the prompts and pathways are human-centered, and that platform limitations only come into play once we start on the interaction model. To some extent, that’s true; but of course, not entirely. Yes, we have to start with how people actually talk, but we also have to anticipate the platform’s limitations from the very start.

And the interaction model is where we really have to anticipate what people will actually say. Robust anticipation is vital, because otherwise the conversation will falter: the agent that was designed (by me!) won’t know what someone meant.
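
To put the same point in code: below is a hedged sketch of what happens when anticipation fails, reusing the hypothetical interaction model above. Any utterance the model didn’t anticipate falls through to a generic fallback, and the best the agent can do is ask again.

```python
# Assumes the INTERACTION_MODEL and resolve_intent() sketched earlier.
def respond(utterance: str) -> str:
    """Route what the user said through the toy interaction model."""
    intent = resolve_intent(utterance)
    if intent is None:
        # Nothing in the model anticipated this phrasing, so the agent has no
        # idea what the person meant and can only reprompt.
        return "Sorry, I didn't catch that. Could you say it another way?"
    return f"(handle {intent} here)"

print(respond("can I get a coffee"))        # routed to OrderDrinkIntent
print(respond("actually, scrap all that"))  # falls through to the reprompt
```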
