Why I Started Microblogging

So, I’ve started to microblog. I was inspired by Alan Jacobs’ recent article about getting back to the open web via micro.blog. One of the big reasons he supports starting a microblog this way is that he owns the content; it’s part of his own domain, his turf. And that’s appealing to me. Additionally, he (and I) can cross-post micro posts to Twitter “without stepping into the minefields of Twitter itself.” And that’s really appealing. Further, I often run across things that I’d like to share but that don’t deserve their own post. Outside of Twitter, how do I share them? A microblog creates a space for that. It becomes, in Alan Jacobs’ words, “a way for me to put everything I do online that is visually small — anything small enough not to require scrolling: quotes, links, images, audio files — in one place, and a place on my own site.”

So that’s why I started. But I wasn’t sure how I’d use my microblog when I did start, or if I’d even keep it up. Eight days in, I’ve had the chance to reflect on how I’m using it: what have I learned about the practice, and about myself?

  • I’ve enjoyed linkblogging. When I read something, I can share the link along with a quote or reflection on how it affected me. It’s a great space to think out loud.
  • It’s become my social media home base. I don’t have Facebook or Instagram, but now I have a place to share photos. I have Twitter, but as mentioned, it lets me side-step actually being on Twitter while still sharing on the platform. These blog posts, too, appear on my micro.blog.
  • It’s a record of my thinking and reading that I can look back on. And thanks to IFTTT, it’s all backed up to my Day One journaling app, so I can see it side by side with my personal journal entries.
  • Each day for the past four days, I’ve posted a photo for the August 2020 photo challenge. A few people have complimented me on what I’ve shared, and I’ve been able to do the same for others. In a smaller community, that just seems to mean more.
  • As Austin Kleon notes, blogging is a great way to discover what you have to say. My microblog has given me a chance to have thoughts, and this longer blog has given me a space to figure out what it means–to discover what it is I have to say. In other words, my microblog is where I collect the raw materials; my blog is where I assemble them into questions and, perhaps, answers. It’s a place where I figure out what I really think.

I anticipate that my microblog will evolve, and I’ll find new purposes for it, while shedding others. But whatever it becomes, I have to say–I’ve enjoyed it so far. And perhaps that’s the most important thing. It’s a space for short reflection or ideation, coupled with a small community, all on my own domain and turf. And that’s awesome.

Reflections on Conversational Design (2)

Smart Speaker on a desk

Spoken sentences and words are the heart and soul of a voice experience. It’s in these moments, when we “have the mic” (so to speak), that designers can establish personality; express generosity; and create a sonic world for another person to inhabit.

But what goes into crafting, and critiquing, these spoken sentences? Where visual design has some foundational pillars–typography, layout, and color–what does conversational design have that’s similar? What is the “color” of conversation? The “layout” of VUI design? The “typography” of speech? What, in other words, are the different disciplines that a conversational designer can draw from to craft and critique a conversational experience?

(Asked still another way, what disciplines does a conversational designer need to be fluent in? If I were hiring, what would I be looking for? If I were training a designer, what would I be drawing from?)

Here are some of the areas that I think are important to understand:

  • Linguistics. Written words and spoken words are different. We tend to write in lengthy sentences with a careful structure and a wider vocabulary. We tend to talk in chunks of seven words or so, interrupting ourselves as we go along, and using a simpler, shorter vocabulary. We use more vocatives, we take shortcuts–contractions, ellipsis, other “reduced forms”–and we tend to repeat ourselves, using “bundles” of relatively formulaic phrases. And of course, lest we forget, speech is interactive, which sets it apart entirely from any kind of academic or news-like writing. A conversational designer should know the basics of how the spoken word differs from the written word, and why that’s important–which is fundamentally a linguistic question. And while linguistics is a large and intimidating field, most of the “speech versus writing” questions are tackled in sociolinguistics–a field that also talks about…
  • The Properties of Speech. Not only are spoken syntax and grammar different, but there’s an added element: speech is, well, speech. It’s spoken! And so it contains “paralinguistic” properties: breath, tone and intonation, prosody, volume, pitch, the speed at which we speak. Speech also has to be vocalized with a voice that has a certain timbre, or particular qualities (e.g., a baritone, smooth, female voice or a low, gravelly, male voice). A conversational designer needs to know the basics of speech, and how it’s controlled with whatever technology they’re working with: whether it be a text-to-speech engine or a voice actor in a studio. (For one way this looks in practice, see the SSML sketch after this list.)
  • Stance and Persona. Technically, this is directly linked to the first two points, but it bears repeating: speech expresses an attitude toward the other person. We might refer to them as “sir” or “dude”; we might say “Please pass the butter” or (with a blunt imperative) “Pass the butter–now.” All of these suggest emotion and feeling toward the other person in the conversation. This all combines to express a personality: bubbly and outgoing, or short and direct, or clear and professional, or casual and friendly. A conversational designer should know how to establish this “art direction” for voice experiences, and what personality they want to project. This is vital because people will make judgments about your conversational interface’s personality, even when they should know it’s a computer. That’s what people do.
  • Memory. Unlike graphical interfaces, which linger in space to be viewed and reviewed, voice interfaces do not linger. Once something is spoken, it’s gone, and resides only in memory. But memory is limited. So we have to be keenly aware of cognitive load: we can’t give too many options, nor say too much, in any one turn, lest we overwhelm a person’s memory (or patience). Many of conversational design’s “best practices” come down to keeping prompts short, sweet, and simple–working with human memory instead of against it.
  • Sound and Music. Traffic, a honk, hammers, and birds–suddenly, you’re in the heart of a bustling city. A soft vibration, gongs, and steady throbs of “om,” and you’re now in a monastery, ready to meditate. The familiar three notes, and suddenly, you’re prepared to hear broadcasters or comedians from NBC. Sound and music can transport you. Or with a short “ching,” they can inform you (You just got paid!). They can establish mood, or signal the completion of a task. They can change your emotions, or invoke memories. A conversational designer should know the basics of sound: pitch, rhythm, timbre, and melody, and the varieties of information and emotion they can convey.
  • Platform Limitations and Opportunities. As much as I’d like to design for Jarvis, most voice interfaces are far dumber than that–the burden of weak AI. People can’t speak with computers as naturally as they’d like and expect to be understood. For example, if someone wants a large pizza with sausage, pepperoni, and pineapple, but with a gluten-free crust–well, with current limitations, we have to ask for only some of that information at a time. We have to be aware of the limitations, and help the user work with those limitations instead of against them, lest we provoke confusion, frustration, or anger. And we need to be aware of the opportunities each platform and technology affords. These technologies and abilities are always changing; a conversational designer needs to stay abreast of the trends and technologies.
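
A quick illustration of the “Properties of Speech” point above: most text-to-speech engines (including Alexa’s and Google Assistant’s) accept SSML markup for controlling pauses, pacing, pitch, and emphasis. Here’s a minimal sketch–the prompt text and attribute values are invented for illustration, held in a Python string for convenience:

```python
# A minimal SSML sketch: shaping pace, pitch, emphasis, and pauses in a
# spoken prompt. The prompt text and attribute values are invented.
prompt = """
<speak>
  Welcome back.
  <break time="300ms"/>
  <prosody rate="90%" pitch="-10%">Take a moment to settle in.</prosody>
  <break time="500ms"/>
  You have <emphasis level="moderate">three</emphasis> new messages.
</speak>
""".strip()

print(prompt)
```

Even this tiny sample shows how much of a prompt’s character lives outside the words themselves: slow the rate and lower the pitch, and the same sentence reads as calm rather than chipper.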

There are other things we need to consider, of course. The general “UX” process of testing with real people; a consideration of context; how voice interacts with graphical interfaces, such as on a smart speaker with a screen; recommended best practices; the nuances of creative writing and crafting a brilliant persona; the drawbacks of VUI design, and discerning which use cases are appropriate for voice and which are not; something of the history of the field. The list could go on. But the points above cover what I think someone should know, first and foremost, to design the foundational artifact of VUI design: prompts and speech. Armed with these concepts, I’ve found it easier to both describe and prescribe the right prompts–prompts that accomplish whatever goal the user has in mind, in the right way.

Reflections on Conversational Design (1)

What is a voice user interface? And what artifacts allow designers to express their intentions, and share them with others? I’ve been mulling over something Rebecca Evanhoe said in a Botmock AMA from earlier this year about these very questions. She said a conversational designer needs to be able to design these three things:

  1. The things the computer says: the prompts I write as a conversational designer
  2. The flow of the conversation–the “conversational pathways”–arising from the things the computer says (and the expectations provided)
  3. The interaction model behind it all, the “grammar” that anticipates what a user might say, and links those utterances to intents

I like this way of thinking about it. First, it highlights that the pathways (2) and interaction model (3) derive from the prompts we write (1). Those prompts are the beating heart and soul of conversational design. The syntax, grammar, and diction; the prosody, volume, and emphasis; the personality conveyed; the sounds used; all of this emerges from how we write the prompts.

And second, it made me realize something. I was going to argue that the prompts and pathways are really human-centered, and that we only have to deal with platform limitations when we start on the interaction model. To some extent, that’s true; but not entirely. Yes, we have to start with how people actually talk, but we also have to anticipate the platform’s limitations from the very start.

And the interaction model is where we really have to anticipate what people will actually say. A robust anticipation is vital, because otherwise, the conversation will falter: the agent that was designed (by me!) won’t know what someone meant.
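
To make that concrete, here’s a minimal sketch of what an interaction model looks like. Everything in it–the intent names, the sample utterances, the toy matcher–is hypothetical, and real platforms (Alexa, Dialogflow) generalize far beyond exact string matches:

```python
# A sketch of an interaction model: intents mapped to the sample
# utterances that train the platform's "grammar." All names and phrases
# are hypothetical; real NLU generalizes beyond exact matches.
interaction_model = {
    "OrderPizzaIntent": [
        "I want a large pepperoni pizza",
        "order me a pizza",
        "can I get a pizza with sausage",
    ],
    "CheckOrderStatusIntent": [
        "where is my order",
        "any updates on my pizza",
    ],
}

def match_intent(utterance: str) -> str:
    """Toy matcher: exact (case-insensitive) matches only."""
    for intent, samples in interaction_model.items():
        if utterance.lower() in (s.lower() for s in samples):
            return intent
    return "FallbackIntent"  # an utterance we failed to anticipate

print(match_intent("Any updates on my pizza"))  # CheckOrderStatusIntent
print(match_intent("gimme a calzone"))          # FallbackIntent
```

The fallback branch is exactly where conversations falter: every utterance we fail to anticipate lands there.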

Kranzberg’s Laws

Melvin Kranzberg

In October 1985, Melvin Kranzberg (an eminent historian of technology) gave an address outlining six “laws” he’d noticed as he studied technology. As he points out, these aren’t “laws in the sense of commandments but rather a series of truisms” about how technology develops.

Before diving into the laws, though, he makes a few points about technological determinism: the idea that “Technology… has become autonomous and has outrun human control.” Not all scholars, he points out, agree. Lynn White Jr., for example, has said that technology “merely opens a door, it does not compel one to enter.” But as Kranzberg rightly points out in a provocative extension of the metaphor:

Nevertheless, several questions do arise. True, one is not compelled to enter White’s open door, but an open door is an invitation. Besides, who decides which doors to open–and, once one has entered the door, are not one’s future directions determined by the contours of the corridor or chamber into which one has stepped? Equally important, once one has crossed the threshold, can one turn back?

These are really deep questions, and ones to which Kranzberg admits “we historians do not know the answer.” Technological determinism is a complex idea. Concretely, I wonder: was the internet inevitable? What do “the contours of the corridor or chamber” made by social media, smart speakers, and artificial intelligence look like? Can we turn back? Is there any reason we’d want to?

I don’t know. But I resist the idea of technological determinism. I’m not keen on what Mike Sacasas has called “the Borg complex”: the idea that “resistance is futile.” I’ve always been of the opinion that “what we can see, we can change.” Or to put that in the words of Marshall McLuhan, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”

But I digress–back to Kranzberg’s address. His six laws:

  • Law 1: Technology is neither good nor bad; nor is it neutral. Here, like the historian of technology he is, he’s talking about social change. Introduce the internet (or any technology) and it will change things in ways expected and unexpected. It’s the law of unintended consequences: there will be unexpected benefits and drawbacks, and often, perverse results–effects contrary to what was intended. And the effects will differ across cultures and contexts. (He gives a great example of the pesticide DDT in both the United States and India.)
  • Law 2: Invention is the mother of necessity. In other words, once a technology is made, it will necessitate the improvement of a variety of other inventions so it can work most effectively. (Or as Andy Crouch puts it, less forcefully, “What does this artifact make possible? What can people do or imagine, thanks to this artifact, that they could not before?”)
  • Law 3: Technology comes in packages, big and small. He gives the example of radar, which a variety of people claim to have invented, because it’s a complex technology made up of many pieces, all invented at different times and places. In a class I taught on voice technology, I was fond of illustrating the many technologies underlying a virtual assistant, all of which silently and invisibly allow us to play music or turn a light on in a room.
  • Law 4: Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions. Consider the adoption of Google Glass, which has run (and may always run) into privacy concerns. Kranzberg gives the example of communal kitchens, which would reduce housework but conflict with our modern idea of a home.
  • Law 5: All history is relevant, but the history of technology is the most relevant. It’s a bold and arguable claim, but I think he makes a good case for it.
  • Law 6: Technology is a very human activity–and so is the history of technology. “Or to put it another way, man could not have become Homo sapiens, ‘man the thinker,’ had he not at the same time been Homo faber, ‘man the maker.’”

It’s a fantastic address, and a clarifying one. I hope to write more about these laws, and some reflections on what they mean for designers and technologists. But at a minimum, they encourage me to think more explicitly about the history of technology, “the most relevant” history of all. Even if that claim is hyperbolic, it’s surely necessary to think about how things got to be the way they are. As Kranzberg says, “the history of technology is the story of man and tool–hand and mind–working together.”

UX as creative tension

Last week, I wrote about Matt Damon Smith’s definition of user experience, which is centered around the journey between where a user is (point A) and where a user wants to be (point B). This journey assumes there’s a gap between the current state and the desired future. All of this reminds me of Peter Senge’s concept of “creative tension”, which he defines as:

The juxtaposition of vision (what we want) and a clear picture of current reality (where we are relative to what we want) generates what we call “creative tension”: a force to bring them together, caused by the natural tendency of tension to seek resolution…

Peter Senge, The Fifth Discipline (p. 132)

Elsewhere, he compares this tension to a rubber band:

Senge’s concept of Creative Tension

I love how Senge links this back to “the natural tendency of tension to seek resolution.” Consider the example of music: great musicians build their songs around musical tension and resolution, the idea that certain chords want to “resolve” down to a home chord. Another example is marketing, which is–for better or for worse–about creating tension, prompting a “this is what your life could be like!” moment where the product or service can fill in the gap.

As user experience designers, at least one of our purposes is helping users resolve the tension between what is and what ought to be. And ideally, resolving it should be as delightful as hearing musical notes “land” in a pleasing place.

the interface makes the experience

For the past several years, whenever the “UX vs UI” debate has come up amongst my designer friends, I’ve held the position that UX is not UI. UI design is one of many skills involved in strong user experience design: a good UX designer needs to be familiar with information architecture, graphic design, requirement writing, copywriting, speaking to programmers, and so on. A person who only excels in UI design is a mere pixel pusher.

I still agree with this. But in working through Matt Smith’s Shift Nudge course on UI design, I realized something. This distinction works if I’m describing things from the perspective of my industry, which is focused on UX designers. In other words, this is a debate about roles, skills, and tasks.

But Matt Smith looks at this debate by talking about digital experiences themselves, rather than describing UX designers’ roles and skills; he’s describing the product, and not the architect. And a person experiences a digital product through the user interface. If a digital experience is about taking a user from where they are (point A) to where they’d like to be (point B), this is principally accomplished through the interface itself.

A drawing of Matt Smith’s conception of UX and UI design; adapted from his Shift Nudge curriculum

To put it another way: when describing the industry, a UX designer is much more than a UI designer. But when describing an actual experience, the interface design is the core that dictates the quality of that experience.

This explains, perhaps, why UX is often seen as interface design. Digital products are experienced by way of an interface! Although many other skills are needed to fashion the right experience, in the end, it is the interface that makes the experience.

This applies mainly to a single digital product or interface. The longue durée of UX–the plurality of experiences with a brand across a variety of products, touchpoints, and interactions–is usefully described as CX, though even there, digital interfaces of some kind can make up a majority of the interactions.

“Helvetica”

I watched the Helvetica documentary this evening, all about–you guessed it–Helvetica. My own New York City features prominently, especially because the subway system is littered with Helvetica.

That word–littered–has such a negative connotation, as if Helvetica were a disease. And certainly, some of the people interviewed in the documentary think so. It was fun to see which of them possessed a dislike (or hatred) of the typeface, like Erik Spiekermann. It was also interesting to see who really liked it, and felt they could do amazing things with just three or four fonts, like Massimo Vignelli. Some of the people interviewed feel that type is a crystal goblet: you shouldn’t see the goblet, but the content within it. And some want the type to express something.

My main question going in was: is Helvetica a good font? I left with the impression that… it is. It’s spoiled by overuse and familiarity, but on its own merits, it’s legible and clear. Lars Müller called Helvetica “the perfume of the city,” and that appears to be true–not just of New York City, but of everywhere. The vignettes and montages in this documentary were really good at conveying just how ubiquitous this typeface really is.

Another question, which assumes that Helvetica is actually alright: is it possible to improve it? (Some of the people interviewed joked that Helvetica was the End of History as far as type was concerned.) The question of improvement could be taken at least a few ways. First, can it be improved in a rationalist sense? Can we find a more geometrically pleasing, scientifically “good” ecology of type forms that combine to create–well, something better than what we’ve got? Second, someone more steeped in romanticism or expressivism would probably laugh, and say–absolutely. Helvetica represents capitalism, or bureaucracy, or corporations, or the Vietnam war. It’s got to change, as all things must, to better capture the zeitgeist and make way for a new generation, who have values beyond just “ideal proportions” and “rationalistic geometry.”

Anyway, it was a good documentary. A bit dated–the MySpace part made me wistful and nostalgic for the days when profile pages could have so much personality–but still good. This 2017 AIGA profile was a good 10-year-anniversary piece that I enjoyed, and it suggests the documentary still holds up.

Cosmic Calendars and the Powers of Ten

Recently, while visiting a showcase at a Herman Miller exhibit, I learned about Charles and Ray Eames–a power couple if there ever was one. Among their many contributions is a short film they made together in 1968 (and released in its final form in 1977), one I’d seen before without realizing who was behind it: the famous “Powers of Ten” video. It opens with a picnic; the narrator (voiced by the famed physicist Philip Morrison) says this:

We begin with a scene just one meter wide, viewed from just one meter away. Now every ten seconds we will look from ten times farther away, and our field of view will be ten times wider.

Powers of Ten

From there, it zooms out on a fast-paced journey until the screen encompasses superclusters and galaxies upon galaxies. And once there, it zooms back in to “our next goal, a proton in the nucleus of a carbon atom beneath the skin on the hand of a sleeping man in the picnic.”

On these scales, you can observe wonderful patterns. For example, in “Powers of Ten,” the narrator pauses to “notice the alternation between great activity and relative inactivity,” something he calls a rhythm. I love that: that the entire universe, as we zoom inward and outward, contains a rhythm–a “strong, regular, repeated pattern,” suggesting that even the universe has a pulse.

Powers of Ten seems related to Carl Sagan’s “Cosmos,” which aired just three years later, in 1980. In one episode, he describes the Cosmic Calendar–a pedagogical exercise where he “compresses the local history of the universe into a single year,” a unit of time that most of us can grasp and hold onto. He goes on to highlight that “if the universe began on January 1st, it was not until May that the Milky Way formed,” and that our sun and earth formed sometime in September. Once he arrives at human history, he changes the scale “from months to minutes… each minute 30,000 years long.” It’s wonderful. (A recently updated version, incorporating newer science and CGI, is narrated by Neil deGrasse Tyson.)
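
Sagan’s scale is easy to check with a little arithmetic. Assuming the roughly 15-billion-year age estimate he worked with at the time (today’s figure is closer to 13.8 billion years), a back-of-the-envelope calculation recovers his numbers:

```python
# Back-of-the-envelope check of Sagan's Cosmic Calendar scale, assuming
# the ~15-billion-year age estimate he used circa 1980 (the modern
# figure is closer to 13.8 billion years).
AGE_OF_UNIVERSE_YEARS = 15e9

years_per_day = AGE_OF_UNIVERSE_YEARS / 365.25
years_per_minute = years_per_day / (24 * 60)

print(f"one calendar day    = about {years_per_day:,.0f} years")    # ~41 million
print(f"one calendar minute = about {years_per_minute:,.0f} years") # ~28,500
```

That last figure, rounded for the ear, is Sagan’s “each minute 30,000 years long.”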

As noted, both of these videos came out close to each other. I suspect they stem from a growing realization of the chronometric revolution–a term that David Christian coined to describe the development, in the middle of the twentieth century, of “new chronometric techniques, new ways of dating past events.” What did these new methods mean? “For the first time, these techniques allowed the construction of reliable chronologies extending back before the first written documents, before even the appearance of the first humans, back to the early days of our planet and even to the birth of the Universe as a whole.”

It seems to me that Eames and Sagan were both reacting to these new senses of scale: the vastness of both time and space, a vastness our human minds are ill-equipped to grasp and handle. A year, I understand. A billion? Not so much.

Other films since have grasped this, trying to help us get a “hook” into deep space and time. Some of these cinematic forays focus on narrative: not just what happened, but why, and how we ended up here. My favorite attempt at this is Big History Project, which draws a line from the Big Bang, to the formation of stars, to the explosion of new chemical elements, to the creation of planets, to the development of life, to the dawn of humanity, and beyond. At each of these moments, the themes of energy, complexity, thresholds, and “Goldilocks conditions” are used to show how something like us could have happened, especially in a universe ruled by entropy.

John Boswell’s Melodysheep films, especially his timelapse of the entire universe, are another telling: less focused on teaching and more focused on helping you feel something. The music, visuals, and speech combine to evoke a sense of the width and wonder of everything that’s happened since the Big Bang.

For me, videos like these create a kind of overview effect–a cognitive shift, where I start to realize how small I am, and how incredible (and fragile) existence is. And it all seems to have begun, at least cinematically, and for me, with the Eameses’ wonderful video.

How to Cooperate in Conversation Design

You turn to me, and say, “Any updates on the designs I asked you about?” To which I reply, “That sandwich from Einstein’s was very, very good.”

You’re instantly confused, and for a very good reason. Unless talking about sandwiches is code for something, I was answering a very different question from the one you asked. And this violates something we usually take for granted: when we talk with each other, we’re cooperating. When I lie, or ramble, or reply with something irrelevant, I’ve stopped cooperating.

This idea is known as the cooperative principle. More precisely, it’s the idea that in conversation, we contribute as much to the conversation as is needed, moment-by-moment, to achieve some goal.

Unless you’re a sociopath (and I assume you are not), you do this naturally. In fact, Paul Grice, the man who formulated the principle, meant it as a description of how we normally talk, not as a prescription for how we should talk. Again: we do this naturally. Grice took the natural, and therefore invisible, thing and made it visible by articulating it.

Thinking for Yourself

But if we all do it naturally, why is discussing the principle important for designers? Put simply, it is easier to cooperate when we talk than when we write. Why? Here’s John Trimble, in his excellent book Writing with Style:

Most of the [novice writer’s] difficulties start with the simple fact that the paper he writes on is mute. Because it never talks back to him, and because he’s concentrating so hard on generating ideas, he readily forgets–unlike the veteran–that another human being will eventually be trying to make sense of what he’s saying. The result? His natural tendency as a writer is to think primarily of himself–hence to write primarily for himself. Here, in a nutshell, lies the ultimate reason for most bad writing.

John Trimble, “Writing with Style”

(And for most bad design, I’d add, but I digress.)

When we carry on a normal conversation with other people, those other people are not on mute. We know what is being said and who is hearing it. We can see their faces, and gauge their understanding: are eyebrows raised? Are they nodding their heads? Are they making eye contact? Are they looking away, disengaged? And what do they say in response? Do they ask questions? Are they getting to their goal? All that they say–the content of their speech, the inflection of their voice, their facial expressions and body language–all of these are constantly available, constantly reminding us that we are speaking for others, and constantly telling us whether we’re playing our part well (or not).

When we write, on the other hand, we are, in very real ways, blind and deaf. Writing is a solitary act, and so it is easy to write for ourselves, to think for ourselves. And so–bringing things full circle–we forget to cooperate, to play our part in the conversation.

As writers, we are designers.

Design is often perceived as visual, but a digital product relies on language. Designing a product involves writing the button labels, menu items, and error messages that users interact with, and even figuring out whether text is the right solution at all. When you write the words that appear in a piece of software, you design the experience someone has with it.

Metts and Welfle, “Writing is Designing”

As an interface designer, this is important to remember. As a conversational designer, it is especially important. A conversational interface relies primarily, and sometimes wholly, on the strength of our writing. And the strength of our writing–our capacity to cooperate–relies on how well we understand our audience.

Following Grice’s Maxims

Let us turn back to the cooperative principle: the idea that we should, at each moment of a conversation, contribute as much as is needed to achieve the goal at hand. In normal conversation, there can be many goals: to inform; to comfort; to simply listen, and offer presence; to shoot the breeze and get others laughing. All of these are important to our humanity. But as designers of conversational computer interfaces, we have a more limited set of aims: to inform, to entertain, and/or to accomplish some task. We want to cooperate with the user and help them achieve these ends. What are practical guidelines for doing this?

Luckily for us, Paul Grice gave four maxims. Again, these are descriptive–we naturally do these things. They are:

  • Maxim of Quality (tell the truth)
  • Maxim of Quantity (say only as much as necessary)
  • Maxim of Relevance (be relevant)
  • Maxim of Manner (be clear)

Let’s talk about each in turn.

The maxim of quality. We should only say what we understand to be true. We shouldn’t say what is false. When we lie, we are failing to cooperate.

The maxim of quantity. Napoleon once said “Quantity has a quality of its own.” He was suggesting that the size of his army–massive for the time–overcame any defects in their training and preparation.

But what is true for the battlefield is not true for conversation. We do not want to provide too much information. And neither do we want to provide too little. We want to provide the right amount. We all know long-winded people who say too much, who go on for far too long to say what they mean. But it’s also possible to provide too little information. Imagine me asking someone, here in New York City, “How do I get to Chicago?” They might say, “Head due northwest for XXX miles.” True, so far as it goes. But also much less information than I was hoping for. Like Goldilocks trying to avoid the porridge that is too hot or too cold, we try to provide the amount that is not too much or too little, but “just right.”

The maxim of relevance. Be relevant. Go along with the topic. If I ask you for the time, don’t reply with your opinion of how bad the latest episode of the Bachelor was. It’s irrelevant to what I was asking for.

The maxim of manner. Be clear. Make your writing and speech easy to understand and unambiguous. If I ask you where the closest Starbucks is, do not give me the latitude and longitude. It’s true; it’s concise; and it’s even relevant. But it’s not clear, at all, how I’m supposed to use that information. Ernest Hemingway once wrote that “The indispensable characteristic of a good writer is a style marked by lucidity.”

The maxim of manner is arguably the most important of them all. Something can be relevant, true, and sufficient. But if it is not clear, it cannot be judged as relevant, true, or sufficient.

Let Context Guide

As I said earlier, conversation fills many roles in our lives: to laugh, to comfort, to learn, to love, to persuade, to entertain. But for conversational interfaces, the goals are much more limited. They are usually to inform, to entertain, or to accomplish some task. And when we switch between these contexts and goals–not to mention other contexts like physical location or mobility–we need to consider the impact on the situation, in light of Grice’s maxims.

In conversational design, we deal fundamentally in “turns.” (This is, perhaps, the best parallel to what graphic designers call the “artboard”, or put more simply, a screen.) A turn is made up of the utterance (“what the user says”) and the resulting response (“what the voice assistant says”).

As designers, we have control over the response the voice assistant provides. Amazon stresses a “one-breath test” for the length of these responses: if a response by Alexa or Google Assistant cannot be said in one breath, then it’s perhaps too long. And this is true most of the time. It is true when the aim is to inform or accomplish most tasks. But it is not always true.
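
Amazon doesn’t publish a word count behind the test, but as a rough illustration, you could lint responses against a speaking-time budget. The 25-word threshold below is my own assumption, not Amazon’s specification: comfortable speech runs around 150 words per minute, so 25 words is roughly ten seconds, about as much as one breath comfortably carries:

```python
# A rough lint for the "one-breath test." The 25-word budget is an
# assumption, not Amazon's specification: at ~150 words per minute,
# 25 words is about ten seconds of speech.
ONE_BREATH_WORD_BUDGET = 25

def fits_one_breath(response: str) -> bool:
    return len(response.split()) <= ONE_BREATH_WORD_BUDGET

response = ("You can start a meditation, or hear something new. "
            "Which would you like?")
print(fits_one_breath(response))  # True: short enough for one breath
```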

Consider Kung Fu Panda, a popular Alexa skill made by RAIN. The turns are much, much longer than a single breath, because the aim is to entertain.

Or consider Headspace, another voice app RAIN made, and one I was the lead designer for. It ties into the popular Headspace product, which offers guided meditations to everyone. The menu is exceptionally simple.

In the first two responses, the goal (getting quickly to a meditation) dictates that we be brief and clear: here are your options. We broke the conversation up into tiers, to avoid an excessively long list of options at the beginning. But once we reached the meditation, we played a ten-minute response: a guided meditation. Far from being too long, this was cooperating with the user: providing them a guided meditation, where they expected only to listen.
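
To illustrate the tiering idea (with invented categories–this is not Headspace’s actual menu), the shape is roughly: offer a few top-level choices, then drill down, so no single turn overloads a listener’s memory:

```python
# A sketch of a tiered voice menu: a few options per turn instead of
# one long list. Categories and wording are invented for illustration.
menu = {
    "meditate": ["basics", "stress", "focus"],
    "sleep": ["wind down", "sleep sounds"],
}

def top_level_prompt() -> str:
    return "Would you like to meditate, or get ready for sleep?"

def second_tier_prompt(choice: str) -> str:
    options = ", ".join(menu[choice])
    return f"Okay. You can choose {options}. Which sounds good?"

print(top_level_prompt())
print(second_tier_prompt("sleep"))
```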

A more difficult lesson I learned with Headspace: in the first iteration, we played a short message at the end of the meditation, explaining how to get access to more meditations. I thought this would be helpful. But far from achieving its goal, users hated it. Just when users had achieved some stillness and quiet, we interrupted it, ruining ten minutes of patient silence. Metts and Welfle have said that “when writing is designing… the goal is not to grab attention, but to help your users accomplish their tasks.” We were grabbing their attention again, when our purpose should have been to help them achieve their tasks at every step.

How to Write for Others

Some of the key points:

  • Conversation is about cooperation.
  • We naturally cooperate in normal conversation. But when writing, our audience is on mute. So it’s easy to forget.
  • Grice’s maxims describe how we normally cooperate. We tell the truth; we say enough (not too much or too little); we stay relevant; and above all, we’re clear.
  • Context is important. Conversational interfaces are usually made to inform, to entertain, or to accomplish some task–and sometimes all of these. Keep this in mind at each turn.

How do we do this? I’ll write more about that in another article. The key, of course, is to keep the audience in mind. Never let your writing–whether it be for a blog post, website copy, a chatbot, or a voice interface–go out without having first thought what your audience wants, and how well you’ve provided that.

The Great Man Theory of Design History

I’ve always wondered why one style becomes “the thing” in different eras–whether it’s the 1890s or the 1960s. So it was a welcome surprise when, one page into Owen Jones’ design classic The Grammar of Ornament, I discovered that he tries to answer this very question:

Man’s earliest ambition is to create… As we advance higher, from the decoration of the rude tent or wigwam to the sublime works of a Phidias or Praxiteles, the same feeling is everywhere apparent: the highest ambition is still to create, to stamp on this earth the impress of an individual mind.

From time to time a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track, yet never so closely as to destroy the individual ambition to create; hence the cause of styles, and of the modification of styles.

Owen Jones, The Grammar of Ornament, 32-33

“From time to time a mind stronger than those around will impress itself on a generation.” Hence, he says, the cause of style–and the modifications of past styles.

This basically sounds like the “Great Man Theory of History,” but applied to design history.

If you’re not familiar with this idea, it comes from Thomas Carlyle, and it’s basically: history happens because “a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track.” It ascribes momentous changes in history not to systems and trends, but to people who are forces of nature, and who were far from inevitable. Think Julius Caesar, Napoleon Bonaparte, Adolf Hitler, Winston Churchill, etc.

For design, I think Owen Jones would ascribe changes to major people. He’d probably say that he and Henry Cole and others like them were the “strong minds” producing the Arts & Crafts movement that followed, and that without them that movement would never have happened–or at least, not the way it did. He probably would have said the Glasgow movement followed the “strong minds” of The Four–Charles Mackintosh, James MacNair, and Margaret and Frances MacDonald. Without those four, those trends in design wouldn’t have occurred.

I’m not sure I buy this idea entirely. The Wikipedia page on the “Great Man Theory of History” has several criticisms of the theory, which usually amount to: the individual is always shaped by the social environment, so it’s the larger trends and forces that make the rise of some individual perhaps inevitable: they light the match on a pile of dry wood that’s already there. That said, Dan Carlin–somewhere in his large oeuvre of Hardcore History podcasts–has said that he thinks the answer lies somewhere in between: if Winston Churchill hadn’t been in a position of authority in World War II, would the outcome have changed? If Hitler had been someone with more mental stability, could the war have gone differently? Entirely possible on both counts. But of course, trends and forces are involved, too: producing the currents that gave rise to Nazism and nationalism.

So it’s probably a mix in the history of design, as well. The Arts & Crafts movement may have been an inevitable reaction to the alienation and de-personalization caused by the Industrial Revolution. But it’s possible that Henry Cole and Owen Jones and John Ruskin and William Morris’ specific opinions and preferences were not inevitable. The same goes for other major designers and the trends they worked in. (I’d also add–there are Great Women, too!)

It’s definitely interesting, though. What does it take to create a style that goes “viral,” to use our language today? A style that catches on? And is that style an expression of the spirit of the age–the zeitgeist? Or does a specific style come about because of a forceful mind, “impress[ing] itself on a generation”? Or is it something in between?