Categories: Notes

UX as creative tension

Last week, I wrote about Matt Damon Smith’s definition of user experience, which is centered around the journey between where a user is (point A) and where a user wants to be (point B). This journey assumes there’s a gap between the current state and the desired future. All of this reminds me of Peter Senge’s concept of “creative tension”, which he defines as:

The juxtaposition of vision (what we want) and a clear picture of current reality (where we are relative to what we want) generates what we call “creative tension”: a force to bring them together, caused by the natural tendency of tension to seek resolution…

Peter Senge, The Fifth Discipline (p. 132)

Elsewhere, he compares this tension to a rubber band:

Senge’s concept of Creative Tension

I love how Senge links this back to “the natural tendency of tension to seek resolution.” Consider the example of music: great musicians build their songs around musical tension and resolution, the idea that certain chords want to “resolve” down to a home chord. Another example is marketing, which is–for better or for worse–about creating tension, prompting a “this is what your life could be like!” moment where the product or service can fill in the gap.

As user experience designers, at least one of our purposes is helping users resolve the tension between what is and what ought to be. And ideally, that resolution should feel as delightful as hearing musical notes land in a pleasing place.

Categories: Notes

the interface makes the experience

For the past several years, whenever the “UX vs UI” debate has come up amongst my designer friends, I’ve held the position that UX is not UI. UI design is one of many skills involved in strong user experience design: a good UX designer needs to be familiar with information architecture, graphic design, requirements writing, copywriting, speaking to programmers, etc., etc. A person who only excels in UI design is a mere pixel pusher.

I still agree with this. But in working through Matt Smith’s Shift Nudge course on UI design, I realized something. This distinction works if I’m describing things from the perspective of my industry, which is focused on UX designers. In other words, this is a debate about roles, skills, and tasks.

But Matt Smith looks at this debate by talking about digital experiences themselves, rather than describing UX designers’ roles and skills; he’s describing the product, and not the architect. And a person experiences a digital product through the user interface. If a digital experience is about taking a user from where they are (point A) to where they’d like to be (point B), this is principally accomplished through the interface itself.

A drawing of Matt Smith’s conception of UX and UI design; adapted from his Shift Nudge curriculum

To put it another way: when describing the industry, a UX designer is much more than a UI designer. But when describing an actual experience, the interface design is the core that dictates the quality of that experience.

This explains, perhaps, why UX is often seen as interface design. Digital products are experienced by way of an interface! Although many other skills are needed to fashion the right experience, in the end, it is the interface that makes the experience.

This applies mainly to a single digital product or interface. The longue durée of UX–the plurality of experiences with a brand across a variety of products, touchpoints, and interactions–is usefully described as CX, though even there, digital interfaces of some kind can make up the majority of the interactions.

Categories: Notes

“Helvetica”

I watched the Helvetica documentary this evening, all about–you guessed it–Helvetica. My own New York City features prominently, especially because the subway system is littered with Helvetica.

That word–littered–has such a negative connotation, as if Helvetica is a disease. And certainly, some of the people interviewed in the documentary think so. It was fun to see which of them possessed a dislike (or hatred) of the font, e.g. Erik Spiekermann. It was also interesting to see who really liked it, and felt they could do amazing things with just three or four fonts, e.g. Massimo Vignelli. Some of the people interviewed feel that type is a crystal goblet: you shouldn’t see the goblet, but the content that’s in it. And some want the type itself to express something.

My main question going in was: is Helvetica a good font? I left with the impression that… it is. It’s spoiled by overuse and familiarity, but on its own merits, it’s legible and clear. Lars Müller called Helvetica “the perfume of the city,” and that appears to be true–not just of New York City, but of cities everywhere. The vignettes and montages in this documentary were really good at conveying just how ubiquitous this typeface really is.

Another question, which assumes that Helvetica is actually alright: is it possible to improve it? (Some of the people interviewed joked that Helvetica was the End of History as far as type was concerned.) The question of improvement could be taken at least a few ways. First, can it be improved in a rationalist sense? Can we find a more geometrically pleasing, scientifically “good” ecology of type forms that combine to create–well, something better than what we’ve got? Second, someone more steeped in romanticism or expressivism would probably laugh, and say–absolutely. It represents capitalism, or bureaucracy, or corporations, or the Vietnam war. It’s got to change, as all things must, to better capture the zeitgeist and make way for a new generation, who have new values beyond just “ideal proportions” and “rationalistic geometry.”

Anyway, it was a good documentary. A bit dated–the MySpace part made me wistful and nostalgic for the days when profile pages could have so much personality–but still good. This 2017 AIGA profile was a good 10-year-anniversary retrospective, and suggests the documentary still holds up.

Categories: Notes

Cosmic Calendars and the Powers of Ten

Recently, while visiting a showcase at a Herman Miller exhibit, I learned about Charles and Ray Eames–a power couple if there ever was one. Among their many contributions is a short film they made together in 1968 (with the final version released in 1977), one I’d seen before without realizing who was behind it: the famous “Powers of Ten” video. It opens with a picnic; the narrator (voiced by the famed physicist Philip Morrison) says this:

We begin with a scene just one meter wide, viewed from just one meter away. Now every ten seconds we will look from ten times farther away, and our field of view will be ten times wider.

Powers of Ten

From there, it zooms out on a fast-paced journey until the screen encompasses superclusters and galaxies upon galaxies. And once there, it zooms back in to “our next goal, a proton in the nucleus of a carbon atom beneath the skin on the hand of a sleeping man in the picnic.”

On these scales, you can observe wonderful patterns. For example, in “Powers of Ten,” the narrator pauses to “notice the alternation between great activity and relative inactivity,” something he calls a rhythm. I love that: that the entire universe, as we zoom inward and outward, contains a rhythm–a “strong, regular, repeated pattern,” suggesting that even the universe has a pulse.

Powers of Ten seems related to Carl Sagan’s “Cosmos,” which aired just three years later in 1980. In one episode, he describes the Cosmic Calendar–a pedagogical exercise where he “compresses the local history of the universe into a single year,” a unit of time that most of us can grasp and hold onto. He goes on to highlight that “if the universe began on January 1st, it was not until May that the Milky Way formed,” and that our sun and earth formed sometime in September. Once he arrives at human history, he changes the scale “from months to minutes… each minute 30,000 years long.” It’s wonderful. (A recently updated version, incorporating newer science and CGI, is narrated by Neil deGrasse Tyson.)

As noted, both of these videos came out close to each other. I suspect they stem from a growing realization of the chronometric revolution–a term that David Christian coined to describe the development, in the middle of the twentieth century, of “new chronometric techniques, new ways of dating past events.” What did these new methods mean? “For the first time, these techniques allowed the construction of reliable chronologies extending back before the first written documents, before even the appearance of the first humans, back to the early days of our planet and even to the birth of the Universe as a whole.”

It seems that Eames and Sagan were both reacting to this new sense of scale: the vastness of both time and space, a vastness our human minds are ill-equipped to grasp and handle. A year, I understand. A billion? Not so much.

Other films since have taken this on, trying to help us get a “hook” into deep space and time. Some of these cinematic forays focus on narrative: not just what happened, but why, and how we ended up here. My favorite attempt is the Big History Project, which draws a line from the Big Bang, to the formation of stars, to the explosion of new chemical elements, to the creation of planets, to the development of life, to the dawn of humanity, and beyond. At each of these moments, the themes of energy, complexity, thresholds, and “Goldilocks conditions” are used to show how something like us could have happened, especially in a universe ruled by entropy.

John Boswell’s Melodysheep films–especially his timelapse of the entire universe–are another telling: less focused on teaching and more on helping you feel something. The music, visuals, and speech combine to evoke a sense of the width and wonder of everything that’s happened since the Big Bang.

For me, videos like these create a kind of overview effect–a cognitive shift, where I start to realize how small I am, and how incredible (and fragile) existence is. And it all seems to have begun, at least cinematically and for me, with the Eameses’ wonderful video.

Categories: Essays

How to Cooperate in Conversation Design

You turn to me, and say, “Any updates on the designs I asked you about?” To which I reply, “That sandwich from Einstein’s was very, very good.”

You’re instantly confused, and for a very good reason. Unless talking about sandwiches is code for something, I was answering a very different question from the one you asked. And this violates something we usually take for granted: when we talk with each other, we’re cooperating. When I lie, or ramble, or reply with something irrelevant, I’ve stopped cooperating.

This idea is known as the cooperative principle. More precisely, it’s the idea that in conversation, we each contribute as much as is needed, moment by moment, to achieve some shared goal.

Unless you’re a sociopath (and I assume you are not), you do this naturally. In fact, Paul Grice, the man who formulated the principle, meant it as a description of how we normally talk, not a prescription for how we should talk. Again, we do this naturally. Grice took the natural, and therefore invisible, thing and made it visible by articulating it.

Thinking for Yourself

But if we all do it naturally, why is discussing the principle important for designers? Put simply, it is easier to cooperate when we talk than when we write. Why? Here’s John Trimble, in his excellent book, “Writing with Style”:

Most of the [novice writer’s] difficulties start with the simple fact that the paper he writes on is mute. Because it never talks back to him, and because he’s concentrating so hard on generating ideas, he readily forgets–unlike the veteran–that another human being will eventually be trying to make sense of what he’s saying. The result? His natural tendency as a writer is to think primarily of himself–hence to write primarily for himself. Here, in a nutshell, lies the ultimate reason for most bad writing.

John Trimble, “Writing with Style”

(And for most bad design, I’d add, but I digress.)

When we carry on a normal conversation with other people, those other people are not on mute. We know what is being said and who is hearing it. We can see their faces, and gauge their understanding: are eyebrows raised? Are they nodding their heads? Are they making eye contact? Are they looking away, disengaged? And what do they say in response? Do they ask questions? Are they getting to their goal? All that they say–the content of their speech, the inflection of their voice, their facial expressions and body language–all of these are constantly available, constantly reminding us that we are speaking for others, and constantly telling us whether we’re playing our part well (or not).

When we write, on the other hand, we are, in very real ways, blind and deaf. Writing is a solitary act, and so it is easy to write for ourselves, to think for ourselves. And so–bringing things full circle–we forget to cooperate, to play our part in the conversation.

As writers, we are designers.

Design is often perceived as visual, but a digital product relies on language. Designing a product involves writing the button labels, menu items, and error messages that users interact with, and even figuring out whether text is the right solution at all. When you write the words that appear in a piece of software, you design the experience someone has with it.

Metts and Welfle, “Writing is Designing”

As an interface designer, this is important to remember. As a conversational designer, it is especially important. A conversational interface relies primarily, and sometimes wholly, on the strength of our writing. And the strength of our writing–our capacity to cooperate–relies on how well we understand our audience.

Following Grice’s Maxims

Let us turn back to the cooperative principle: the idea that we should, at each moment of a conversation, contribute what is needed to achieve the goal at hand. In normal conversation, there can be many goals: to inform; to comfort; just to listen and offer presence; to shoot the breeze and get others laughing. All of these are important to our humanity. But as designers of conversational computer interfaces, we have a more limited set of aims: to inform, to entertain, and/or to accomplish some task. We want to cooperate with the user and help them achieve these ends. What are practical guidelines for doing this?

Luckily for us, Paul Grice gave four maxims. Again, these are descriptive–we naturally do these things. They are:

  • Maxim of Quality (Tell the truth)
  • Maxim of Quantity (Say only as much as necessary)
  • Maxim of Relevance (Be relevant)
  • Maxim of Manner (Be clear)

Let’s talk about each in turn.

The maxim of quality. We should only say what we understand to be true. We shouldn’t say what is false. When we lie, we are failing to cooperate.

The maxim of quantity. Napoleon reputedly said that “quantity has a quality of its own.” He was suggesting that the size of his army–massive for the time–overcame any defects in its training and preparation.

But what is true for the battlefield is not true for conversation. We do not want to provide too much information, and neither do we want to provide too little. We want to provide the right amount. We all know long-winded people who go on for far too long to say what they mean. But it’s also possible to provide too little information. Imagine me asking someone, here in New York City, “How do I get to Chicago?” They might say, “Head due northwest for XXX miles.” True, so far as it goes. But also much less information than I was hoping for. Like Goldilocks trying to avoid the porridge that is too hot or too cold, we try to provide the amount that is not too much or too little, but “just right.”

The maxim of relevance. Be relevant. Go along with the topic. If I ask you for the time, don’t reply with your opinion of how bad the latest episode of the Bachelor was. It’s irrelevant to what I was asking for.

The maxim of manner. Be clear. Make your writing and speech easy to understand and unambiguous. If I ask you where the closest Starbucks is, do not give me the latitude and longitude. It’s true; it’s concise; and it’s even relevant. But it’s not clear, at all, how I’m supposed to use that information. Ernest Hemingway once wrote that “The indispensable characteristic of a good writer is a style marked by lucidity.”

The maxim of manner is arguably the most important of them all. Something can be relevant, true, and sufficient. But if it is not clear, it cannot be judged as relevant, true, or sufficient.

Let Context Guide

As I said earlier, conversation fills many roles in our lives: to laugh, to comfort, to learn, to love, to persuade, to entertain. But for conversational interfaces, the goals are much more limited: usually to inform, to entertain, or to accomplish some task. And when we switch between these goals–not to mention other contexts like physical location or mobility–we need to reconsider what Grice’s maxims ask of us.

In conversational design, we deal fundamentally in “turns.” (This is, perhaps, the best parallel to what graphic designers call the “artboard”, or put more simply, a screen.) A turn is made up of the utterance (“what the user says”) and the resulting response (“what the voice assistant says”).

As designers, we have control over the response the voice assistant provides. Amazon stresses a “one-breath test” for the length of these responses: if a single response by Alexa or Google Assistant cannot be said in less than one breath, then it’s probably too long. And this is true most of the time. It is true when the aim is to inform or accomplish most tasks. But it is not always true.

Consider Kung Fu Panda, a popular Alexa skill made by RAIN. The turns are much, much longer than a single breath, because the aim is to entertain.

Or consider Headspace, another voice app RAIN made. I was the lead designer for this app, which ties into the popular Headspace service and its guided meditations. The menu is exceptionally simple.

In the first two responses, the goal (getting quickly to a meditation) dictates that we be brief and clear: here are your options. We broke the conversation into tiers, to avoid an excessively long list of options at the beginning–something like the sketch below. But once we reached the meditation, we played a ten-minute response: the guided meditation itself. Far from being too long, this was cooperating with the user: providing the guided meditation they came for, at the moment they expected only to listen.

A more difficult lesson I learned with Headspace: in the first iteration, we played a short message at the end of the meditation, explaining how to get access to more meditations. I thought this would be helpful. But it backfired: users hated it. Just when they had achieved some stillness and quiet, we interrupted it, ruining ten minutes of patient silence. Metts and Welfle have said that “when writing is designing… the goal is not to grab attention, but to help your users accomplish their tasks.” We were grabbing users’ attention again, when our purpose should have been to help them accomplish their task at every step.

How to Write for Others

Some of the key points:

  • Conversation is about cooperation.
  • We naturally cooperate in normal conversation. But when writing, our audience is on mute. So it’s easy to forget.
  • Grice’s maxims describe how we normally cooperate. We tell the truth; we say enough (not too much or too little); we stay relevant; and above all, we’re clear.
  • Context is important. Conversational interfaces are usually made to inform, to entertain, or to accomplish some task–and sometimes all of these. Keep this in mind at each turn.

How do we do this? I’ll write more about that in another article. The key, of course, is to keep the audience in mind. Never let your writing–whether for a blog post, website copy, a chatbot, or a voice interface–go out without first thinking about what your audience wants, and how well you’ve provided it.

Categories: Notes

The Great Man Theory of Design History

I’ve always wondered why one style becomes “the thing” in different eras–whether it’s the 1890s or the 1960s. So it was a welcome surprise that, one page into Owen Jones’ design classic The Grammar of Ornament, I discovered he tries to answer this very thing:

Man’s earliest ambition is to create… As we advance higher, from the decoration of the rude tent or wigwam to the sublime works of a Phidias or Praxiteles, the same feeling is everywhere apparent: the highest ambition is still to create, to stamp on this earth the impress of an individual mind.

From time to time a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track, yet never so closely as to destroy the individual ambition to create; hence the cause of styles, and of the modification of styles.

Owen Jones, The Grammar of Ornament (pp. 32-33)

“From time to time a mind stronger than those around will impress itself on a generation.” Hence, he says, the cause of style–and the modifications of past styles.

This basically sounds like the “Great Man Theory of History,” but applied to design history.

If you’re not familiar with this idea, it comes from Thomas Carlyle, and it’s basically the claim that history happens because “a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track.” It ascribes momentous changes in history not to systems and trends, but to people who are forces of nature, and whose rise was far from inevitable. Think Julius Caesar, Napoleon Bonaparte, Adolf Hitler, Winston Churchill, etc.

For design, I think Owen Jones would ascribe changes to major people. He’d probably say that he and Henry Cole and others like them were the “strong minds,” producing the Arts & Crafts movement that followed, and that without them that movement would never have happened–or at least, not happened the way it did. He probably would have said the Glasgow movement followed the “strong minds” of The Four–Charles Rennie Mackintosh, James MacNair, and Margaret and Frances MacDonald. Without those four, those trends in design wouldn’t have occurred.

I’m not sure I buy this idea entirely. The Wikipedia page on the “Great Man Theory of History” lists several criticisms of the theory, which usually amount to: the individual is always shaped by the social environment, so it’s the larger trends and forces that make the rise of some individual perhaps inevitable–they light the match on a pile of dry wood that’s already there. That said, Dan Carlin–somewhere in his large oeuvre of Hardcore History podcasts–has said that he thinks the answer lies somewhere in between: if Winston Churchill hadn’t been in a position of authority in World War II, would the outcome have changed? If Hitler had been someone with more mental stability, could the war have changed? Entirely possible on both counts. But of course, trends and forces are involved, too, producing the currents that gave rise to Nazism and nationalism.

So it’s probably a mix in the history of design, as well. The Arts & Crafts movement may have been an inevitable reaction to the alienation and de-personalization caused by the Industrial Revolution. But it’s possible that the specific opinions and preferences of Henry Cole, Owen Jones, John Ruskin, and William Morris were not inevitable. The same goes for other major designers and the trends they worked in. (I’d also add–there are Great Women too!)

It’s definitely interesting, though. What does it take to create a style that goes “viral,” to use our language today? A style that catches on? And is that style an expression of the spirit of the age–the zeitgeist? Or does a specific style come about because of a forceful mind, “impress[ing] itself on a generation”? Or is it something in between?

Categories: Notes

New Materials and Revolutions in Design

Going through Dardi and Pasca’s Design History Handbook, I’m realizing that the materials available to designers have been a major influence in the history of design. Perhaps this is obvious, but it’s also obviously profound. I quote:

Designers, accustomed for millennia to operating with natural materials, were now faced not only with the enormous availability of iron, cast iron, and glass, but with new procedures. Vulcanization allowed for the use of gutta-percha or rubber to simulate wood, stone, and metals, inlays included; electrotyping made it possible to reproduce objects by electrochemically depositing a metal into a mold; granite and marble could now be easily cut. The new responsibility of designers was to give shape and meaning to the artificial processes [and materials] that the Industrial Revolution was developing.

Design History Handbook, p. 17

Maybe what I didn’t realize was the full extent of what suddenly became available to people once the Industrial Revolution came around: an explosion of new materials, and new methods for making them.

Here’s a pile of vulcanized rubber–again, a never-before-seen material–as displayed at the Great Exhibition in 1851:

Charles Goodyear, display of products in Indian vulcanized rubber (1851-1852). From the Library of Congress.

Can you imagine? Seeing a material like this for the first time?

Andy Crouch, in his book Culture Making, talks about questions to ask of any “cultural artifact” (meaning anything from an iPhone to an omelet). One of those questions is: what new horizons of possibility does this artifact open up? What does it make possible?

So: what did vulcanization, and vulcanized rubber, make possible? It made possible “rubber hoses, shoe soles, tires, bowling balls, bouncing balls, hockey pucks, toys, erasers, and instrument mouthpieces.” Tires, of course, are the biggest of these (which is why Goodyear Tires is named after Charles Goodyear, the inventor of vulcanization). And note that “most rubber products in the world are vulcanized, whether the rubber is natural or synthetic.”

In other words, one artifact from Charles Goodyear let businessmen and designers suddenly imagine new worlds of possibility: a world with tires, toys, and instrument mouthpieces. And of course, a world with tires was a world with cars; and a world with cars, a world with highways and commutes. (Thanks, Charles. Could’ve done without the commutes.)

What a revolution just one of those new materials instigated. Each of these new materials became the blueprint for new products, technologies, and works of art. And those products, technologies, and works of art have remade the world we live in today.

My friend Will Hall explained to me that in the Bauhaus school of design (in the early 1900s), every classroom had two teachers: a master of form and a master of works. The former was a visual artist; the latter, an expert in the production of the new materials and the machines involved. Together, the students learned to create and craft from all sorts of materials: textiles, wood, glass, color, clay, stone, and much more besides. They were tutored in the realm of possibility that each material opened, and how to act in that realm. This, combined with a distinctive aesthetic approach, is part of what made the Bauhaus school so successful.

So: I’m only just learning about the larger history of design, but it seems apparent that design is shaped by the materials available to us. New materials? New opportunities. What materials do I have to work with?

It’s exciting: as a UX designer in the twenty-first century, I get to work with digital materials of all kinds. Pixels, sound waves, moving pictures, photographs; a flexible canvas of colors and layout and typography. These materials can be combined to create something wonderful: a digital interface, and ideally, a wonderful subjective experience within that interface.

I do a lot with conversational design, and I’m starting to wake up to the materials I really have to work with. New synthesized voices from places like Vocal ID and Lyrebird; new abilities to alter not only the gender and tone of the voice, but the paralanguage–the prosody, the breath, the intonation, and feeling of the voice; libraries of foley sounds and music loops; the ability to create music, fashioned from the raw materials of rhythm, pitch, timbre, amplitude, and harmony–all available to me from the use of different instruments, and even synthesized music. All of these can be combined to create immersive soundscapes and aural experiences on smart speakers and phones.