Why I Started Microblogging

So, I’ve started to microblog. I was inspired by Alan Jacobs’ recent article about getting back to the open web via micro.blog. One of the big reasons he supports starting a microblog this way is that he owns the content; it’s part of his own domain, his turf. And that’s appealing to me. Additionally, he (and I) can cross-post micro posts to Twitter “without stepping into the minefields of Twitter itself.” And that’s really appealing. And further, I often run across things that I’d like to share but that don’t deserve their own post. Outside of Twitter, how do I share them? A microblog creates a space for that. It becomes, in Alan Jacobs’ words, “a way for me to put everything I do online that is visually small — anything small enough not to require scrolling: quotes, links, images, audio files — in one place, and a place on my own site.”

So that’s why I started. But when I started, I wasn’t sure how I’d use my microblog, or if I’d even keep it up. Eight days in, I’ve had the chance to reflect on how I’m using it: what have I learned about the practice, and about myself?

  • I’ve enjoyed linkblogging. When I read something, I can share the link along with a quote or reflection on how it affected me. It’s a great space to think out loud.
  • It’s become my social media home base. I don’t have Facebook or Instagram, but now I have a place to share photos. I have Twitter, but as mentioned, it lets me sidestep actually being on Twitter while still sharing on the platform. These blog posts, too, appear on my micro.blog.
  • It’s a record of my thinking and reading that I can look back on. And thanks to IFTTT, it’s all backed up to my Day One journaling app, so I can see it side-by-side with my personal stuff.
  • Every day for the past four days, I’ve posted a photo to go along with the August 2020 photo challenge. I’ve had a few people compliment me on what I’ve shared. I’ve been able to do the same for others. And in a smaller community, that just seems to mean more.
  • As Austin Kleon notes, blogging is a great way to discover what you have to say. My microblog has given me a chance to have thoughts, and this longer blog has given me a space to figure out what they mean–to discover what it is I have to say. In other words, my microblog is where I collect the raw materials; my blog is where I assemble them into questions and, perhaps, answers. It’s a place where I figure out what I really think.

I anticipate that my microblog will evolve, and I’ll find new purposes for it, while shedding others. But whatever it becomes, I have to say–I’ve enjoyed it so far. And perhaps that’s the most important thing. It’s a space for short reflection or ideation, coupled with a small community, all on my own domain and turf. And that’s awesome.

Reflections on Conversational Design (2)

Smart Speaker on a desk

Spoken sentences and words are the heart and soul of a voice experience. It’s in these moments, when we “have the mic” (so to speak), that designers can establish personality; express generosity; and create a sonic world for another person to inhabit.

But what goes into crafting, and critiquing, these spoken sentences? Where visual designs have some foundational pillars–typography, layout, and color–what does conversational design have that’s similar? What is the “color” of conversation? The “layout” of VUI design? The “typography” of speech? What, in other words, are the different disciplines that a conversational designer can draw from to craft and critique a conversational experience?

(Asked still another way, what disciplines does a conversational designer need to be fluent in? If I were hiring, what would I be looking for? If I were training a designer, what would I be drawing from?)

Here are some of the areas that I think are important to understand:

  • Linguistics. Written words and spoken words are different. We tend to write in lengthy sentences with a careful structure and a wider vocabulary. We tend to talk in chunks of seven words or so, interrupting ourselves as we go along, and using a simpler, shorter vocabulary. We use more vocatives, we take shortcuts–contractions, ellipsis, other “reduced forms”–and we tend to repeat ourselves, using “bundles” of relatively formulaic phrases. And of course, lest we forget, speech is interactive, which sets it apart entirely from any kind of academic or news-like writing. A conversational designer should know the basics of how the spoken word differs from the written word, and why that’s important–which is fundamentally a linguistic question. And while linguistics is a large and intimidating field, most of the “speech versus writing” questions are tackled in sociolinguistics–a field that also talks about…
  • The Properties of Speech. Not only are spoken syntax and grammar different, but there’s an added element: speech is, well, speech. It’s spoken! And so it contains “paralinguistic” properties: breath, tone and intonation, prosody, volume, pitch, the speed at which we speak. Speech also has to be vocalized with a voice that has a certain timbre, or particular qualities (e.g. a baritone, smooth, female voice or a low, gravelly, male voice). A conversational designer needs to know the basics of speech, and how it’s controlled with whatever technology they’re working with, whether that’s a text-to-speech engine or a voice actor in a studio (see the SSML sketch after this list).
  • Stance and Persona. Technically, this is directly linked to the first two points, but it bears repeating: speech expresses an attitude toward the other person. We might refer to them as “sir” or “dude”; we might say “Please pass the butter” or (with a blunt imperative) “Pass the butter–now.” All of these suggest emotion and feeling toward the other person in the conversation. This also combines to express a personality: bubbly and outgoing, or short and direct, or clear and professional, or casual and friendly. A conversational designer should know how to establish this “art direction” for voice experiences, and what personality they want to project. This is all vital because people will make judgments about your conversational interface’s personality, even when they should know it’s a computer. That’s what people do.
  • Memory. Unlike graphical interfaces, which linger in space to be viewed and reviewed, voice interfaces do not linger. Once something is spoken, it’s gone, and resides only in memory. But memory is limited. So we have to be keenly aware of cognitive load: we can’t give too many options, nor say too much, in any one turn, lest we overwhelm a person’s memory (or patience). Many of conversational design’s “best practices” come down to keeping prompts short, sweet, and simple–working with human memory, instead of against it.
  • Sound and Music. Traffic, a honk, hammers, and birds–suddenly, you’re in the heart of a bustling city. A soft vibration, gongs, and steady throbs of “ohm,” and you’re now in a monastery, ready to meditate. The familiar three notes, and suddenly, you’re prepared to hear broadcasters or comedians from NBC. Sound and music can transport you. Or with a short “ching,” it can inform you (you just got paid!). It can establish a mood, or signal the completion of a task. It can change your emotions, or invoke memories. A conversational designer should know the basics of sound–pitch, rhythm, timbre, and melody–and the varieties of information and emotion it can convey.
  • Platform Limitations and Opportunities. As much as I’d like to design for Jarvis, most voice interfaces are far dumber than that–the burden of weak AI. People can’t speak with computers as naturally as they’d like and expect to be understood. For example, if someone wants a large pizza with sausage, pepperoni, and pineapple but with a gluten-free crust–well, with current limitations, we have to ask for only some of that information at a time. We have to be aware of the limitations, and help the user work with those limitations instead of against them, lest we provoke confusion, frustration, or anger. And we need to be aware of the opportunities each platform and technology affords. These technologies and abilities are always changing; a conversational designer needs to stay abreast of the trends and technologies.
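As promised above, here’s a glimpse of how the “properties of speech” get controlled in practice when the voice is synthetic. This is a minimal sketch using SSML (the W3C Speech Synthesis Markup Language), which most major text-to-speech engines accept in some dialect; exact tag support and accepted values vary by platform, so treat it as illustrative rather than any one engine’s API.

```python
# A minimal, illustrative SSML prompt, held here in a Python string.
# <break>, <prosody>, and <emphasis> are standard SSML tags; the specific
# rate and pitch values a given TTS engine accepts will vary.
ssml_prompt = """
<speak>
  Your table is booked.
  <break time="400ms"/>
  <prosody rate="95%" pitch="+2st">
    <emphasis level="moderate">Anything else</emphasis> I can help with?
  </prosody>
</speak>
""".strip()

print(ssml_prompt)
```

Even in a sketch this small, you can see the designer’s hand: the pause after the confirmation, the slightly slower and higher delivery of the follow-up question. That’s paralanguage, made explicit.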

There are other things we need to consider, of course. The general “UX” process of testing with real people; a consideration of context; how voice interacts with graphical interfaces, such as on a smart speaker with a screen; recommended best practices; the nuances of creative writing and crafting a brilliant persona; the drawbacks of VUI design, and discerning which use cases are appropriate for voice and which are not; something of the history of the field. The list could go on. But the above points cover what I think someone should know, first and foremost, to design the foundational artifact of VUI design: prompts and speech. Armed with these concepts, I’ve found it easier to both describe and prescribe the right prompts–prompts that accomplish whatever goal the user has in mind, in the right way.

Reflections on Conversational Design (1)

What is a voice user interface? And what artifacts allow designers to express their intentions, and share them with others? I’ve been mulling over something Rebecca Evanhoe said in a Botmock AMA from earlier this year about these very questions. She said a conversational designer needs to be able to design these three things:

  1. The things the computer says: the prompts I write as a conversational designer
  2. The flow of the conversation–the “conversational pathways”–arising from the things the computer says (and the expectations provided)
  3. The interaction model behind it all, the “grammar” that anticipates what a user might say, and maps those utterances to intents

I like this way of thinking about it. First, it highlights that the pathways (2) and interaction model (3) derive from the prompts we write (1). Those prompts are the beating heart and soul of conversational design. The syntax, grammar, and diction; the prosody, volume, and emphasis; the personality conveyed; the sounds used; all of this emerges from how we write the prompts.

And second, it made me realize something. I was going to argue that the prompts and pathways are really human-centered, and that we only have to deal with platform limitations once we start on the interaction model. To some extent, that’s true; but of course, not entirely. Yes, we have to start with how people actually talk, but we have to anticipate the platform limitations from the very start.

And actually, the interaction model is where we really have to anticipate what people will actually say. A robust anticipation is vital, because otherwise, the conversation will falter: the agent that was designed (by me!) won’t know what someone meant.
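To make that third artifact concrete, here’s a deliberately tiny sketch of what an interaction model boils down to: a grammar mapping intents to the sample utterances (and slot values) that should trigger them. The shape below is hypothetical; platforms like Alexa or Dialogflow express the same idea in their own schema formats.

```python
# A hypothetical, minimal interaction model. Each intent lists sample
# utterances, with {slot} placeholders for the variable parts. The
# robustness discussed above lives in how well these samples cover
# what people will actually say.
interaction_model = {
    "OrderPizzaIntent": {
        "sample_utterances": [
            "I want a {size} pizza",
            "get me a {size} pizza with {topping}",
            "order a pizza",
        ],
        "slots": {
            "size": ["small", "medium", "large"],
            "topping": ["pepperoni", "sausage", "pineapple"],
        },
    },
    "HelpIntent": {
        "sample_utterances": ["help", "what can I say"],
        "slots": {},
    },
}
```

The prompts (1) shape what people are likely to say next, which is exactly why the pathways (2) and this model (3) have to be designed together.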

Kranzberg’s Laws

Melvin Kranzberg

In October 1985, Melvin Kranzberg (an eminent historian of technology) gave an address outlining six “laws” he’d noticed as he studied technology. As he points out, these aren’t “laws in the sense of commandments but rather a series of truisms” about how technology develops.

Before diving into the laws, though, he makes a few points about technological determinism: the idea that “Technology… has become autonomous and has outrun human control.” Not all scholars, he points out, agree. Lynn White Jr., for example, has said that technology “merely opens a door, it does not compel one to enter.” But as Kranzberg rightly points out in a provocative extension of the metaphor:

Nevertheless, several questions do arise. True, one is not compelled to enter White’s open door, but an open door is an invitation. Besides, who decides which doors to open–and, once one has entered the door, are not one’s future directions guided by the contours of the corridor or chamber into which one has stepped? Equally important, once one has crossed the threshold, can one turn back?

These are really deep questions, and ones to which Kranzberg admits “we historians do not know the answer.” Technological determinism is a complex idea. Concretely, I wonder: was the internet inevitable? What do “the contours of the corridor or chamber” made by social media, smart speakers, and artificial intelligence look like? Can we turn back? Is there any reason we’d want to?

I don’t know. But I resist the idea of technological determinism. I’m not keen on what Mike Sacasas has called “the Borg complex”: the idea that “resistance is futile.” I’ve always been of the opinion that “what we can see, we can change.” Or to put that in the words of Marshall McLuhan, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”

But I digress–back to Kranzberg’s address. His six laws:

  • Law 1: Technology is neither good nor bad; nor is it neutral. Here, like the historian of technology he is, he’s talking about social change. Introduce the internet (or any technology) and it will change things in ways expected and unexpected. It’s the law of unintended consequences: there will be unexpected benefits and drawbacks, and often, perverse results–effects contrary to what was intended. And the effects will differ across cultures and contexts. (He gives a great example of the pesticide DDT in both the United States and India.)
  • Law 2: Invention is the mother of necessity. In other words, once a technology is made, it will necessitate the improvement of a variety of other inventions so it can work most effectively. (Or as Andy Crouch puts it, less forcefully, “What does this artifact make possible? What can people do or imagine, thanks to this artifact, that they could not before?”)
  • Law 3: Technology comes in packages, big and small. He gives the example of radar, which a variety of people claim to have invented, because it’s a complex technology made up of many pieces, all invented at different times and places. In a class I taught on voice technology, I was fond of illustrating the many technologies underlying a virtual assistant, all of which silently and invisibly allow us to play music or turn on a light in a room.
  • Law 4: Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions. Consider the adoption of Google Glass, which has run (and may always run) into privacy concerns. Kranzberg gives the example of communal kitchens, which would reduce housework but conflict with our modern idea of a home.
  • Law 5: All history is relevant, but the history of technology is the most relevant. It’s a bold and arguable claim, but I think he makes a good case for it.
  • Law 6: Technology is a very human activity–and so is the history of technology. “Or to put it another way, man could not have become Homo sapiens, ‘man the thinker,’ had he not at the same time been Homo faber, ‘man the maker.’”

It’s a fantastic address, and clarifying. I hope to write some more about these laws, and some reflections on what they mean for designers and technologists. But at a minimum, they encourage me to think more explicitly about the history of technology, “the most relevant” history of all. Even if that claim is hyperbolic, it’s surely necessary to think about how things got to be the way they are. As Kranzberg says, “the history of technology is the story of man and tool–hand and mind–working together.”

UX as creative tension

Last week, I wrote about Matt Damon Smith’s definition of user experience, which is centered around the journey between where a user is (point A) and where a user wants to be (point B). This journey assumes there’s a gap between the current state and the desired future. All of this reminds me of Peter Senge’s concept of “creative tension”, which he defines as:

The juxtaposition of vision (what we want) and a clear picture of current reality (where we are relative to what we want) generates what we call “creative tension”: a force to bring them together, caused by the natural tendency of tension to seek resolution…

Peter Senge, The Fifth Discipline (p. 132)

Elsewhere, he compares this tension to a rubber band:

Senge’s concept of Creative Tension

I love how Senge links this back to “the natural tendency of tension to seek resolution.” Consider the example of music: great musicians build their songs around musical tension and resolution, the idea that certain chords want to “resolve” down to a home chord. Another example is marketing, which is–for better or for worse–about creating tension, prompting a “this is what your life could be like!” moment where the product or service can fill in the gap.

As user experience designers, at least one of our purposes is helping users resolve the tension between what is and what ought to be. And ideally, that resolution should be as delightful as hearing musical notes “land” in a pleasing place.

the interface makes the experience

For the past several years, whenever the “UX vs UI” debate has come up amongst my designer friends, I’ve held the position that UX is not UI. UI design is one of many skills involved in strong user experience design: a good UX designer needs to be familiar with information architecture, graphic design, requirement writing, copywriting, speaking to programmers, etc., etc. A person who only excels in UI design is a mere pixel pusher.

I still agree with this. But in working through Matt Smith’s Shift Nudge course on UI design, I realized something. This distinction works if I’m describing things from the perspective of my industry, which is focused on UX designers. In other words, this is a debate about roles, skills, and tasks.

But Matt Smith looks at this debate by talking about digital experiences themselves, rather than describing UX designers’ roles and skills; he’s describing the product, and not the architect. And a person experiences a digital product through the user interface. If a digital experience is about taking a user from where they are (point A) to where they’d like to be (point B), this is principally accomplished through the interface itself.

A drawing of Matt Smith’s conception of UX and UI design; adapted from his Shift Nudge curriculum

To put it another way: when describing the industry, a UX designer is much more than a UI designer. But when describing an actual experience, the interface design is the core that dictates the quality of that experience.

This explains, perhaps, why UX is often seen as interface design. Digital products are experienced by way of an interface! Although many other skills are needed to fashion the right experience, in the end, it is the interface that makes the experience.

This applies mainly to a single digital product or interface. The longue durée of UX–the plurality of experiences with a brand across a variety of products, touchpoints, and interactions–is usefully described as CX, though even there, digital interfaces of some kind can make up a majority of the interactions.

“Helvetica”

I watched the Helvetica documentary this evening, all about–you guessed it–Helvetica. My own New York City features prominently, especially because the subway system is littered with Helvetica.

That word–littered–has such a negative connotation, as if Helvetica is a disease. And certainly, some of the people interviewed in the documentary think so. It was fun to see which of them possessed a dislike (or hatred) of the font, like Erik Spiekermann. It was also interesting to see who really liked it, and felt they could do amazing things with just three or four fonts, like Massimo Vignelli. Some of the people interviewed feel that type is a crystal goblet: you shouldn’t see the goblet, but the content that’s in it. And some want the type to express something.

My main question going in was: is Helvetica a good font? I left with the impression that… it is. It’s spoiled by overuse and familiarity, but on its own merits, it’s legible and clear. Lars Müller called Helvetica “the perfume of the city,” and that appears to be true–not just of New York City, but of everywhere. The vignettes and montages in this documentary were really good at conveying just how ubiquitous this typeface is.

Another question, which assumes that Helvetica is actually alright: is it possible to improve it? (Some of the people being interviewed joked that Helvetica was the End of History as far as type was concerned.) The question of improvement could be taken at least a few ways. First, can it be improved in a rationalist sense? Can we find a more geometrically pleasing, scientifically “good” ecology of type forms that combine to create–well, something better than what we’ve got? Second, someone more steeped in romanticism or expressivism would probably laugh, and say–absolutely. It represents capitalism, or bureaucracy, or corporations, or the Vietnam war. It’s got to change, as all things must, to better capture the zeitgeist and make way for a new generation, who have new values beyond just “ideal proportions” and “rationalistic geometry.”

Anyway, it was a good documentary. A bit dated–the MySpace part made me wistful and nostalgic for the days when profile pages could have so much personality–but still good. This 2017 AIGA profile was a good 10-year-anniversary piece that I enjoyed, and it suggests the documentary still holds up.

Cosmic Calendars and the Powers of Ten

Recently, while visiting a showcase at a Herman Miller exhibit, I learned about Charles and Ray Eames–a power couple if there ever was one. Among their many contributions is a short movie they made together in 1968 (and released in 1977), a movie I’d seen but without realizing who was behind it: the famous “Powers of Ten” video. It opens with a picnic; the narrator (voiced by the famed physicist Philip Morrison) says this:

We begin with a scene just one meter wide, viewed from just one meter away. Now every ten seconds we will look from ten times farther away, and our field of view will be ten times wider.

Powers of Ten

From there, it zooms out on a fast-paced journey until the screen encompasses superclusters and galaxies-upon-galaxies. And once there, it zooms back in to “our next goal, a proton in the nucleus of a carbon atom beneath the skin on the hand of a sleeping man in the picnic.”

On these scales, you can observe wonderful patterns. For example, in “Powers of Ten,” the narrator pauses to “notice the alternation between great activity and relative inactivity,” something he calls a rhythm. I love that: that the entire universe, as we zoom inward and outward, contains a rhythm–a “strong, regular, repeated pattern,” suggesting that even the universe has a pulse.

“Powers of Ten” seems related to Carl Sagan’s “Cosmos,” which aired just three years later, in 1980. In one episode, Sagan describes the Cosmic Calendar–a pedagogical exercise where he “compresses the local history of the universe into a single year,” a unit of time that most of us can grasp and hold onto. He goes on to highlight that “if the universe began on January 1st, it was not until May that the Milky Way formed,” and that our sun and earth formed sometime in September. Once he arrives at human history, he changes the scale “from months to minutes… each minute 30,000 years long.” It’s wonderful. (A recently updated version, incorporating newer science and CGI, is narrated by Neil deGrasse Tyson.)
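The arithmetic behind that scale is easy to check for yourself. Here’s a quick back-of-the-envelope sketch, assuming the roughly 15-billion-year age of the universe Sagan was working with around 1980 (today’s estimate is closer to 13.8 billion):

```python
# Back-of-the-envelope check on Sagan's Cosmic Calendar scale,
# assuming his circa-1980 figure of ~15 billion years.
AGE_OF_UNIVERSE_YEARS = 15e9
DAYS_PER_YEAR = 365.25
MINUTES_PER_YEAR = DAYS_PER_YEAR * 24 * 60  # about 525,960

print(f"one calendar day    = {AGE_OF_UNIVERSE_YEARS / DAYS_PER_YEAR:,.0f} years")
print(f"one calendar minute = {AGE_OF_UNIVERSE_YEARS / MINUTES_PER_YEAR:,.0f} years")
# -> one calendar day is roughly 41 million years;
# -> one calendar minute is roughly 28,500 years, i.e. Sagan's "30,000 years long"
```

On that scale, all of recorded human history fits into the last ten seconds or so of December 31st.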

As noted, both of these videos came out close to each other. I suspect they stem from a growing realization of the chronometric revolution–a term that David Christian coined to describe the development, in the middle of the twentieth century, of “new chronometric techniques, new ways of dating past events.” What did these new methods mean? “For the first time, these techniques allowed the construction of reliable chronologies extending back before the first written documents, before even the appearance of the first humans, back to the early days of our planet and even to the birth of the Universe as a whole.”

It seems the Eameses and Sagan were both reacting to this new sense of scale: the vastness of both time and space, a vastness our human minds are ill-equipped to grasp and handle. A year, I understand. A billion? Not so much.

Other films since have grasped this, trying to help us get a “hook” into deep space and time. Some of these cinematic forays focus on narrative: not just what happened, but why, and how we ended up here. My favorite attempt at this is the Big History Project, which draws a line from the Big Bang, to the formation of stars, to the explosion of new chemical elements, to the creation of planets, to the development of life, to the dawn of humanity, and beyond. At each of these moments, the themes of energy, complexity, thresholds, and “Goldilocks conditions” are used to show how something like us could have happened, especially in a universe ruled by entropy.

John Boswell’s Melodysheep films, especially his timelapse of the entire universe, are another telling: less focused on teaching and more focused on helping you feel something. The music, visuals, and speech combine to evoke a sense of the width and wonder of everything that’s happened since the Big Bang.

For me, videos like these create a kind of overview effect–a cognitive shift, where I start to realize how small I am–and how incredible (and fragile) existence is. And it all seems to have begun, at least cinematically, and for me, with the Eameses’ wonderful video.

The Great Man Theory of Design History

I’ve always wondered why one style becomes “the thing” in different eras–whether it’s the 1890s or the 1960s. So it was a welcome surprise that, one page into Owen Jones’ design classic The Grammar of Ornament, I discovered he tries to answer this very thing:

Man’s earliest ambition is to create… As we advance higher, from the decoration of the rude tent or wigwam to the sublime works of a Phidias or Praxiteles, the same feeling is everywhere apparent: the highest ambition is still to create, to stamp on this earth the impress of an individual mind.

From time to time a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track, yet never so closely as to destroy the individual ambition to create; hence the cause of styles, and of the modification of styles.

Owen Jones, The Grammar of Ornament, 32-33

“From time to time a mind stronger than those around will impress itself on a generation.” Hence, he says, the cause of style–and the modifications of past styles.

This basically sounds like the “Great Man Theory of History,” but applied to design history.

If you’re not familiar with this idea, it comes from Thomas Carlyle, and it’s basically: history happens because “a mind stronger than those around will impress itself on a generation, and carry with it a host of others of less power following in the same track.” It ascribes momentous changes in history not to systems and trends, but to people who are forces of nature, and who were far from inevitable. Think Julius Caesar, Napoleon Bonaparte, Adolf Hitler, Winston Churchill, etc.

For design, I think Owen Jones would ascribe changes to major people. He’d probably say that he and Henry Cole and others like them were the “strong minds” producing the Arts & Crafts movement that followed, and that without them that movement would never have happened–or at least, not the way it did. He probably would have said the Glasgow movement followed the “strong minds” of The Four–Charles Mackintosh, James MacNair, and Margaret and Frances MacDonald. Without those four, those trends in design wouldn’t have occurred.

I’m not sure I buy this idea entirely. The Wikipedia page on the “Great Man Theory of History” has several criticisms of the theory, which usually amount to: the individual is always shaped by the social environment, so it’s the larger trends and forces that make the rise of some individual perhaps inevitable: they light the match on a pile of dry wood that’s already there. That said, Dan Carlin–somewhere in his large oeuvre of Hardcore History podcasts–has said that he thinks the answer lies somewhere in between: if Winston Churchill hadn’t been in a position of authority in World War 2, would the outcome have changed? If Hitler had been someone with more mental stability, could the war have gone differently? Entirely possible on both accounts. But of course, trends and forces are involved, too: producing the currents that gave rise to Nazism and nationalism.

So it’s probably a mix in the history of design, as well. The Arts & Crafts movement may have been an inevitable response to the alienation and de-personalization caused by the Industrial Revolution. But it’s possible that Henry Cole and Owen Jones and John Ruskin and William Morris’ specific opinions and preferences were not inevitable. The same goes for other major designers and the trends they worked in. (I’d also add–there are Great Women too!)

It’s definitely interesting, though. What does it take to create a style that goes “viral,” to use our language today? A style that catches on? And is that style an expression of the spirit of the age–the zeitgeist? Or does a specific style come about because of a forceful mind, “impress[ing] itself on a generation”? Or is it something in between?

New Materials and Revolutions in Design

Going through Dardi and Pasca’s Design History Handbook, I’m realizing that the materials available to designers have been a major influence in the history of design. Perhaps this is obvious, but it’s also obviously profound. I quote:

Designers, accustomed for millennia to operating with natural materials, were now faced not only with the enormous availability of iron, cast iron, and glass, but with new procedures. Vulcanization allowed for the use of gutta-percha or rubber to simulate wood, stone, and metals, inlays included; electrotyping made it possible to reproduce objects by electrochemically depositing a metal into a mold; granite and marble could now be easily cut. The new responsibility of designers was to give shape and meaning to the artificial processes [and materials] that the Industrial Revolution was developing.

Design History Handbook, p. 17

Maybe what I didn’t realize was the full extent of materials suddenly available to people once the Industrial Revolution came around: an explosion of new materials and methods to make them.

Here’s a pile of vulcanized rubber–again, a never-before-seen material–witnessed at the Great Exhibition in 1851:

Charles Goodyear, display of products in Indian vulcanized rubber (1851-1852). From the Library of Congress.

Can you imagine? Seeing a material like this for the first time?

Andy Crouch, in his book Culture Making, talks about questions to ask of any “cultural artifact” (meaning anything from an iPhone to an omelet). One of those questions is: what new horizons of possibility does this artifact open up? What does it make possible?

So: what did vulcanization, and vulcanized rubber, make possible? It made possible “rubber hoses, shoe soles, tires, bowling balls, bouncing balls, hockey pucks, toys, erasers, and instrument mouthpieces.” Tires, of course, are the biggest of these (which is why Goodyear Tires is named after Charles Goodyear, the inventor of vulcanization). But note that “most rubber products in the world are vulcanized, whether the rubber is natural or synthetic.”

In other words, one artifact–from Charles Goodyear–made it possible for businessmen and designers to imagine new worlds of possibility: a world with tires, toys, and instrument mouthpieces. And of course, a world with tires was a world with cars; and a world with cars, a world with highways and commutes. (Thanks, Charles. Could’ve done without the commutes.)

What a revolution that just one of those new materials instigated. Each of these new materials became the blueprint for new products, technologies, and works of art. And those products, technologies, and works of art have remade the world we live in today.

My friend Will Hall explained to me that in the Bauhaus school of design (in the early 1900s), they had two teachers in every classroom: a master of form and a master of works. The former was a visual artist; the latter, an expert in the production of the new materials and the machines involved. Together, the students learned to create and craft from all sorts of materials: textiles, wood, glass, color, clay, stone, and much more besides. They were tutored in the realms of possibility that each material opened, and how to act in those realms. This, combined with a distinctive aesthetic approach, is part of what made the Bauhaus school so successful.

So: I’m only just learning about the larger history of design, but it seems apparent that design is shaped by the materials available to the designer. New materials? New opportunities. So what materials do I have to work with?

It’s exciting: as a UX designer in the twenty-first century, I get to work with digital materials of all kinds. Pixels, sound waves, moving pictures, photographs; a flexible canvas of colors and layout and typography. These materials can be combined to create something wonderful: a digital interface, and ideally, a wonderful subjective experience within that interface.

I do a lot with conversational design, and I’m starting to wake up to the materials I really have to work with. New synthesized voices from places like VocaliD and Lyrebird; new abilities to alter not only the gender and tone of a voice, but its paralanguage–the prosody, the breath, the intonation, and feeling of the voice; libraries of Foley sounds and music loops; the ability to create music, fashioned from the raw materials of rhythm, pitch, timbre, amplitude, and harmony–all available to me through different instruments, and even synthesized music. All of these can be combined to create immersive soundscapes and aural experiences on smart speakers and phones.