Recent Talk: Audio UX

I recently gave a talk at the Milwaukee UX meetup about the user experience of audio. It was meant to be short (less than 10 minutes).

Here’s the deck.

Some notes and summaries per slide:

  1. The user experience of audio is deeper than we think.
  2. I’ve had about five years of experience working with audio through my company, HarQen. We’ve transitioned a lot over that time, but a mainstay has been the capture and playback of voice. Right now, HarQen can be called a Voice Asset Management company: we manage the voice data layer for enterprise companies.
  3. We know that audio is linear. It has a beginning and an end. You don’t know where you are at any given point unless you look at the timestamp.
  4. In the 2D world, we have several tools that guide and aid us: bolding, contrast, bullet points, color, layout, and so on.
  5. So what if we could apply those 2D tools to audio? How could we do it? My hypothesis: through metadata linked to timestamps (see the sketch after these notes).
  6. I did a demo of our two products, Voice Advantage and Symposia.
  7. All interactions with computers come down to one of two things: input and output. The key thing is that output is mostly useless or nonexistent without input. So the success of good audio consumption hangs on the related input.
  8. Audio is really nothing more than communication. Thus, we can learn a lot by thinking about audio in the context of communication theory. (I went into talking about various theories.)
  9. To date, experiences with audio have been mostly synchronous. The main way you interact with voice and audio is real-time communication. (Excluding music here.)
  10. Well, unless you count voicemail.
  11. Which we do at HarQen…
  12. Because it’s our competitor, just like email is Basecamp’s competitor.
  13. So perhaps a way to heighten the input (read: metadata generation) of audio, which would aid the output (read: listening to audio), is to rethink how we can make more audio interactions asynchronous?
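
To make the hypothesis from slide 5 a little more concrete, here is a minimal sketch of what timestamp-linked metadata might look like. This is purely illustrative: the `VoiceAsset` and `AudioAnnotation` types and the `annotationsNear` function are assumptions made for the example, not HarQen's actual data model.

```typescript
// Illustrative types only: not HarQen's actual data model.
interface AudioAnnotation {
  timestamp: number; // seconds from the start of the recording
  label: string;     // e.g. "question", "key point", "action item"
  note?: string;     // optional free-text detail
}

interface VoiceAsset {
  id: string;
  durationSeconds: number;
  annotations: AudioAnnotation[];
}

// The 2D-style affordance: jump to annotated moments instead of
// scrubbing linearly through the whole recording.
function annotationsNear(
  asset: VoiceAsset,
  seconds: number,
  windowSeconds = 5
): AudioAnnotation[] {
  return asset.annotations.filter(
    (a) => Math.abs(a.timestamp - seconds) <= windowSeconds
  );
}

// Example: a ten-minute recorded interview with two annotations.
const interview: VoiceAsset = {
  id: "interview-001",
  durationSeconds: 600,
  annotations: [
    { timestamp: 42, label: "question", note: "Tell me about your background" },
    { timestamp: 180, label: "key point" },
  ],
};

console.log(annotationsNear(interview, 40)); // -> the "question" annotation
```

The point isn't the specific shape of the data; it's that once metadata hangs off timestamps, audio gains the same scannable affordances (emphasis, structure, points of entry) that 2D documents already have.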