Making sound design accessible for all

Veosport approached Klean, the agency I founded and operated for nearly six years.

Veosport had just launched their first product, the Veosport Bluetooth headphones. They asked us to build an EQ app that could shape the sound of the headphones to each person's liking. They believed this would give their product a platform to differentiate itself and show off its versatility.

Veosport was backed by UFC Hall of Famer Urijah Faber. To start, they chose to niche down to the fitness space, specifically mixed martial arts. The product and app would have to appeal to this audience first, gain early traction, and then expand.

My role

Lead designer through the end-to-end process: discovery, user research, requirements, design, testing, directing engineering, client management, and support through launch.

The team

1 engineer, 1 database architect, 2 C-level stakeholders

Timeline

March 2017 - July 2017

The problem

Problem statement

How might we create a simple EQ app that allows users to create the perfect listening environment customized to their preferences?

Breakdown of the problem

Competition in the headphone space
The headphone space was crowded in 2017, making it difficult to stand out among industry heavyweights like Bose and Beats by Dre.
EQ is a hard concept
Most people aren't familiar with EQ and have either never used one or didn't realize they were using one.
Does EQ increase product stickiness?
The intent was to give the app away for free to further promote product stickiness and support future product iterations.

Market research

Researching mobile EQ apps

There weren't a ton of great examples. The two leading EQ apps at the time were EQu and Equalizer +. Both were quite cumbersome, and on the surface they looked like they required an audio engineering degree to use.

It wasn't clear what each control was doing, or whether you were making progress or just destroying the shape of your sound.

Competitor apps: EQu and Equalizer +

Researching professional-grade EQ software

I knew a bit about professional-level audio engineering from my music hobby on the side, so I decided to take a deep dive into the EQs the pros use. Many of these tools have been around for decades with small iterations each cycle. The patterns were well known and familiar to that segment.

Professional audio engineering software: FabFilter Pro-Q 3

One of the main things I wanted to address in our designs was the scope of changes we allowed. The reality was twofold:

1

The more parameters we allow you to control, the more difficult the app is to learn.

2

Your ears are very sensitive, and a Bluetooth headphone sits quite close to your eardrum. That proximity lets you hear micro adjustments more clearly, but it also makes you more sensitive to bigger adjustments.

This meant we could give users fewer parameters to control and a smaller sandbox to play in, and they would be more likely to achieve better results.

Surveys and tests in the field

I knew a decent amount about audio production going into this project. An advantage, sure, but I wanted to do my best to set those biases aside. I wanted to get a true baseline for where the average person’s knowledge of EQ rested.

Given our personas of MMA enthusiasts, we figured we’d go straight to the source and do some in-person field tests at the MMA gym.

We started by asking some baseline survey questions:

We then watched people play around with the competitor’s apps. We wanted to watch their movements and behaviors and determine points of frustration. There were many.

Key insights

Most people struggled with basic things like orienting themselves within the app, quickly finding the bass or treble controls, and increasing or decreasing the bass or treble.

Given that these people were working out while using their headphones, they didn’t want to spend a lot of time fiddling with settings. In fact, next to zero time.

The current market leaders in EQ failed the "on the go" test. It was taking people ~5 minutes to create basic settings, and they were hitting frustration around the 10-15 second mark.

Early prototypes

Exploring the basic interaction

Knobs vs. freeform

I believed that figuring out how users interacted with the app would create boundaries that the EQ parameters would then have to fall into.

In the professional audio space, the two leading patterns are:

1

Fixed - Manipulating EQ points with dedicated controls, constrained by set parameters.

2

Freeform - Dragging EQ points freely with your mouse, with fewer constraints.

Fixed model
Freeform model

In playing with both models, I noticed it was easier to screw up your EQ with the freeform model, yet the fixed model made it hard to visualize what you were doing: changes that were too subtle offered little visual or auditory feedback.

Hearing is believing

Another big issue when using EQ software is making changes while the tool is in bypass. It's a big internet meme in the audio production space: you move all these knobs around and drive yourself crazy to the point you believe you hear the changes, only to find out the EQ wasn't activated the whole time. The brain is a wonderful tool, but it can also be a weapon used against us.

To combat this, I believed we should find a way to allow users to make edits in real-time and hear the changes as they were happening.

I wanted to give users a visual and auditory experience that helped reinforce the changes that were being made, while also leading to better comprehension and reduced app learning time.

I believed a combination of freeform-looking visuals with the fixed model's reduced parameters and complexity was a good fit, not only for our users' level of EQ knowledge, but also given the accuracy constraints of mobile.

Diving deeper into EQ mechanics

Reducing parameters and complexity

Standard professional audio EQs have a vertical and horizontal range of motion. Frequency sits on the X axis and volume (gain) on the Y axis.

Essentially, each frequency has what's called a band, and each band can be increased or decreased. By increasing, you're allowing more of that frequency in. By decreasing, you're allowing less of it in. These are the basics of how major recordings are produced, car stereos are tuned, and headphones are calibrated.
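To make the band mechanics concrete, here's a minimal sketch of a single band - a bell-shaped boost around a bass frequency - using Apple's AVAudioUnitEQ. This assumes an iOS-style implementation; the actual Veosport stack isn't documented here, and the numbers are purely illustrative.

```swift
import AVFoundation

// A minimal sketch, assuming an AVFoundation-based build (not the shipped code).
// Frequency is the X axis, gain in dB is the Y axis.
let eq = AVAudioUnitEQ(numberOfBands: 1)
let band = eq.bands[0]
band.filterType = .parametric  // bell-shaped boost/cut around a center frequency
band.frequency = 100           // Hz - a bass-range center, purely illustrative
band.bandwidth = 1.0           // width of the bell, in octaves
band.gain = 6                  // +6 dB lets more of that frequency through
band.bypass = false
```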

Because the headphones went into your ear and sat close to your eardrum, my assumption was that most users would hear more of the impact of their changes. This could allow us to pull back and reduce the spread of change we allowed on both the vertical and horizontal axes of the EQ.

We first experimented with allowing each frequency band to be increased or decreased by 30 decibels (dB). This is fairly standard across the professional-grade EQs we researched. For our use case, it ended up being far too much range.

30 dB model
20 dB model
18 dB model
12 dB model

We pulled it back to 20 dB, then 18, 12, and finally 9.

9 dB was still plenty of control, but it kept users from making too big an adjustment. It also anchored nicely visually: 9 dB looks like a lot, and it is. It signals to the user that 3-6 dB is more likely the range of motion that's pleasing to the ears.

9 dB model

Finding the sweet spot of control

Professional EQs let you control every frequency band, but for most people that typically leads to poor results.

Most professional EQs today start with 5-10 bands for control. More can be added, but they give you a baseline. By controlling fewer bands, the user makes fewer adjustments, which leads to a more natural-sounding EQ.

I wanted to implement a similar philosophy. Give the user a starting point and subtly guide them for the best chance at success.

NOT FINAL UI

Making freeform models work on mobile

On desktop, freeform models let you start with a frequency band and adjust it vertically or horizontally. As soon as you move a band horizontally, though, you've changed the frequency you're manipulating. Again, frequency is the X axis.

I knew this would be very problematic on mobile and lead to poor results.

I opted to create a model that used 5 preset bands at specific frequencies. The user could move each one vertically, changing that frequency's volume, but that's it. The frequency stayed fixed.

I also added broad, familiar labels - bass, mids, treble - to help users make sense of what each band area was doing.
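Here's a rough sketch of that 5-band, fixed-frequency model, again assuming an AVFoundation-style implementation. The center frequencies and the ±9 dB clamp are illustrative stand-ins, not the shipped values.

```swift
import AVFoundation

// Five bands at fixed center frequencies; the user only ever changes gain.
let eq = AVAudioUnitEQ(numberOfBands: 5)
let centers: [Float] = [60, 250, 1_000, 4_000, 12_000]  // bass -> mids -> treble (illustrative)
for (band, hz) in zip(eq.bands, centers) {
    band.filterType = .parametric
    band.frequency = hz   // fixed - a band is never moved horizontally
    band.bandwidth = 1.0
    band.gain = 0         // the only parameter exposed in the UI
    band.bypass = false
}

// Vertical control only, clamped to the range allowed at this stage.
func setGain(_ dB: Float, forBand index: Int) {
    eq.bands[index].gain = max(-9, min(9, dB))
}
```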

Testing our first prototype

For our first test, we went back to where we started - the gym. We found busy people trying to get workouts in and asked for a couple minutes of their time.

We asked them:

1

Put on your favorite song.

2

Open the Veosport EQ app and create a custom EQ.

3

Boost the bass to your liking.

4

When you’re done, save it.

We tested with 5 people. 1 was able to complete all tasks with little friction. The other 4 completed them, but they struggled, and it became an annoying interruption to their workout.

Overall the feedback was:

Rapid iteration

Reducing horizontal confusion

With our testing feedback in hand, it was clear we were still giving users too much choice, and it was leading to poor results.

I opted to reduce our 5-band model to 3 bands: bass, mid, and treble. This was fairly familiar to folks, since we heard they remembered using something similar in their car stereo or TV setup.

Reducing vertical confusion

I believed we were exposing users to too much change vertically. Moving a frequency up even 9 dB is quite significant. Much more could not only lead to poor results but, worse, blown speakers.

I opted to reel in our vertical parameters and max them out at 6 dB. Users could increase or decrease the bass, mid, or treble frequencies by up to 6 dB.
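Sketched out, the final model might look something like this: three bands clamped to ±6 dB. The filter types and crossover frequencies below are my assumptions for illustration, not documented values.

```swift
import AVFoundation

// Final 3-band model: bass, mid, treble, each limited to +/-6 dB.
enum Band: Int { case bass, mid, treble }

let eq = AVAudioUnitEQ(numberOfBands: 3)
eq.bands[Band.bass.rawValue].filterType = .lowShelf     // boosts/cuts everything below ~200 Hz
eq.bands[Band.bass.rawValue].frequency = 200
eq.bands[Band.mid.rawValue].filterType = .parametric
eq.bands[Band.mid.rawValue].frequency = 1_000
eq.bands[Band.mid.rawValue].bandwidth = 1.5
eq.bands[Band.treble.rawValue].filterType = .highShelf  // boosts/cuts everything above ~6 kHz
eq.bands[Band.treble.rawValue].frequency = 6_000
eq.bands.forEach { $0.bypass = false }

// The UI never lets a band leave the +/-6 dB window.
func setGain(_ dB: Float, for band: Band) {
    eq.bands[band.rawValue].gain = max(-6, min(6, dB))
}
```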

Testing the second prototype

Again, we found busy people trying to get workouts in, and asked for a couple minutes of their time.

We asked them:

1

Put on your favorite song.

2

Open the Veosport EQ app and create a custom EQ.

3

Boost the bass to your liking.

4

When you’re done, save it.

We tested with 5 people. 4 were able to complete all tasks with little friction. Progress was being made.

How do we explain what EQ is?

We still needed to explain fundamentally what EQ was, but once in the app, users were navigating it well. I believed this could be handled in a couple of creative ways.

Hearing is believing

Again, you must hear the changes you’re making. Without that real-time feedback, it’s very difficult to deeply understand what you’re manipulating.

Presets

I believed presets could also help reinforce the changes that are occurring.

Users may also prefer to choose a preset for Rap or Podcasts instead of creating their own.

By letting users select presets both for ease and for exploration and discovery, we could make EQ easy to learn - no audio engineering degree required.
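One lightweight way to model this, sketched under the assumption of a Swift implementation (the names and values here are made up for illustration): a preset is just a label plus one gain value per band, applied through the same path a custom EQ takes.

```swift
import AVFoundation

// Illustrative model: a preset carries a name and one gain per band.
struct EQPreset {
    let name: String
    let gains: [Float]   // [bass, mid, treble] in dB, each within +/-6
}

let presets = [
    EQPreset(name: "Rap",     gains: [ 5, 0, 2]),
    EQPreset(name: "Podcast", gains: [-2, 4, 1]),
]

// Applying a preset just writes its gains into the fixed bands - hearing the
// change in real time is part of how presets teach the concept.
func apply(_ preset: EQPreset, to eq: AVAudioUnitEQ) {
    for (band, gain) in zip(eq.bands, preset.gains) {
        band.gain = gain
    }
}
```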

How do we make recalling EQ easy?

Nothing is worse than losing saved progress. What was that EQ I liked again? The worst.

Colors

I believed something as simple as coloring each preset and custom EQ with a unique shade could really help reinforce what your favorite settings were.

Unique names are a start, but names and bold colors are even better.

Bringing the EQ shape to the dashboard

Another interesting solution that I believed could help with recall was showing the unique EQ shape for each custom or preset EQ.

Not initially, but over time users would start to learn which lines made which sounds.

The line, the color, and the name in combination could really help steer the user and further imprint the settings in memory.
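As a rough illustration of how that dashboard shape could be rendered, here's a SwiftUI sketch with assumed names (not the shipped implementation; the original 2017 app predates SwiftUI). The saved gains map directly onto a small line.

```swift
import SwiftUI

// Sketch: draw a small line through each band's gain, mapping +/-6 dB onto the
// thumbnail's height so 0 dB sits in the vertical middle.
struct EQShape: Shape {
    let gains: [CGFloat]   // e.g. [5, 0, 2] for bass / mid / treble

    func path(in rect: CGRect) -> Path {
        var path = Path()
        guard gains.count > 1 else { return path }
        let stepX = rect.width / CGFloat(gains.count - 1)
        for (i, gain) in gains.enumerated() {
            let y = rect.midY - (gain / 6) * (rect.height / 2)
            let point = CGPoint(x: CGFloat(i) * stepX, y: y)
            if i == 0 {
                path.move(to: point)
            } else {
                path.addLine(to: point)
            }
        }
        return path
    }
}

// Usage: stroke the shape in the preset's unique color next to its name.
// EQShape(gains: [5, 0, 2]).stroke(Color.red, lineWidth: 2)
```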

Other unique interactions

On/off toggle

This was something a bit unique to the app. We wanted to allow users to turn on presets and then turn the app off completely so their headphones could return to normal. A similar concept is turning a filter on and off in a photo app.

I wanted to really overemphasize when the app was on or off. With a photo app, it's clear when the filter has been turned off. When dealing with audio, your brain can often trick you into hearing things that aren't there, so I wanted to make the state visually unmistakable.
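A minimal sketch of that on/off behavior, again assuming an AVFoundation-based build: flipping the EQ's bypass flag passes audio through untouched while keeping the current settings intact, and the UI mirrors that state loudly.

```swift
import AVFoundation

// Turning the app "off" doesn't discard anything - it just bypasses the EQ so
// the headphones return to their stock sound. Turning it back on restores the
// exact same curve.
func setEQEnabled(_ enabled: Bool, on eq: AVAudioUnitEQ) {
    eq.bypass = !enabled
    // The UI should make this state unmistakable (e.g., dim the curve and
    // labels), since ears alone can't be trusted to notice the difference.
}
```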

Editing

Presets could not be edited, but I wanted to explore what it might look like if we allowed users to make edits to their custom EQs.

The most obvious starting point is no edits, no deletes. That would’ve been fine for v1, but I wanted to see if there were any other lightweight options.

I didn't want to open up full-blown editing where users could change their EQs. That wasn't necessary for now. Users could just make a new one.

What if they screwed up their EQ though? What if they did that a few times?

It would be quite annoying to not be able to clear those out.

I explored a multi-select edit-and-delete interaction. Users would be able to select multiple custom EQs and delete them. It would set the stage for future editing features.

I also explored a simpler pattern: single select, swipe to delete.

At this phase of the product, single select was more appropriate and lighter weight to implement.

We could've also gone with no delete function and waited until users asked for it, but we decided it was lightweight enough to implement in this version.
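For reference, here's roughly what the single-select, swipe-to-delete pattern looks like in a SwiftUI sketch (placeholder names; the shipped 2017 app would have been built differently):

```swift
import SwiftUI

// Swiping a row reveals a Delete action; no multi-select or edit mode needed.
struct CustomEQListView: View {
    @State private var customEQs = ["Morning Run", "Sparring", "Cooldown"]  // placeholder names

    var body: some View {
        List {
            ForEach(customEQs, id: \.self) { name in
                Text(name)
            }
            .onDelete { offsets in
                customEQs.remove(atOffsets: offsets)
            }
        }
    }
}
```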

Launching Veosport EQ

The launch was successful, and we ended up coming in ahead of our timeline.

The feedback has been warm, and users have seen it as a cool perk of their Veosport headphones.

Time will tell whether this leads to an increase in stickiness, but at the least, it's increasing customer delight with something cool and unexpected.