A multi-disciplinary designer’s journey in field recording, sound design, and music.

Musical Sound Design for Installations

Posted: April 12th, 2018 | Author: | Filed under: interactive audio, sound design, theory

Want a challenge? Try to play back interface sounds on the show floor at CES. [Intel Booth, CES 2012.]

For those who might not know, for the last decade I’ve earned my living designing digital installations: multi-touch interactive walls, interactive projection mapping, gestural interfaces for museum exhibits, that sort of thing. Sometimes these things have sound; other times they don’t. When these digital experiences are sonified, whether they’re imparting information in a corporate lobby or entertaining inside a museum, clients always want something musical over something abstract, something tonal over something mechanical or atonal.

In my experience, there are several reasons for this. [All photos in this post are projects that I creative-directed and created sound for while I was the Design Director of Stimulant.]

Expectations and Existing Devices

It’s what people expect from computing devices. The computing devices that surround me almost all use musical tones for feedback or information, from the Roomba to the Xbox to Windows to my microwave. It could be synthesized waveforms or audio-file playback, depending on the device, but the “language” of computing interfaces in the real world has been primarily musical, or at least tonal/chromatic. This winds up being a client expectation, even though the things I design tend not to look like any computer one uses at home or work.

Yes, I strapped wireless lavs to my Roomba. The things I do for science.

Devices all around us also use musical tropes for positive and negative message conveyance. From Roombas to Samsung dishwashers, tones rising in pitch within a major key or resolving to a 3rd, 5th, or full octave are used to convey positive status or a message of success. Falling tones within a minor key or resolving to odd intervals are used to convey negative status or a message of failure. These cues, of course, are entirely culture-specific, but they’re used with great frequency.
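To make those tropes concrete: in equal temperament, an interval of n semitones is a frequency ratio of 2^(n/12), so both cues are a one-liner away. Here’s a minimal Python sketch with made-up tone lists – a hedged illustration, not the firmware of any actual appliance:

```python
A4 = 440.0  # reference pitch in Hz

def shift(freq_hz, semitones):
    # Equal-tempered interval: each semitone multiplies frequency by 2^(1/12).
    return freq_hz * 2 ** (semitones / 12)

# "Success" trope: tones rising through a major third to the fifth (a major triad).
success_tones = [A4, shift(A4, 4), shift(A4, 7)]

# "Failure" trope: falling tones resolving a minor third down.
failure_tones = [A4, shift(A4, -3)]
```

Feed those frequencies to any oscillator and you get the familiar “task complete” chirp or “error” sag.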

The only times I’ve ever heard non-musical, genuinely annoying sound, it’s been very much on purpose, and always to indicate extremely dire situations. The home fire alarm is maybe the ultimate example, as are klaxons at military or utility installations. Trying to save lives is when you need people’s attention above all else. Even then, overuse of such techniques can lead to alarm fatigue, which is a deep topic for another day. Do you really want a nuclear engineer to turn a warning sound off because it triggers too often?

The Problem with Science Fiction

Science fiction interface sounds often don’t translate well into real world usage.

This prototype “factory of the future” had to have its sound design elevated over the sounds of compressors and feeders to ensure zero defects…and had to not annoy machine operators, day in and day out. [GlaxoSmithKline, London, England]

My day job has been inextricably linked to science fiction: the visions of computing devices and interfaces shown in films like Blade Runner, The Matrix, Minority Report, Oblivion, and Iron Man, and even television shows like CSI, set the stage for what our culture (i.e., my clients) sees as future-thinking interface design. (There’s even a book about this topic.) People think transparent screens look cool, when in reality they’re a cinematic conceit so that we can see more of the actors, their emotions, and their movement. These are not real devices – they, and the sounds they make, are props to support a story.

Audio for these cinematic interfaces – what Mark Coleran termed FUI, or Fantasy User Interfaces – may be atonal or abstract so that it doesn’t fight with the musical soundtrack of the film. If such designs are musical, they’re more about timbres than pitch, more Autechre than Arvo Pärt. This just isn’t a consideration in most real-world scenarios.

Listener Fatigue

Digital installations are not always destinations unto themselves. They are often located in places of transition, like lobbies or hallways.

I’ve designed several digital experiences for lobbies, and there’s always one group of stakeholders I need to be aware of that my own clients never bring to the table: the front desk and/or security staff. They’re the only people who have to live with this thing all day, every day, unlike visitors or other employees, who’ll be with a lobby touchwall for only a few moments during the day. Annoy these lobby workers and you’re guaranteed that all sound will be turned off: they’ll unplug the audio interface from the PC powering the installation, or turn the PC’s volume to zero.

This lobby installation started with abstract chirps, bloops, and blurps, but became quite musical after the client felt the sci-fi sounds were far too alienating. Many randomized variations of sounds were created to lessen listener fatigue. There was also one sound channel per screen, across five screens. [Quintiles corporate lobby, Raleigh NC]

Music tends to be less fatiguing than atonal sound effects, in my experience, and triggers parts of the brain that evoke emotions rather than instinctual reactions (in ways that neuroscience is still struggling to understand). More specifically, sounds without harsh transients and with relatively slow attacks are more calming.

Randomized and parameterized/procedural sounds really help with listener fatigue as well. In game audio, the tools used in first- and third-person games to vary footsteps and gunshots are incredibly important for creating everyday sounds that don’t get stale and annoying.
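For the curious, the core of that game-audio pattern fits in a few lines: pick a variant, then nudge pitch and gain so no two triggers are identical. A minimal Python sketch with hypothetical filenames and ranges – real middleware does this with far more finesse:

```python
import random

def trigger(variants, max_pitch_shift=1.0, max_gain_db=3.0):
    """Per-trigger randomization: choose a sample variant, then apply small
    random pitch and gain offsets so repeated plays don't sound identical."""
    return {
        "sample": random.choice(variants),
        "pitch_semitones": random.uniform(-max_pitch_shift, max_pitch_shift),
        "gain_db": random.uniform(-max_gain_db, max_gain_db),
    }

# Hypothetical variant files for a single UI "tap" sound:
taps = ["tap_a.wav", "tap_b.wav", "tap_c.wav"]
event = trigger(taps)
```

Even three variants with slight pitch and gain jitter go a long way toward keeping a lobby installation from driving its neighbors mad.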

The Environment

Another reality is that our digital experiences are often installed in acoustically bright spaces, and technical-sounding effects with sharp transients can really bounce around untreated spaces…especially since many corporate lobbies are multi-story interior atriums! A grab bag of ideas has evolved from years of designing sounds for such environments.

This installation had no sound at all, despite our best attempts and deepest desires. The environment was too tall, too acoustically bright, and too loud. Sometimes it just doesn’t work. [Genentech, South San Francisco, CA]

Many clients ask for directional speakers, which come with three big caveats. First, they are never as directional as their specifications indicate. A few work well, but many don’t, so caveat emptor (they also come with mounting challenges). Second, their frequency response graphs look like broken combs, partly a function of how they work, so you can’t expect smooth reproduction of all sound. Finally, most are tuned to the human voice, so musical sound reproduction is not only compromised sonically, but anything lower than 1 kHz starts to bleed out of the specified sound cone. That’s physics, anyway – not much will stop low-frequency sound waves except large air gaps with insulation on both sides.

The only consistently effective trick I’ve found for creating sounds that punch through significant background noise is rising or falling pitch, which lends itself nicely to musical tones that ascend or descend. Most background noise tends to be pretty steady-state, so this can help a sound punch through the environmental “mix.”
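A sketch of that idea in Python, assuming an exponential (equal-ratio) sweep so the rise or fall sounds perceptually steady – the step counts and frequencies here are illustrative, not from any particular installation:

```python
def sweep_frequencies(f_start, f_end, steps):
    """Exponential frequency ramp: each step multiplies by the same ratio,
    so the sweep is heard as an even pitch rise or fall."""
    ratio = (f_end / f_start) ** (1.0 / (steps - 1))
    return [f_start * ratio ** i for i in range(steps)]

rising = sweep_frequencies(440.0, 880.0, 5)   # an octave up: pokes out of steady noise
falling = sweep_frequencies(880.0, 440.0, 5)  # the same gesture, descending
```

Because most environmental noise is steady-state, a moving pitch trajectory is what the ear locks onto.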

One cool trick is to sample the room tone and make the sounds in the same key as the ambient fundamental – it might not be a formal scale, but the intervals will literally be in harmony with one another.
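Here’s one way that trick might be sketched: estimate the room tone’s strongest frequency with a naive DFT peak-pick (fine for a short snippet; real tools would use an FFT), then derive interface tones at consonant ratios above it. The “room tone” below is synthetic so the sketch is self-contained:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Naive DFT peak-pick: returns the frequency of the strongest bin.
    Magnitude only, so the sign convention of the imaginary part is moot."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# Fake a 100 Hz room tone, recover its fundamental, derive tuned intervals.
rate, n = 1000, 200
room_tone = [math.sin(2 * math.pi * 100 * i / rate) for i in range(n)]
fundamental = dominant_frequency(room_tone, rate)
tuned = [fundamental * r for r in (1.0, 1.5, 2.0)]  # root, fifth, octave above the room
```

Any interface tone built from those ratios will sit consonantly on top of the ambient hum, whether or not it lands on a formal scale.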

Broadband background noise can often mask other sounds, making them harder to hear. In fact, having the audio masked by background noise unless you’re right in front of the installation can be a really good idea. I did a corporate lobby project where an always-running water feature sat right behind the installation we created; since it was basically a white noise generator, it completely masked the interface’s audio for passersby, keeping the security desk staff much happier and keeping the sonic landscape unobtrusive for the casual visitor and the everyday employee.

Music, Music Everywhere

Of course, sometimes an installation is meant to actually create music! This was the first interactive multi-user instrument for Microsoft Surface, a grid sequencer that let up to four people play music.

These considerations require equal parts composition and sound design, plus a pinch of human-centered design and empathy. It’s a fun challenge, different from sound design for traditional linear media, which usually focuses on being strictly representative or on re-contextualizing sounds recorded from the real world. Listen to the devices around you in real life and notice how frequently (pun intended) musical interface sounds appear. If you have experiences and lessons from doing this type of work yourself, please share in the comments below.


New EP Released: “Dissolved” Remix EP

Posted: April 19th, 2016 | Author: | Filed under: music, news, sound design, synthesis

Following 2015’s full-length album, Dissolver, I’m happy to announce the release of Dissolved, an EP with remixes by musicians from around the world. The US is represented by A Box in the Sea (WA), The Sight Below (NY), and r beny (CA); other contributors include The Heartwood Institute (aka Jonathan Sharp, UK), Hainbach (DE), and Fake Empire (NZ). The remixers’ techniques were as varied as their locations, from DAW-based arrangements to use of vintage hardware to recordings using dictaphones. The pieces exhibit a similar range of moods and styles as the original Dissolver LP, from lilting to tense, ambient to percussive, experimental to melodic.

Mastered by Rafael Anton Irisarri at Black Knoll Studio, Dissolved is available now via Bandcamp as a pay-what-you-like release.


New Album Released: DISSOLVER

Posted: September 14th, 2015 | Author: | Filed under: music, news, sound design, synthesis
My first full-length album is available today.

I’m thrilled to announce the release of Dissolver, the first album I’ve put out under my own name. It is available now as a digital download on Bandcamp (with a PDF booklet of additional artwork and liner notes, exclusive to Bandcamp). You can also buy it as a digital album on iTunes, Amazon, and Google Play.

I produced all the music and artwork, and it was mastered by Rafael Anton Irisarri at Black Knoll Studio. You can read more about this release at this blog’s sister site, music.noisejockey.net.

Work is already afoot on another release, so stay tuned here and on Bandcamp, Twitter, Soundcloud, and Instagram. Until then, please enjoy the noise, and reach out with what you think of Dissolver.


Metallic Convolution

Posted: July 17th, 2015 | Author: | Filed under: music, sound design
Sure, it’s fun to use long, non-reverb sounds as impulse responses…but what about short, percussive ones?

Convolution reverbs have been a staple of audio post-production for a good while, but as with most tools, I prefer to force them into unintended uses.

While I am absolutely not the first person to use something other than an actual spatial, reverb-oriented impulse response – bowed cymbals are amazing impulse responses, by the way – I hadn’t really looked into using very short, percussive impulse responses until recently. I mean, it’s usually short percussive sounds you’re processing through the convolution reverb. I found that it can add an overtone to a sound that can be pretty unique. Try it sometime!

(Coincidentally, today Diego Stocco is promoting his excellent Rhythmic Convolutions, a whole collection of impulse responses meant for just these creative purposes. Go check it out!)

Today’s sample is in three parts. First, a very bland percussion track. Then, the sound of a rusty hinge dropped from about one foot onto a rubber mat, recorded with my trusty Sony PCM-D50 field recorder. Then, the same percussion track through Logic Pro’s Space Designer (Altiverb or any other convolution reverb will do, of course) using the dropped hinge sound as an impulse response. It adds a sort of distorted gated reverb, adding some grit, clank, and muscle to an otherwise pretty weak sound.
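For anyone who hasn’t peeked under the hood, convolution itself is simple: every sample of the impulse response becomes a scaled, delayed copy of the input, which is exactly how a short clanky IR stamps its character onto a percussion hit. A toy Python sketch with a made-up three-sample “hinge” IR (real convolution reverbs use FFTs for speed, but the math is the same):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: each input sample triggers a scaled,
    delayed copy of the entire impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A lone click through a short, clanky three-tap "hinge" IR:
click = [1.0, 0.0, 0.0, 0.0]
hinge_ir = [1.0, -0.6, 0.3]      # bright transient with a fast metallic decay
wet = convolve(click, hinge_ir)  # the click takes on the IR's shape
```

With an IR only a few milliseconds long, the result reads less as reverb and more as an overtone or gated-distortion stamp – which is exactly the appeal.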


Why, and How, I Went Modular

Posted: July 11th, 2015 | Author: | Filed under: gear, music, sound design, synthesis
The joys of knobs. And patch points. And empty bank accounts.

While this may be old news to my followers on Soundcloud, Twitter, and Instagram, I’ve configured my first Eurorack-format modular synthesizer. These cabled amalgamations of faceplates, cables, circuits, and glowing LEDs are desirable, fetishized, addictive, and steeped in history. But, really, they’re just tools.

But what tools they are. Modular synthesizers are no longer relegated to the dustbin of history, nor to an underground elite (as well documented in the excellent documentary, I Dream of Wires). They have come roaring back, arguably leading the way in technical synthesis innovation, and are a commonplace instrument in many studios. This boom has even gotten the heavyweights of mass-market synthesizers, like Roland, to (re)release Eurorack modules, and pop musicians like Martin Gore to release all-modular electronic albums.

Everyone’s path to modular synthesis is different, as is mine. But why did I go modular? How did I even know where to begin? And how can I hope to stem the addictive nature of constantly adding low-cost modules, which has led Eurorack to be known as “Eurocrack”?

Embrace Limitations

It’s tempting to just buy flavor-of-the-month new products, but that way lies financial ruin and a studio full of stuff you don’t use. The way to stem the financial bleed and random module selection is to place limitations on the process. For me, the limitations were as follows.

  • I’ve got a significant investment in existing software and hardware that I want to honor and leverage, not duplicate. I’m designing an additional instrument, not building a new studio.
  • I have limited physical space in my home studio. Therefore my case will be on the small side, and that will enforce limits on the number of modules I can purchase.
  • I will “version” the modular synth and roadmap it, as if I were designing an actual instrument or a piece of software. I will buy modules in two initial rounds: a v0.5 to instantiate the most basic system and ensure that the workflow and gestalt of modular synthesis actually speak to me, and then a v1.0 that I will live with for a year. Only after user testing – my own, of course – can I roadmap a meaningful path to a v1.5, v2.0, and so on.

Everything’s a Design Problem

I’ve spent my career breaking down everything, from human relationship challenges to sound design, as a set of design problems. This helps frame the real problem so that solutions are more meaningful. So, I asked myself: What’s the problem I’m trying to solve, or am I just lusting after gear? (Spoiler: It’s both!)

  • My current system was lacking in two key areas: complex modulation options and the ability to support serendipity. My existing tools didn’t have much in the way of happy accidents, randomness, and cross-modulated signals and patterns of control. When the most interesting and complex synthesized rhythms and timbres I was creating were coming from Propellerhead Reason during my morning bus commute, I knew something was missing in my main studio.
  • Software is an expense; hardware is an investment. Software suffers from instability and, over the long haul, a danger of obsolescence that many hardware units do not.
  • I’ve already been enjoying the workflow of using external hardware as sound sources and then post-processing them digitally, or the other way around.

With the above considerations, the idea of a flexible, modulation-rich instrument to add to the stable seemed to make sense.

Plus: Blinky lights.

Create Rules of Engagement

Modular synths are, well, modular: Flexibility is what they’re all about. But you are building your own instrument. Without a sense for what you want to accomplish, you’ll overspend and not get what you really need…and, more dangerously, you won’t know when you should stop buying modules. Most of us don’t have the disposable income to buy modules willy-nilly.

Here were my rules of engagement for assembling my modular synth. These will change over time, but it helped me understand what the first iteration of this instrument would be. I wrote these down and re-read them any time I started to think about adding a new module.

  • No analog oscillators. While that may seem against conventional modular wisdom, I have a total of ten analog oscillators across four other devices. I’ve got this covered. Go for something really unusual as a sound source.
  • No effects. I know that even if I monitor a track with effects on, I always record dry and have effects as plugins or rendered to separate tracks. I use tons of plugins and stompboxes: I have effects covered already.
  • Go nuts with modulation. Having enough tools to generate and modify clock signals and control voltages will be critical, because I don’t have digital tools that excel at this. Get more modules that control modulation than produce sound (or, ideally, ones that can do both).
  • Don’t forget the DAW. I’ve got a significant investment in a computer-based audio workstation that should be leveraged, so ensuring that modulation and clock signals can drive the modular was critical.
  • Embrace multi-tracking. Look at the modular as a sound design station, instrument, or voice, not as a complete studio. Get enough expressive options to do drones, melodies, and unusual percussion…but I don’t have to do all these things at once. That also means no more than 2 channels in or out of the modular synth.

The Result

You can read all the key specs on my modular synth, its output and effects subrack (told you I’d break some rules!), and its “controller skiff” on ModularGrid.net, so I won’t geek out about it here. So far, though, so good.

  • There’s nothing mysterious about putting a modular together or how it’s used, as long as you have a good grasp of signal flow inside a typical synthesizer. It doesn’t take any technical skills other than using a screwdriver, reading directions, and doing simple math around power consumption.
  • I’ve got solid sync with my DAW.
  • I’ve got an instrument that can do things none of my other instruments can, and vice-versa.
  • I’ve got methods to interface with effects pedals, external semi-modular instruments (even with different interconnects), my DAW, and even my iPad. It’s deeply integrated into the rest of my studio.
  • It’s small. Full, but small. It can even be self-contained if I decide to embrace limitations and create sounds or music with only this instrument, outside of my studio or otherwise away from my DAW – even alongside my vintage Roland TR-606 drum machine.
  • It’s capable of percussion, melody, and drones that can modulate in complex and random ways over seconds or many minutes.
  • Modular users have a reputation for noodling and sound designing but never actually completing songs or projects. It’s like an aural sandbox. The satisfaction of signal routing is autotelic – it’s its own reward: constant discovery, and following or rejecting conventional wisdom. It’s also extremely meditative once you’re past the initial learning curve.
  • I’ve already broken the “no effects” rule, but only with modules that can be “self-patched” and act as sound sources in their own right.
  • Even though I only purchased digital oscillator modules, analog modulators like LFOs can often be used as analog oscillators when pushed into the audible range, as can filters that self-oscillate when their resonance is set high. I wound up with four analog oscillators without even knowing it.
  • Once you realize that anything can be routed into anything, all synthesis rules go out the window. LFOs and filters can be oscillators, as mentioned above, but clocks can be triggers, envelopes can be clocks, envelopes can be LFOs, audio amplitude can modulate anything…that’s the mind implosion and creativity that modular synthesis brings.
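On the “simple math around power consumption” point above: it amounts to summing the datasheet current draws on each rail and comparing the total against the power supply’s rating, ideally with a safety margin rather than running the PSU at its limit. A sketch with hypothetical numbers (module names and draws are made up, not my actual rack):

```python
# Hypothetical +12 V current draws in mA, as listed on module datasheets.
draws_ma = {"oscillator": 70, "filter": 60, "modulator": 40, "output": 50}

def rail_fits(draws, supply_ma, safety_margin=0.25):
    """Sum a rail's draws and check them against the PSU rating,
    keeping a safety margin so the supply never runs at its limit."""
    total = sum(draws.values())
    return total, total <= supply_ma * (1 - safety_margin)

total_ma, fits = rail_fits(draws_ma, supply_ma=1000)  # 220 mA against a 750 mA budget
```

Run the same check for the -12 V and +5 V rails; it’s the whole spreadsheet, three lines at a time.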

Over time, will I jettison older gear and go all Eurorack? Will I dispense with the computer entirely for making music? Probably not. But I’m sure my system will slowly expand, change, and evolve with my interests, just as I’ve shifted from oils to acrylics to pastels to pencils to pixels in my visual arts career. The initial rules I started with will morph, change, get relaxed, and get updated. My initial configuration has gaps and weaknesses, but nothing’s perfect. And now I’m good to go with a new palette of sonic colors.

Now, if you’ll excuse me, I have field recordings to run through my modular.


Lighthouse Winds

Posted: April 22nd, 2015 | Author: | Filed under: field recording, gear, nature recording, sound design

My past winter holiday involved a sea kayak crossing to Las Islas de Los Todos Santos, a pair of islands four nautical miles offshore of Ensenada, México. We were greeted by – and partied with – a nearly toothless lighthouse keeper, and slept in an old lighthouse built in the 1930s.

We had two days of 15-25 knot winds, and as you might imagine, a lighthouse is a weather-beaten place. The winds were howling through the old windows and making amazing sounds.

Only one problem: I had a small sea kayak with no room to even pack a handheld field recorder. As I’ve said many times before, the best field recorder is the one you have with you, and in this case, my only option was my iPhone. In glorious, shimmering mono.

Today’s sounds are these howling winds, recorded with the Voice Memos app on iOS. I’m not about to make a habit of using my iPhone as a field recorder, even with aftermarket microphones, but hopefully this goes to show that sometimes you do the best with what you have. Especially if the sounds and location are literally once-in-a-lifetime events.


More Antler Shenanigans

Posted: December 20th, 2014 | Author: | Filed under: found sound objects, music, sound design

A lush nest of sonic discomfort.

Following on my last post, I’ve continued to play around with my recordings of deer antlers through a contact microphone. Today’s sound is almost entirely from that session, with only a handful of synthesized sounds, all triggered by LFOs and other random modulations. The manipulations of the deer antler sounds were done in the very weird, pretty unstable, and utterly unique Gleetchlab application, as well as iZotope Iris, which did an amazing job of figuring out the root frequency of the flute-like and cello-like bowed resonances.


Deer Antlers as Instrument

Posted: December 15th, 2014 | Author: | Filed under: found sound objects, music, sound design

Most years I host a “white elephant” party: Bring gifts you were given that are pretty bad, re-wrap them, and then you pick from the pile and laugh at the bizarre stuff you unwrap. Last year, I wound up with a pair of deer antlers.

I don’t hunt, yet I have a thing for taxidermy. I have no idea why.

As they sat in my studio, I thought back to an interview I did with Cheryl Leonard for the Sonic Terrain blog a few years ago. I remember her making instruments from limpet shells and other organic objects. Why was I not exploring the sonic possibilities of this strange object on my shelf?

Deer antlers are bone, not hair, so they are riddled with hollow channels, and are extremely tough. The main thing I tried was to explore their resonance, with a cello bow. I had to lay a good amount of rosin on the bow, but they did resonate. The sound is hissy, atonal, but with some pronounced fundamentals and overtones…just not in relationships that one usually considers musical. I used a Barcus Berry 4000 contact microphone and recorded onto a Sound Devices 702 field recorder.

When I hear an interesting sustained sound with too many frequencies, or odd frequency relationships, I usually go to one place to create something musical out of it: iZotope Iris. It’s a very creative tool for making playable virtual instruments out of pretty much any sound. In this case I also used New Sonic Arts’ Granite granular synthesis plugin for several layers. It all sounded very breath-y, like a somewhat melodic whisper. I mixed it with some LFO-driven rhythms in Reason and a bassline and drone from Madrona Labs’ Aalto. It was all put together in Logic Pro X with very few effects, lightly compressed by Cytomic’s The Glue.

Deer antlers, even processed through modern software, aren’t the most flexible or sonically soothing instruments around, but this article can at least serve as a reminder to explore everything around us for its interesting sonic possibilities. You never know what you’ll find.


Samplr As Mellotron

Posted: November 26th, 2014 | Author: | Filed under: music, sound design, synthesis

Letting my fingers do the walking, and talking, in Samplr for iOS.

A while back, Sonic State did a neat profile of Alessandro Cortini’s live synth setup for Nine Inch Nails, in which he described his use of a four-track cassette player as a mellotron. He’d record many, many repeating loops, one loop per track, and then use the mixing faders to fly out certain chords or drones. Pretty fun use of old technology.

In playing around with Samplr for iOS, it struck me that it could behave like a mellotron of sorts, too. Sure, Samplr (and other similar apps, like Curtis, csSpectral, and Sector) is great for mashing stuff up in an extreme way, but I wondered if I could play it a bit more like a piano.

Since Samplr only slices samples into increments of four, I output a little more than two octaves of a synth sound in G# minor. Then it was as simple as loading that sample in, selecting 16 slices, and playing Samplr as a keyboard.
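To make that concrete: assuming one note per slice, 16 slices line up with a bit more than two octaves of a natural minor scale. A hypothetical Python sketch of the mapping (semitone offsets from the root, not actual audio):

```python
# Natural minor scale, as semitone offsets from the root within one octave.
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def slices_to_scale(n_slices=16):
    """Map sample slices to ascending scale degrees, so sweeping across
    the slices left-to-right walks up the scale like keys on a keyboard."""
    return [NATURAL_MINOR[i % 7] + 12 * (i // 7) for i in range(n_slices)]

degrees = slices_to_scale()  # 16 slices span a bit more than two octaves
```

Slice 0 is the root, slice 7 is the octave, slice 14 is two octaves up, and the last two slices spill just past – which matches what I rendered out of the synth.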

The results were odd, glitchy, loose, and interesting. I liked that I could hold chords while also dialing in reverb, delay, even playback loop length. The loop points were obvious, but at the right lengths and tempos they became rhythmic or simply textural. Playing single melodies by dragging my finger between slices simulated piano runs. Then, inspired by Mr. Cortini’s solo albums, I decided to make a track with samples from just one single synth, played back from Samplr.

Today’s audio clip features a live recording in multiple passes. All of the sounds are from the TAL BassLine-101, an amazing emulator of the Roland SH-101. I output only two audio files, each with a unique patch but in the same pitch range and scale, and the rest of the variations are from Samplr.


Angrient!

Posted: October 1st, 2014 | Author: | Filed under: gear, music, sound design, synthesis

Hand-built for drone-y aggression.

I love handmade soundmaking devices, but outside of my beloved Grendel Drone Commander, a lot of the weird noise boxes and effects I have are, well, noisy. They tend to be aggressive, loud, and blippy. Some accept MIDI, some accept CV, some accept no sync signal at all.

One evening I wondered if I could coax them into some semblance of ambient drones, to loosen myself up and not record to a fixed tempo, and to not get too “precious” with editing in post. Somehow the angry nature of these devices just seems to bleed through anyway. Or is that my angry nature?

So, the result of this cathartic experiment was “angry ambient.” Or, angrient.

This track features the following:

  • All takes recorded live into Logic Pro X: No sync to anything, no MIDI, no CV.
  • One track of a Bleep Labs Nebulophone, with its alligator clip clamped onto a key for a sustained drone, recorded through a Red Panda Particle pedal set to Reverse, both tweaked live. The dry and effected tracks were recorded simultaneously.
  • Another droned Nebulophone track went through the Particle set to Delay, and then through a Seppuku Memory Loss pedal with its clean microchip inserted, all three tweaked live. The dry and effected tracks were recorded simultaneously.
  • One track of the RareWaves Grendel Drone Commander, recorded 100% dry. That thing needs no love, especially when its bandpass filter gets overdriven at low frequencies. Yummy.
  • One track of the Bleep Labs Bleep Drum, played live in Noise mode, then run through Glitchmachines’ Fracture plugin and the Michael Norris Spectral Partial Glide filter. That’s what generates the bright, granulated shimmers. These are the only digital effects plugins on any channel.
  • Volume automation was done in one pass, “live.”
  • The whole thing is run through U-He’s Satin tape emulator plugin for some additional harmonics and mid-high sweetening.

It is what it is.
