sound

Hello?

A common thing to happen in a play is a telephone that rings. There are many ways to ring a telephone onstage, but in the show I'm currently running, our telephone, a standard desk phone with bells inside it, must ring through a wireless speaker. So, that means playing a recording of a telephone ring through the speaker. Easy, right? But what if the actor picks up the phone in the middle of a ring? You can't just stop the sound cue: in a real telephone, the clapper would stop hitting the bells, but the bells would naturally ring out. So, you split the sound into two parts: the ring, and the ring-out. You play the ring, and when the actor picks up the phone, you play the ring-out. But now you have to listen carefully to the phone ringing, and NOT play the ring-out if the actor has picked up the phone between rings! ACK. That sounds like stressful work to me, the intrepid sound operator.

Computers to the rescue!

I wrote a little AppleScript for QLab 3 that listens to the phone ringing for me and determines whether the phone is mid-ring or between rings! If you've read this far, I'm going to assume you've had to deal with this before, so I'm going to post the script here.


tell application id "com.figure53.qlab.3" to tell front workspace
  set cueTime to action elapsed of cue "1" -- change "1" to target your repeating full ring cue
  if (cueTime mod 6.0) < 1.9 then -- 6.0 is the length of a full ring, and 1.9 is just before the ring-out: edit as necessary
    start cue "2.1" -- in this case, cue "2.1" is a quick (0.1s) fade out of the full ring
    start cue "2.2" -- "2.2" is a cue with only the ring-out portion
  else
    start cue "2.3" -- cue 2.3 is a devamp cue, to stop the full ring cue when it completes
  end if
end tell
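The decision the script makes is just modular arithmetic on the cue's elapsed time. Here's the same logic sketched in Python, using the example values from the script above (the 6.0-second period, 1.9-second threshold, and cue numbers are all placeholders you'd adjust for your own recording):

```python
# Decide which cue(s) to fire when the actor picks up the phone,
# based on how far into the current ring cycle we are.

RING_PERIOD = 6.0   # length of one full ring cycle, in seconds
RING_END = 1.9      # point in the cycle where the clapper stops striking

def pickup_action(elapsed):
    """Return the cue number(s) to fire, given the ring cue's elapsed time."""
    position = elapsed % RING_PERIOD
    if position < RING_END:
        # Mid-ring: fade the full ring quickly ("2.1") and play the
        # ring-out tail ("2.2") so the bells decay naturally.
        return ["2.1", "2.2"]
    # Between rings: just devamp ("2.3") so the ring cue stops cleanly.
    return ["2.3"]

print(pickup_action(13.0))  # 13.0 mod 6.0 = 1.0 -> mid-ring: ['2.1', '2.2']
print(pickup_action(16.0))  # 16.0 mod 6.0 = 4.0 -> between rings: ['2.3']
```

Same idea, different syntax; the AppleScript version just asks QLab for the elapsed time and fires the cues directly.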


So, the entire sequence in the QLab Cue List would look like:

sample QLab Cue List

I'll have more from the theatre realm soon, so stay tuned!

Another chance to hear my work!

Hi folks! Coming up in September, people in the Ann Arbor area can grab tickets to see Liberty's Secret at the Michigan Theater! I really wish I could be in town to attend this screening.

A candidate you can laugh at without crying for America • photo by Tripp Green courtesy of Liberty's Secret

For those who don't know or haven't been following, Liberty's Secret is a film written, directed, and composed by U of M School of Music, Theatre and Dance professor Andy Kirshner, whom I met while getting my BFA. Andy and I have collaborated several times, and when he approached me to help with the post-production sound on his film, I of course agreed! I worked on the sound edit while still in Los Angeles, and shortly after moving to Indianapolis, traveled up to Ann Arbor to mix the film in the Performing Arts Technology department's excellent facilities. It also gave me a chance to work with fellow UM grad Dave Fienup, all-around nice guy (and Detroit-area sound guy, for all your sound production and post-production needs)! Dave provided foley for Liberty's Secret, and did a bang-up job.

If anyone is able to attend this screening on my behalf, please give my regards to cast and crew, and let me know how it sounded!

New content!

Hey folks! Just wanted to draw your attention to a new space I've created on this site, called "MUSIC & DESIGN," which is a place for me to share some of the things I've worked on. Squarespace, my elegant and gracious web creation and hosting platform, has a template for a "music album," so I'm trying it out as a format for sampling my work. Unfortunately, I haven't yet figured out how to display all the metadata that I've so lovingly typed into each track, but I'll keep you updated if/when I do. In the meantime, check it out!

Audio Levels and Metering: Part 1 | Art of the Guillotine

This article presents the whole loudness and metering issue from the very basics. I've found the meters in iZotope's Insight indispensable while editing and pre-mixing projects in my small home studio, before either delivering to clients or going to a larger studio that's going to cost me (or my clients) much more money. I'm looking forward to part 2. Thanks to Designing Sound for linking to this article.

The Audio Producer's Guide to Loudness

I've been dealing a lot with this topic lately, as I'm working on my second feature-length film mix in as many months. When you're mixing levels for a project that's 90 to 100 minutes long, over the course of several days or even weeks, it's easy to lose track of the loudness of the mix over time. Having a properly calibrated listening environment is key, but tools that help you track your overall loudness over time are a great help as well.

Today as you mix audio productions you most likely monitor levels with a peak meter — those two little bars that jump up and down in tandem with the waveform — and you know those meters don’t always line up with what you hear! You look at two very different pieces of tape on the meter (say, your studio-recorded voice against an interviewee on the phone), tweak those two voices until they appear the same on the meter, and your ears tell you they play back at quite different volumes. You might decide to forego the peak meter for the RMS meter, which can provide a small advantage over a peak meter, but they too do not take perception into account. This is a problem that a new audio measurement method, loudness, can help you solve. Finally there’s a way to simplify levels.
— Rob Byers
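To see the peak-versus-RMS mismatch concretely, here's a small stdlib-Python illustration (a toy comparison, not any particular meter's algorithm): a sine and a square wave that peak at exactly the same level, yet carry about 3 dB different RMS energy - and a loudness meter goes further still by weighting for perception.

```python
import math

def peak(samples):
    """Highest absolute sample value - what a peak meter shows."""
    return max(abs(s) for s in samples)

def rms(samples):
    """Root-mean-square level - closer to signal energy."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 48000  # one second at 48 kHz
sine = [math.sin(2 * math.pi * 440 * i / n) for i in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]

print(peak(sine), peak(square))  # both ~1.0 -> identical on a peak meter
print(rms(sine), rms(square))    # ~0.707 vs 1.0 -> ~3 dB apart on an RMS meter
```

Two signals that "appear the same on the meter" can clearly differ in energy, which is exactly the mismatch Byers describes.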

Using Logic Pro to generate "air"

A while back, I linked to a Designing Sound article by Doug Murray (whom I later went on to work for on Dawn of the Planet of the Apes!) about using convolution reverb to generate room tone to fill holes in dialog tracks. At the end of that post, I speculated about using Logic Pro's Space Designer plug-in, since I didn't (and still don't) own a convolution reverb for ProTools. Well, I finally had a reason to sit down and try it out, and the results were pretty great.

Using Space Designer to generate endless room tone!


I was really pretty happy with the results. The general process:

  • I cut in ProTools, so when I needed a piece of fill, I'd copy-and-paste a clip of room tone onto a new track labeled "FILLSeed", consolidate it (OPTION+SHIFT+3) into a new file, and name it according to the character, room, and reel, e.g. "FILLSeed_Chris_BR_R1". I called it "FILLSeed" because I didn't want to confuse this short clip of room tone with the synthesized version that would come out of Logic later.
  • Switching over to Logic, clicking on the disclosure triangle next to the IR Sample label in Space Designer brings up a menu for importing a sound file as your new impulse response.
  • Make sure to set the Dry level to "0" (which in less confusing terms would be -∞), and set the "Rev", or wet signal, to "max". Space Designer is a multi-channel plug-in, so it will always come out as stereo; I set the Input slider, over on the left side of the window, to the mono setting.
  • Turning on the Test Oscillator insert plug-in (with white noise, output at around -50 to -60 dB) on my AIR SOURCE track in Logic, white noise starts pouring into Space Designer, gets convolved with the impulse response of room tone that I just imported, and sweet magical room tone comes pouring out!
  • Space Designer does have built-in EQ, so if you need to tweak it a little bit with some high or low-pass/shelf, it's really easy to do that right in the plugin window.
  • I set the input of a second track to the bus output of my AIR SOURCE track, put it in record mode, and recorded a chunk of fill. Drop that new recording into your ProTools session, and cut it in. Huzzah!
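Under the hood, what Space Designer does with that imported file is plain convolution: every input sample triggers a scaled copy of the impulse response, so steady white noise convolved with a short room-tone clip yields continuous synthetic room tone. A minimal pure-Python sketch of the operation (direct-form convolution for clarity; real plug-ins use FFT-based convolution for speed, and the signals here are made-up stand-ins):

```python
import random

def convolve(x, ir):
    """Direct convolution: each input sample triggers a scaled copy of the IR."""
    y = [0.0] * (len(x) + len(ir) - 1)
    for n, sample in enumerate(x):
        for k, h in enumerate(ir):
            y[n + k] += sample * h
    return y

# Hypothetical stand-ins: a short "room tone" clip acting as the impulse
# response, and low-level white noise as the excitation signal.
ir = [random.uniform(-1, 1) * 0.1 for _ in range(256)]
noise = [random.uniform(-1, 1) * 0.01 for _ in range(1024)]

tone = convolve(noise, ir)
print(len(tone))  # 1279 samples: 1024 of noise in, 1024 + 256 - 1 out
```

As long as you keep feeding noise in, room tone keeps coming out - which is why the Test Oscillator trick produces endless fill.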

So, there you have it. For about 1/5th the cost of Altiverb, you can buy a copy of Logic Pro and have your own capable convolution reverb. And, you get a pretty nice DAW with some great features of its own, to boot! With the improvements to Core Audio in OS X, having two DAWs open at the same time, using the same hardware, is actually possible, making this type of workflow far less painful.

 

POST SCRIPT (11/09/2015): While this process has been rendered less useful by features in software like iZotope's RX and its Ambience Match algorithm, it's absolutely still relevant if you a) don't own RX or b) don't have it available immediately.

Pro Tip: Comb Filtered Audio

I'm not going to name names, but I'll just say that what I'm about to show you is actually destined for a real live actual television show, with actual famous people on said show. However, if you've ever wondered why sound people are special, it's because we do our best to avoid things like this:

This is bad.


This is a spectrogram of a clip of dialog. See all those shadowy spaces that make lots of horizontal lines? That's what audio people call "comb filtering," and it sounds terrible. This is most likely the result of a mic going into a mixer (or camera) and being combined with itself, with the second signal delayed slightly - a millisecond or less. Headphones, people! Listen to your sound before you hit record!

Not good.
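For the curious, the math behind those stripes: summing a signal with a delayed copy of itself, y[n] = x[n] + x[n - d], cancels every frequency whose half-period lines up with the delay, carving evenly spaced notches across the spectrum. A quick stdlib-Python sketch, assuming a 0.5 ms delay at 48 kHz:

```python
import math

FS = 48000   # sample rate in Hz
DELAY = 24   # 24 samples = 0.5 ms between the two summed signal paths

def comb(x, d=DELAY):
    """Sum a signal with a copy of itself delayed by d samples."""
    return [s + (x[n - d] if n >= d else 0.0) for n, s in enumerate(x)]

def amplitude(x):
    return max(abs(s) for s in x)

# The first notch falls at FS / (2 * DELAY) = 1000 Hz: there, the delayed
# copy arrives exactly half a cycle late and cancels the original.
tone_1k = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS)]
tone_500 = [math.sin(2 * math.pi * 500 * n / FS) for n in range(FS)]

print(amplitude(comb(tone_1k)[DELAY:]))   # ~0.0  -> 1 kHz is wiped out
print(amplitude(comb(tone_500)[DELAY:]))  # ~1.41 -> 500 Hz survives, ~3 dB hotter
```

A notch every 2 kHz, all the way up the spectrum - hence the regularly spaced horizontal gaps in the spectrogram.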

THE NIGHTMARE at Sundance!

So, I'm wrapping up the dialog and ADR edit on The Nightmare this weekend, while my cohorts are finishing up music and sound design cues.  Monday morning, we start the mix process, wrapping up (hopefully) on the 10th or 11th of January.  We'll have to be efficient, as The Nightmare is going to be shown at one of the world's most popular film festivals - The Sundance Film Festival!  Check it out on the schedule, and if you're in Park City, UT at the end of January, go see it!

http://www.sundance.org/projects/the-nightmare

Catch up

Oof. So... October happened.

Quick recap:

  • The play for which I did sound design and original music has opened at the VS. Theater in Los Angeles. It's called Completeness, by Itamar Moses, and it's a Los Angeles premiere! Go check it out, it runs until Dec 7th.
Believe it or not, there are 6 speakers hiding in this set!


  • I'm just over halfway through my first quarter teaching an Audio Production class at Cal State University, Los Angeles. It's going pretty well so far, at least based on the test and homework scores I'm seeing.
  • I'm getting back into some more freelance editing for ASAP (Amalgamated Sound And Picture), cutting FX and dialog for animated shows, in particular an educational web series called "ABC Mouse."
  • This month, I'll begin working on a documentary feature called The Nightmare, directed by Rodney Ascher (director of Room 237). I'll be co-supervising the sound post-production with Jonathan Snipes, who you may remember helped me complete the post on Excess Flesh, my last indie feature.
  • Also, looking for projects that are getting started in late December/early January. Let's talk!

More to come soon. Stay tuned...

Walter Murch on Dense Clarity - Clear Density

In conversation with Andy Kirshner, friend and U of M professor, and his wife, the topic of mixing came up: how modern films have hundreds, sometimes literally 1000+, tracks available, and how these colossal projects get mixed. That led us to this article from Walter Murch at transom.org, which I had never read. I'm so glad I have now. I'm going to quote a rather large section from the article, which I hope will lead you to click through and read more. It's certainly worth it:

The general level of complexity, though, has been steadily increasing over the eight decades since film sound was invented. And starting with Dolby Stereo in the 1970’s, continuing with computerized mixing in the 1980’s and various digital formats in the 1990’s, that increase has accelerated even further. Seventy years ago, for instance, it would not be unusual for an entire film to need only fifteen to twenty sound effects. Today that number could be hundreds to thousands of times greater.

Well, the film business is not unique: compare the single-take, single-track 78rpm discs of the 1930’s to the multiple-take, multi-track surround-sound CDs of today. Or look at what has happened with visual effects: compare King Kong of the 1930’s to the Jurassic dinosaurs of the 1990’s. The general level of detail, fidelity, and what might be called the “hormonal level” of sound and image has been vastly increased, but at the price of much greater complexity in preparation.

The consequence of this, for sound, is that during the final recording of almost every film there are moments when the balance of dialogue, music, and sound effects will suddenly (and sometimes unpredictably) turn into a logjam so extreme that even the most experienced of directors, editors, and mixers can be overwhelmed by the choices they have to make.

So what I’d like to focus on are these ‘logjam’ moments: how they come about, and how to deal with them when they do. How to choose which sounds should predominate when they can’t all be included? Which sounds should play second fiddle? And which sounds – if any – should be eliminated? As difficult as these questions are, and as vulnerable as such choices are to the politics of the filmmaking process, I’d like to suggest some conceptual and practical guidelines for threading your way through, and perhaps even disentangling these logjams.

Or– better yet — not permitting them to occur in the first place.
— http://transom.org/2005/walter-murch-part-1/

Oh, and by the way, in this one instance, READ THE COMMENTS. Murch himself engages the readers and goes into more detail about many points. Who knew a comments section could be not only readable, but informative and enjoyable!


*article thumbnail photo of Walter Murch mixing Apocalypse Now from rogerebert.com

The Details That Matter | Designing Sound

Randy Thom (you know of him, I guarantee it) wrote a guest post for the website Designing Sound (click the title link above), in which he discusses the art of sound design, and how practitioners of this art have to make choices when it comes to how much, or how little, detail to provide with sound.

I find this an interesting topic of discussion given my main source of work the last 6 months or so: animation. In animation, the sound editor has to provide all the sonic details, as there is no production audio that was recorded along with the images. Therefore, it's a continuing set of choices regarding what sounds need to be there to make the story clear and focused, what should be there to make the "world" a lively and active place, and what sounds might be there to highlight and enhance the mood, action, or other emotional elements.  Working in animation has definitely improved my decision-making skills in this area, and I will be the first to admit that I'm still developing, learning, and honing my skills as a sound editor and designer with the help of my employers and fellow editors.

Creating a Unified Voice | Designing Sound

This article from designingsound.org (click on the post title) is a good starting position for this conversation. I've worked in a lot of different places, with different people, doing different things: video games, multimedia presentations, plays, musicals, television shows, and films (both indie and major studio). In none of these capacities have I worked alone - always as part of a team. Working as part of a team will always have unique challenges, be they working conditions, personalities, quality of tools or materials, or size of budget. We, as sound people, always seem to be fighting for elbow room at the creative table, fighting for more input and respect from older, more established disciplines, like lighting, costumes, and scenery.

There’s been a push to give sound a better seat at the creative table in each of our respective mediums. It’s not a new idea; it’s been sought for a long time. We seek to assert ourselves as story-tellers and artists, not mere “technicians”…to serve another’s vision without being subservient to the visual. We know that we can add to, and drive, the story…and we want that opportunity.
— Shaun Farley, designingsound.org

I like the point of this article. The way I see it, we have to first and foremost do our jobs to the best of our abilities, and let our work speak loudly. The quality of our work, of our storytelling, will get us into the conversation with those who recognize its power. We also have to remember that it's not our story, but the one that we're helping to tell.

In addition to designingsound.org, the source for this post, hat tip to the Tonebenders (tonebenders.net, @thetonebenders, and a great podcast) for pointing out the article. Check them out, too!

Back in the Freelance Saddle

Faithful Readers! 

As an audio professional, I suppose this day was inevitable, but after nearly 2 years as a veritable staff sound editor, I am back on the market as a freelance sound editor-slash-mixer-slash-designer-slash-engineer. 

If you, or perhaps someone you know, have any audio-related needs, please feel free to pass on my name, address, phone number, email address... Check out the Credits page you might have noticed linked at the top of this page... Let's talk. I can help you. Really. Seriously, I can.