10.2 Redux

This is a continuation of the story from last time: recording a symphony orchestra on 24 tracks for 10.2-channel playback systems. What we do may be specialized, but I think it has implications for how many other recordings might be made, and that's the reason to report it. Last time we covered the original recording. When it came time to mix the captured channels, several things became important.

Spot Mic Time Delay
The first and foremost of these is the ready availability of time delay on spot mics. When we first tried mixing, things were vague. Imaging was imprecise and difficult. Timbre wasn't great, and mics needed EQ'ing. Not as good as we hoped. Then we got the bright idea (duh!) that we'd better put in the time delays, delaying the spot mics to somewhat after the main mic pickup, by 15–20 ms, before equalizing. Yeow. You'd have thought we'd re-equalized everything, moved mics on the original recording date, and so on, all to the better. It makes that much difference: always, always delay and pan first, and then you will probably find, as we did, much less need for EQ'ing, gain-riding solos, etc. Here it is, folks, a real reason to justify that new digital console in the budget. Show your boss this column: I say it makes that much difference.

I first learned more about this several years ago at AES Copenhagen. There was an 8 AM Sunday morning workshop on Delaying Spot Microphones. Now I don’t know about you, but I’m not at my best at 8 AM on a Sunday, that being just about the only time of the week typically with a bit of leisure. So I arrived a little bit late, expecting the panel to outnumber the attendees. Was I wrong: the room was packed and I had to stand for the whole workshop. This is a hot topic, at least in classical music recording circles, and there’s plenty of other related material recorded for which it can be of use.

There is debate. If the delay is set exactly to the actual physical delay, there are problems: small perturbations in the delay (for, say, an instrument at a different distance than the exactly calculated one) will comb filter badly and be more trouble than they're worth. The trick is to get the spot mic in slightly late so that the precedence effect won't take over and dominate as you bring up the spot mic fader. Without delay, the spot mic seems to have an extremely critical level, since you are adding in an earlier signal, so both level and time are changing at one and the same time. With delay, the level seems more naturally applied, and as you bring it in you add clarity from the spot mic rather than making the instrument snap to the foreground.
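To put a number on it: the delay you want is the travel-time difference between the instrument-to-spot and instrument-to-main paths, plus the 15–20 ms margin mentioned above. Here's a minimal sketch in Python; the distances, speed-of-sound figure, and function name are my own illustration, not anything from our console setup.

```python
# A minimal sketch of the spot-mic delay rule described above.
# Assumption: straight-line distances from the instrument to the spot
# mic and to the main pickup; 343 m/s is used for the speed of sound.
# The 15-20 ms margin is the figure quoted in the column.

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def spot_mic_delay_ms(dist_to_main_m: float,
                      dist_to_spot_m: float,
                      margin_ms: float = 15.0) -> float:
    """Delay to apply to the spot mic so it lands after the main pickup.

    The physical offset is the extra time the sound needs to reach the
    (more distant) main microphone; on top of that we add a margin so
    the precedence effect keeps the image anchored to the main pair.
    """
    physical_offset_ms = (dist_to_main_m - dist_to_spot_m) / SPEED_OF_SOUND * 1000.0
    return physical_offset_ms + margin_ms

# Example: a woodwind spot 1 m from the instrument, main pickup 12 m
# away. The physical offset is about 32 ms; delaying the spot by about
# 47 ms puts it safely after the main pickup.
if __name__ == "__main__":
    print(f"{spot_mic_delay_ms(12.0, 1.0):.1f} ms")
```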

Scaling Functions
We mix in a fairly small space, and the mix was destined for a fairly large auditorium, about 80 feet deep and with front speakers left and right around 30 feet apart. How do you do this mix? If you’re Disney, you build a stage that matches the venue you’re going to play, and/or wait to mix in the final venue. We could do neither.

There are principally four differences between the small mix room and the large theater.

1. Since the distances are all larger in the big theater, the time delays are longer. This affects phantom images between left and center, say. For a listener near the centerline in the larger theater, the center-channel sound is going to be earlier relative to left by more than it is in the small room, so phantom images panned halfway between left and center will get sucked into the center through the precedence effect. How to overcome this? How about delaying the small-room monitor loudspeaker feeds so that they match the time delays in the big room (see the sketch after this list)? I'd say patent pending, but then you couldn't do it, so I don't, and you can because you read this magazine! By the way, it works.
2. The physical sound pressure level isn't the same in the large room as in the small one for the same impression of loudness. The correction depends on the difference in room volume and on reverberation time. For us it also changed in the large room between an empty house and a full one, because the auditorium was fairly reverberant and adding people changed its acoustics (unlike a movie theater, which is more constant from empty to full). So we used a combination of running the larger room at a higher calibration level and the subjective judgment of a number of experts to get the level right and match the small-room loudness.
3. Something similar happens for the "house curve" equalization. Everyone competent who has tuned a large-room sound system to flat knows this and thus uses some kind of high-frequency rolloff (there's a lot behind this, including measurement technique, so I'll have to leave it there for now). The question for us is, "What is the difference in HF rolloff between the mix room and the large room that gives the same perceived balance?" The answer is complicated, but we think we've got a handle on it that keeps the original mixers happy with the translation, and that's no mean feat.
4. That reverberation-time bugaboo is really an issue separate from those above. It affects things such as loudness and timbre judgments. Perhaps we'll take this up in later columns when we've had more time in more 10.2-equipped sites.
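As a concrete illustration of item 1, here is a hedged sketch of the monitor-feed delay matching. It assumes you know each loudspeaker's distance to the reference seat in both rooms; the distances and the Python form are invented for illustration.

```python
# A sketch of the monitor-feed delay trick from item 1.
# Delaying each small-room feed by the extra travel time it would have
# in the big room (normalized so no delay is negative) makes the
# inter-channel arrival timing at the mix position match the theater.

SPEED_OF_SOUND = 343.0  # m/s

def monitor_delays_ms(big_room_m: dict, small_room_m: dict) -> dict:
    """Per-channel delays (ms) for the small-room monitor feeds."""
    raw = {ch: (big_room_m[ch] - small_room_m[ch]) / SPEED_OF_SOUND * 1000.0
           for ch in big_room_m}
    floor = min(raw.values())          # shift so every delay is >= 0
    return {ch: d - floor for ch, d in raw.items()}

# Example with made-up distances (meters) for a center-line listener:
big   = {"L": 10.6, "C": 9.0, "R": 10.6}
small = {"L": 2.2,  "C": 2.0, "R": 2.2}
print(monitor_delays_ms(big, small))
# L and R feeds get about 4 ms more delay than C, mimicking the
# theater's earlier center-channel arrival.
```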

Arbitrary Channel Panner
Our early work was difficult. With only a 2-channel panner, a lot of bussing and editing was necessary to move something around the room. (Pan center to left, cut, pan left to left wide, cut… whew. And a lot, really a lot, of bussing.) So we signed on to Digidesign development and rolled our own arbitrary channel panner. Johnny Lee, a student in the Immersive Audio Lab of the Integrated Media Systems Center at USC, made our lives a whole lot simpler. Maybe this will show up some day as a commercial product. Who knows?
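For readers curious what an arbitrary channel panner does under the hood, here's a minimal sketch of one common approach; this is not the USC/Digidesign code, which I haven't seen published, and the speaker layout and Python form are my own illustration. It applies an equal-power pan between whichever adjacent pair of loudspeakers brackets the target direction.

```python
# Sketch of a pairwise equal-power panner for an arbitrary ring of
# loudspeakers. Azimuths are in degrees, 0 = front center, increasing
# clockwise; only the two speakers bracketing the target get signal.

import math

def pan_gains(target_deg: float, speakers_deg: list) -> list:
    """Return one gain per speaker for an equal-power pairwise pan."""
    n = len(speakers_deg)
    az = [a % 360.0 for a in speakers_deg]         # normalize to 0..360
    order = sorted(range(n), key=lambda i: az[i])  # ring order
    t = target_deg % 360.0
    gains = [0.0] * n
    for k in range(n):
        i, j = order[k], order[(k + 1) % n]        # adjacent pair
        span = (az[j] - az[i]) % 360.0 or 360.0    # arc from i to j
        frac = ((t - az[i]) % 360.0) / span
        if frac <= 1.0:                            # target lies on this arc
            gains[i] = math.cos(frac * math.pi / 2.0)  # equal-power law
            gains[j] = math.sin(frac * math.pi / 2.0)
            return gains
    return gains

# Example: a five-speaker ring, source panned 15 degrees right of center
# splits energy equally between the speakers at 0 and 30 degrees.
print(pan_gains(15.0, [0, 30, 110, 250, 330]))
```

A console or workstation panner would sweep `target_deg` under automation, which is exactly the "pan, cut, pan, cut" drudgery a real multichannel panner eliminates.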

String Timbre
One of the hardest things to get right, as many of you probably know, is the timbre of violins. This is the subject of ongoing research. Why should this be? We know the timbre of other instruments well from live concerts, so why is it that massed strings show up honkiness and other problems so readily? What we know for now is that this is a problem; our scaling functions seem to work, and our well-equalized lab puts out a mix that works on a well-equalized large-scale system with the corrections mentioned above, but it remains a continuing avenue of investigation. We just seem to have a better sense memory for the sound of strings in a good orchestra than for other musical sounds.

Surround Array
In small-scale systems we found that if we portray reverberation or ambience over 10 channels and allow listeners the freedom to move their heads around, they can locate the channels. So, besides imaging all round, one would like a system that can also produce a completely immersive experience, that is, one where you cannot localize the loudspeakers. Otherwise, just what is the meaning of "surround sound"? So we have used dipole-radiating surrounds, as a choice made by the program producer/engineer, as an alternative to direct radiators all round, and items that you do not want to localize, such as ambience, are directed there.

But this use of dipoles breaks down in large rooms, where people sit at many angles to the loudspeakers. Furthermore, there are no really high-powered dipoles with which to drive a large space. For such spaces we fall back on the motion-picture solution: arrays of smaller loudspeakers. In the particular case at hand, we had an advantage. The room was designed as a live theater and had multiple lighting coves over the audience, and we were able to use these to good effect, bouncing sound around and off the ceiling from the coves. While this seems perhaps ugly at first glance (all those comb filters!), it actually sounded quite good: we were able to produce quite enveloping and nondirectional sound from an array of loudspeakers in the ceiling and side-wall lighting coves, at the base of the back wall, and in a projection port. The thing to watch out for is calculating the sound power carefully so that each array of speakers can keep up with the headroom of one screen channel, giving constant headroom among the channels.
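To make that headroom bookkeeping concrete, here is a back-of-envelope sketch. It assumes the array units sum incoherently, so N of them buy about 10·log10(N) dB over one; all the SPL figures are illustrative, not measurements from our installation.

```python
# Back-of-envelope array sizing so an array of small loudspeakers can
# keep up with the maximum SPL of one screen channel.
# Assumption: broadband program from spread positions sums on a power
# basis, i.e. N identical units gain about 10*log10(N) dB over one.

import math

def speakers_needed(screen_max_spl: float, array_unit_max_spl: float) -> int:
    """Smallest N such that N array units match one screen channel."""
    deficit_db = screen_max_spl - array_unit_max_spl
    return max(1, math.ceil(10 ** (deficit_db / 10.0)))

# Example: a screen channel good for 105 dB SPL at the reference seat,
# each small array unit good for 96 dB. A 9 dB deficit needs 8 units
# (10*log10(8) = 9.03 dB).
print(speakers_needed(105.0, 96.0))
```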

Subwoofer
One key to good sound reproduction is obviously frequency range. Using a common subwoofer crossed over below 40 Hz, bass management extended the range downward for all of the channels. This was truly impressive, as the only place for the 800-lb. box was on stage, visible to the audience. Four 15-inch drivers in a Whise 616 provided the bass from 14 to 40 Hz for all of the channels, and up to 120 Hz for the LFE channel. The LA Times reported it as an 800-lb. amplifier!
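For readers who haven't wired one up, here's a minimal bass-management sketch along those lines. The filter topology and orders are my own assumptions (the column only says "crossed over below 40 Hz"); a real system would use matched Linkwitz-Riley sections and careful level calibration.

```python
# A minimal bass-management sketch: everything below the crossover is
# summed into the subwoofer feed, and the LFE channel passes to the
# sub up to its own limit. Filter choices here are illustrative only.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000          # sample rate, Hz
XOVER_HZ = 40.0     # main-channel crossover, per the column
LFE_TOP_HZ = 120.0  # upper limit of the LFE channel

lp = butter(4, XOVER_HZ, btype="low", fs=FS, output="sos")
hp = butter(4, XOVER_HZ, btype="high", fs=FS, output="sos")
lfe_lp = butter(4, LFE_TOP_HZ, btype="low", fs=FS, output="sos")

def bass_manage(channels: np.ndarray, lfe: np.ndarray):
    """channels: (num_ch, num_samples) main feeds; lfe: (num_samples,).
    Returns (highpassed main feeds, mono subwoofer feed)."""
    mains = np.stack([sosfilt(hp, ch) for ch in channels])
    sub = sum(sosfilt(lp, ch) for ch in channels) + sosfilt(lfe_lp, lfe)
    return mains, sub
```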

After the Internet2 conference for which this was done in October, I rolled the 800-lb. gorilla across to the cinema theater in which I teach, now renamed for Frank Sinatra. I made measurements on it and on the built-in conventional system (four 18-inch drivers in their own boxes, flush-mounted in a large front baffle wall). The Whise was about equal in sensitivity and in other ways, except for frequency range: the conventional system had a −3 dB corner around 30 Hz, the Whise at 14 Hz. I measured each at eight points in the room with a high-resolution system and found them both to be pretty flat in their passbands. What I didn't get to do was a blind A/B subjective comparison; now that would have been interesting. Is extending the bandwidth to infrasonic frequencies worth it? I'd like to know. I know it was certainly impressive reproducing the major bass drum hits of the Copland fanfare.

Derattling
We did something that might not be too common but which I do on every job I can, and that is to derattle the room. The bass definitely sounds cleaner afterward, because some rattles are typically quite bad and not too hard to fix. In this case, we ran a sine wave at about 90 dB SPL from perhaps 50 to 400 Hz, using an oscillator with very fine frequency resolution (read: analog!). The reason you need such fine frequency resolution is that you have to hit every single frequency, because the Q's of the mechanical resonances are so high. We spent an afternoon with a roll or two of duct tape and the rattle generator, got lots of rattles out of the theater, and the bass was cleaner thereafter.
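A digital stand-in for that analog oscillator might look like the following sketch. The 1 Hz step, 2-second dwell, and the sounddevice playback module are my assumptions, not the column's; the point is that the step must be small compared to the bandwidth of a high-Q resonance.

```python
# A slow stepped-sine "rattle generator" from 50 to 400 Hz. The step
# is kept very small because a high-Q mechanical resonance (Q of 50
# at 100 Hz is only a 2 Hz wide band) is easy to step right over.

import numpy as np
import sounddevice as sd  # assumed available for playback

FS = 48000
STEP_HZ = 1.0        # frequency resolution of the "oscillator"
DWELL_S = 2.0        # long enough to hear a rattle start up
LEVEL = 0.5          # set the power amp so this lands near 90 dB SPL

t = np.arange(int(FS * DWELL_S)) / FS
for f in np.arange(50.0, 400.0 + STEP_HZ, STEP_HZ):
    tone = LEVEL * np.sin(2 * np.pi * f * t)
    sd.play(tone, FS, blocking=True)   # listen, then tape down what buzzes
```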

New Work
My colleagues have just done the first 8-channel recording transmitted remotely over Internet2, from Miami to USC. Currently this is linear PCM at a 48 kHz sample rate and 16-bit resolution, but the resolution is to be upgraded to 24-bit as the software develops. This solves the problem of what to do for a multichannel control room nicely: bring the sound to you! To get from the eight transmitted channels to something approaching our on-location 24-track recording, the Virtual Microphone principles are used, as has been described in these pages in the past.
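For the curious, the arithmetic on that stream is straightforward; a quick sketch:

```python
# Raw bit rate of the Internet2 stream described above: eight channels
# of linear PCM at 48 kHz, 16-bit today versus 24-bit later.

def pcm_mbps(channels: int, rate_hz: int, bits: int) -> float:
    return channels * rate_hz * bits / 1e6

print(pcm_mbps(8, 48000, 16))  # 6.144 Mbit/s as transmitted now
print(pcm_mbps(8, 48000, 24))  # 9.216 Mbit/s at 24-bit resolution
```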

Wrap Up
These two columns have described ongoing work in "pushing the limits of the envelope" in making software. Let's hope the work continues and that some of these techniques prove useful to you in your multichannel pursuits. Next time: bringing it to a theater. The Paradiso in Memphis is equipped with an 18-channel sound system (73 power-amp channels, 60,000 watts, in a movie theater!) to accommodate all formats, from mono-era tracks to emerging 10.2-channel ones.

Surround Professional Magazine