by Tomlinson Holman
- The wheels sometimes seem to take a long time, but actually they are grinding exceedingly finely. It wasn’t so many years ago that the first International Alliance for Multichannel Music was held. At that time, new forms of discs that would offer better audio quality were considered far in the offing by the major companies. The definition of better quality at the time only considered increases to sample rate and word length, with perhaps a notion that coding schemes other than PCM might be considered. Stereo ruled. But IAMM raised consciousness about spatial sound, with attendees from major hardware companies and a smattering of record company people.
Interestingly, that balance of personalities still holds today: the most interested are the hardware people, the next most are the production people and early adopters at home (elsewhere they’re called consumers, but isn’t that just too capitalistic!), and, trailing behind, are the record companies, except for a few small companies that are, once again, out in front. The CD format was made in the stereo store by Telarc Records with Frederick Fennell and the Cleveland Symphonic Winds. Ask anyone who was there at the time. A 1979 review in Fanfare said: “In most cases, no recording can come anywhere close to Telarc’s digital process for impact and glory of sound.” The startling improvement over LPs sold the format to thousands of people.
Time passes, and in its fullness we see the limitations, the largest of which, to our way of thinking, is the number of channels. The AES set up a Task Force on High-Capacity Audio that came to some conclusions about what was needed to drive the future of audio. Its report was downloaded hundreds of times, and may have had some impact on the DVD and SACD camps. Hard to say what, except for this: those “higher quality” media quickly evolved to multichannel status. Blame home theater, blame IAMM, blame the constraints imposed by stereo, but they’re here. Now, the primary reason they will sell to large numbers of people is their multichannel capability.
Everybody’s jumped in, and it’s a free-for-all that keeps life lively. One of the most interesting things going on has been in the watermarking area. Now we all earn our living, directly or indirectly, from sound sold to the public (or you wouldn’t be a surround professional, right?). And we’ve got to be worried that a completely transparent medium, with lots of capacity in all ways, will be instantly copyable. On the other hand, many of us are adamant that watermarking not affect audio quality, and we’ve had to trust industry golden ears invited into listening tests under the control of proponents.
You might have heard about it over the last year. The SDMI Challenge was set out for a three-week period last October. Six sub-methods were put up on a Web site, with techniques known to coding and security experts. Five Princeton University professors, along with others from Rice and Stanford, met the challenge. They broke all the codes that were available to break. But their results were suppressed by a threat of a lawsuit. Well, the dam broke in August, when their paper went public at a Usenix conference, published with the foreknowledge of the proponents. Now the professors are suing to be sure that their freedom of expression won’t be suppressed in the future: the tables seem to have turned. Princeton backs them, just as a free-speech-loving institution should.
Here’s what they reported. One scheme involves adding echoes into the signal. They vary in time, but from the responses shown in the paper, and guessing a little about scales that were neglected, the levels would be plainly audible. How did they get past the listeners? Probably by careful selection of material; a pink noise test alone would have shown this scheme up. The next scheme involves notches in the frequency response that move around. The National Institute of Standards and Technology ruled these out in the Copy Code days. The third scheme adds noise at a specific frequency: moving the frequency by de-tuning by a small fraction kills that one. And so forth. It seems anything anyone can construct can be broken, and relying on the kindness of proponents to keep their algorithm secret doesn’t work. Once the researchers were on the right track, information was even gleaned from the proponent’s patent. Whew.
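To see why the echo scheme is so fragile, here is a minimal sketch of textbook echo hiding and the standard cepstral analysis that exposes it. This is a generic illustration assuming NumPy, not the SDMI proponents’ actual algorithm (which was never published); the delay value, echo level (`alpha`), and function names are all hypothetical. The point is that the embedder’s own “secret” (the echo delay) sticks out as a single peak in the real cepstrum of the marked signal, which is exactly the kind of analysis the researchers applied.

```python
import numpy as np

def embed_echo(signal, delay, alpha=0.1):
    """Echo hiding: mix in a delayed, attenuated copy of the signal.

    alpha=0.1 is an echo 20 dB down -- loud, consistent with the
    column's observation that such levels would be plainly audible.
    """
    out = signal.copy()
    out[delay:] += alpha * signal[:-delay]
    return out

def detect_echo(signal, max_delay=200):
    """Recover the hidden delay from the peak of the real cepstrum.

    An echo at delay d multiplies the spectrum by (1 + alpha*e^{-j2*pi*f*d}),
    so the log magnitude gains a ripple of period 1/d, which the inverse
    FFT turns into a spike at quefrency d.
    """
    spectrum = np.fft.rfft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # Skip the first few lags, which are dominated by overall spectral shape.
    return int(np.argmax(cepstrum[10:max_delay]) + 10)

# A white-noise "host" stands in for pink noise test material.
rng = np.random.default_rng(0)
host = rng.standard_normal(1 << 16)
marked = embed_echo(host, delay=73, alpha=0.1)
found = detect_echo(marked)
```

Run on broadband noise, the detector recovers the embedded delay directly, with no key and no cooperation from the proponent — which is why careful selection of listening material was likely the only thing keeping the echo inaudible in the tests.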
It may be the central conundrum of technology in our times.