Zoom!
Jack Brummet brought this amazing movie to my attention.
Zoom blends seamlessly between complex images. The idea is simple (and clever): place a series of images, one inside the next, and zoom in, expanding each new image as you go.
The thing is, to make the Zoom look good, you have to filter the "unzoomed" or small images and connect them seamlessly to the larger, containing image. Since this doesn't make any sense without a picture, look at this credits page for Zoom, and click on the X's in the boxes - they show the pieces that make up the larger Zoom picture.
This got me thinking.
Graphics programmers have to worry about “Level of Detail” (LOD), which means creating different shapes and textures for objects depending on how far away they are from the camera.
The trick is to blend between the different models without anyone noticing, so that as one approaches an object, an increasing amount of detail is visible.
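To make the graphics side concrete, here is a minimal sketch of distance-based LOD selection with a small crossfade band near each switch point; the structure, names, and numbers are hypothetical, not taken from any particular engine.

```cpp
// Minimal sketch of distance-based LOD selection with a crossfade band.
// All names and thresholds here are made up for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct LodLevel {
    const char* meshName;   // e.g. "tree_high", "tree_low"
    float maxDistance;      // use this level out to this distance
};

// Pick the LOD to draw and a 0..1 blend factor toward the next (coarser)
// level, so the switch can be faded instead of popping.
void selectLod(const std::vector<LodLevel>& lods, float distance,
               float fadeBand, int* index, float* blendToNext) {
    for (int i = 0; i < (int)lods.size(); ++i) {
        bool last = (i == (int)lods.size() - 1);
        if (distance <= lods[i].maxDistance || last) {
            *index = i;
            if (last) { *blendToNext = 0.0f; return; }  // nothing coarser to fade to
            float fadeStart = lods[i].maxDistance - fadeBand;
            *blendToNext = std::clamp((distance - fadeStart) / fadeBand, 0.0f, 1.0f);
            return;
        }
    }
}

int main() {
    std::vector<LodLevel> lods = {
        {"tree_high", 20.0f}, {"tree_medium", 60.0f}, {"tree_low", 200.0f}};
    float distances[] = {5.0f, 19.0f, 45.0f, 150.0f};
    for (float d : distances) {
        int idx; float blend;
        selectLod(lods, d, 5.0f, &idx, &blend);
        std::printf("distance %6.1f -> %-12s blend %.2f\n", d, lods[idx].meshName, blend);
    }
}
```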
Zoom does a terrific job of matching the smaller, filtered images to the larger, original images.
As I considered the problem of managing the level of detail of graphics, it occurred to me that the same principles - filtering and visibility control - could be mapped onto sound.
Does your stereo have a "Loudness" button? You're supposed to turn that on when your stereo is playing at low volume, to increase the bass levels, so you can hear them better. (Why is it called "Loudness" instead of "Quietness"? I don't know, but this article tells you more than you want to know about it.)
Basically, the Loudness button changes the way the music sounds at low volume. Sound at low volume is akin to graphics that are far away.
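For illustration only, here is a tiny sketch of what a loudness control does in principle: the quieter the playback level, the more the low end gets lifted before it reaches the speakers. Real designs follow equal-loudness contours; the curve and constants below are invented.

```cpp
// Sketch of volume-dependent bass boost, the rough idea behind a "Loudness"
// button. The numbers are assumptions chosen only to show the shape of the
// behavior, not taken from any real circuit.
#include <algorithm>
#include <cstdio>

// volumeDb: master volume relative to a reference level (0 dB = loud enough
// that no compensation is needed). Returns the low-shelf boost to apply.
float loudnessBassBoostDb(float volumeDb) {
    const float maxBoostDb = 10.0f;        // cap the boost (assumed value)
    const float compensationRate = 0.25f;  // dB of boost per dB of attenuation (assumed)
    float attenuation = std::max(0.0f, -volumeDb);
    return std::min(maxBoostDb, attenuation * compensationRate);
}

int main() {
    float volumes[] = {0.0f, -10.0f, -20.0f, -40.0f};
    for (float v : volumes) {
        std::printf("volume %6.1f dB -> bass shelf +%.1f dB\n",
                    v, loudnessBassBoostDb(v));
    }
}
```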
Graphics people generate LOD (Level of Detail) models that differ with distance (both in shape and texture) and carefully blend between them. By the same logic, it makes sense to me that if someone truly wants to make great audio, they should mix several versions of their music at differing audio levels, and then the hardware - your stereo - should blend between these versions to present the best possible sound for the current volume setting.
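Here is a minimal sketch of that blending idea, assuming the track ships with several mixes, each mastered for a different playback level; the struct, mix names, and tiny sample buffers are all hypothetical.

```cpp
// Sketch: ship several mixes of the same track and crossfade between the two
// nearest ones at the listener's current volume setting.
#include <cstdio>
#include <vector>

struct Mix {
    const char* name;            // e.g. "late_night_mix" (hypothetical)
    float targetVolumeDb;        // playback level this mix was mastered for
    std::vector<float> samples;  // stand-in for the real audio data
};

// Blend the two mixes bracketing the current volume, sample by sample.
// 'mixes' must be sorted by targetVolumeDb, quietest first.
float blendedSample(const std::vector<Mix>& mixes, float volumeDb, size_t n) {
    if (volumeDb <= mixes.front().targetVolumeDb) return mixes.front().samples[n];
    if (volumeDb >= mixes.back().targetVolumeDb)  return mixes.back().samples[n];
    for (size_t i = 1; i < mixes.size(); ++i) {
        if (volumeDb <= mixes[i].targetVolumeDb) {
            float t = (volumeDb - mixes[i - 1].targetVolumeDb) /
                      (mixes[i].targetVolumeDb - mixes[i - 1].targetVolumeDb);
            return (1.0f - t) * mixes[i - 1].samples[n] + t * mixes[i].samples[n];
        }
    }
    return mixes.back().samples[n];
}

int main() {
    std::vector<Mix> mixes = {
        {"late_night_mix",  -40.0f, {0.10f, 0.12f}},
        {"living_room_mix", -20.0f, {0.30f, 0.35f}},
        {"party_mix",         0.0f, {0.80f, 0.90f}}};
    std::printf("blended sample 0: %.3f\n", blendedSample(mixes, -30.0f, 0));
}
```

With the sample data above, a -30 dB volume setting produces an even blend of the late-night and living-room mixes; a volume outside the range of the mixes just plays the nearest one.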
There are new audio formats with very high sample rates, like 192 kHz, which most people think is overkill. (Yes, SACD discs sound better, but this is probably because they were mastered with great precision instead of the normal crap you hear. After all, very few SACD discs come out each year, and Sony insists that they be mastered with great precision. A smart guy in the AES of the PNW told me this - but I can't remember his name.)
Instead of making the music higher resolution, how about storing multiple mixes made for different volume levels? I think that could be a much better way to use all those extra bits.
I think this is a frickin' brilliant idea. Of course, no one will do it, because it would be a ton of work. Still, it could be cool in special circumstances, for instance in some kind of art installation.
This article © 2004 by Stephen Clarke-Willson. All Rights Reserved.
I think Stephen's idea of LOD for audio would be good for a master bus application only, as audio enters airspace at varying volumes to reach the listener. I personally don't like the sound of my "loudness" button on my stereo, but I'll admit I use it in the car where the noise floor drowns out the quieter sounds.
As part of the audio team at Amaze Ent., we've dreamed of audio LOD-ing for a while now -- but in a different application: for individual sound sources within a 3D game. Here, I think a different approach is needed. Currently, game engines use volume and panning only to simulate 3D position and distance. But our ears do more than that. When a sound is nearby, we hear a full-bandwidth sound. When sound sources are far away, we hear less high-end and less (or delayed) low end. So, to emulate the way the ears perceive distance, we need to add dynamic EQ curves/filters based on the distance from the camera/player. Unfortunately, this wouldn't make any game run more efficiently; in fact it would probably be pretty expensive. But it would be much more realistic, and the process would seem to be similar to graphics LOD.
Thanks for the thoughts, Stephen!
-Mark Yeend
Thanks for your comment, Mark. I think the next generation of hardware will support the kind of LOD you want to do - where you can apply filter settings based on the distance to the sound. Creative already has EAX sound processing on the PC, although I don't know of anyone who uses it.
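For what it's worth, here is a rough sketch of the distance-driven filtering Mark describes: a one-pole low-pass whose cutoff falls as the source gets farther from the listener. The distance-to-cutoff mapping is invented for illustration; a real implementation would be tuned against measured air absorption.

```cpp
// Distance-driven low-pass: nearby sources keep full bandwidth, distant ones
// get progressively darker. The mapping from distance to cutoff is assumed.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct DistanceLowPass {
    float sampleRate;
    float state = 0.0f;

    // Map distance (meters) to a cutoff: full bandwidth up close,
    // rolled off far away (assumed curve).
    float cutoffHz(float distanceMeters) const {
        float t = std::clamp(distanceMeters / 100.0f, 0.0f, 1.0f);
        return 20000.0f * (1.0f - t) + 1500.0f * t;
    }

    float process(float input, float distanceMeters) {
        float fc = cutoffHz(distanceMeters);
        // Standard one-pole coefficient: a = exp(-2*pi*fc/fs)
        float a = std::exp(-2.0f * 3.14159265f * fc / sampleRate);
        state = (1.0f - a) * input + a * state;
        return state;
    }
};

int main() {
    DistanceLowPass lp{48000.0f};
    // Feed an impulse at two distances to see the filter darken with range.
    float distances[] = {2.0f, 80.0f};
    for (float d : distances) {
        lp.state = 0.0f;
        std::printf("distance %5.1f m, cutoff %7.1f Hz, impulse response starts %.3f\n",
                    d, lp.cutoffHz(d), lp.process(1.0f, d));
    }
}
```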
For graphics, we have automatic LOD, which basically filters the visuals (with mipmaps) in the same way you are suggesting for sound, but we can also manually generate the LOD, which you sometimes need to do to get good enough quality. It would be a lot of work to remix music and/or sound effects based on distance, but sometimes it might really make a difference.
I agree with you - just getting the automatic filtering tools available would be good. Let's hope the next generation of engines and consoles supports it and the programmers expose that functionality.
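As a closing sketch of that "mipmap" analogy for sound: precompute a chain of progressively more filtered (or hand-remixed) versions of an effect offline, then pick one by distance at runtime, the way a renderer picks a mip level. The asset names and distance thresholds below are made up.

```cpp
// "Sound mipmaps": a precomputed chain of versions of one effect, each more
// heavily filtered, selected by distance. The filtering itself would happen
// offline; here only the runtime selection is sketched.
#include <cstdio>
#include <vector>

struct SoundLod {
    const char* clipName;   // precomputed, pre-filtered asset (hypothetical names)
    float maxDistance;      // farthest distance this version should be used at
};

// Pick the first level whose range covers the source, falling back to the
// coarsest one, just like choosing a mip level from a chain.
const SoundLod& pickSoundLod(const std::vector<SoundLod>& chain, float distance) {
    for (const SoundLod& level : chain)
        if (distance <= level.maxDistance) return level;
    return chain.back();
}

int main() {
    std::vector<SoundLod> explosion = {
        {"explosion_full.wav", 15.0f},      // full bandwidth, close up
        {"explosion_8khz_lp.wav", 60.0f},   // pre-filtered for mid range
        {"explosion_2khz_lp.wav", 1e9f}};   // dull rumble for far away
    float distances[] = {5.0f, 40.0f, 300.0f};
    for (float d : distances)
        std::printf("distance %6.1f -> %s\n", d, pickSoundLod(explosion, d).clipName);
}
```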