All about Headroom ..

Share your favorite Ableton Live tips, tricks, and techniques.
supster
Posts: 2133
Joined: Mon Sep 20, 2004 6:26 am
Location: Orlando FL

Post by supster » Sun Jul 10, 2005 4:21 pm

DJRetard wrote:
Electronic music isn't the most dynamic of music, but compression is used all the time as an effect, or for a punchy sound. It has nothing to do with the dynamics of the audio.
Sure it does. When you compress the signal you're effectively pushing down the maximum volume and raising the lowest parts of the signal relative to it, which reduces dynamic range.
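To put rough numbers on it, here's a toy Python sketch (my own, not any real plugin's code; the threshold and ratio are just made-up example settings) of the textbook static compression curve, showing how a 4:1 ratio shrinks the gap between a quiet hit and a loud one:

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static downward-compression curve: anything over the threshold is reduced by the ratio."""
    over = max(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

quiet, loud = -40.0, -3.0                          # input peaks in dBFS
print(compress_db(loud) - compress_db(quiet))      # ~24.3 dB out, versus 37 dB in

Add make-up gain on top of that and the quiet parts come up while the peaks stay put, which is exactly the "reduced dynamic range" I mean.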

As far as "normalizing" goes ... what I'm talking about (not sure about anyone else) is getting the peak value of individual elements as close to 0 dB as possible, in the sample itself, through recording or by raising the gain or the RMS if necessary.

NOT necessarily on the mixer channel, where what's coming out of the master is the most important thing.
--
NEW SPECS: Athlon 4200+ dual; A8N-SLI m/b; Win XP Home SP2; 1 GB RAM; 2x 7200 RPM HDD: 1 internal, 1 Firewire 800 (Firewire is project data drive); M-Audio Triggerfinger

josh 'vonster' von; tracks and sets
http://www.joshvon.com

Harris.Andrew
Posts: 164
Joined: Mon Oct 04, 2004 4:50 am

Post by Harris.Andrew » Sun Jul 10, 2005 5:50 pm

"Im not against normalising audio, but I really see little point in doing it. If some audio is to low I dont normalise. I import in to sound Forge and add gain."

See, this is where I would just skip the handiwork and all the jumping around between sequencer, editor and mixer, and just normalize. If I normalize and then lower the fader in the mixer, it's exactly the same as opening it in Wavelab and adjusting the gain, but much easier to tweak.
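To be clear about the "exactly the same" claim: in a floating-point mixer the gains just multiply, so normalizing and then pulling the fader back down by the same amount lands you where a single gain tweak would have. A quick throwaway Python check (my own toy signal, nothing to do with Live's internals):

import numpy as np

x = np.random.default_rng(1).uniform(-0.25, 0.25, 44100).astype(np.float32)

norm_gain = 1.0 / np.max(np.abs(x))     # normalize: bring the peak up to 0 dBFS
fader = 1.0 / norm_gain                 # then pull the mixer fader down by the same amount

print(np.max(np.abs(x * norm_gain * fader - x)))   # effectively zero -- only float rounding remains

The win isn't in those numbers; it's that every element then starts from the same known reference, so the fader position actually tells you something.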

Anyway, d/l'd the NIN demo . . . seriously, it's post-production, it's packaged, it's bounced down from another program where I imagine they actually used the faders rather than having every fader set at 0. And I bet they normalized.

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Tue Jul 12, 2005 9:10 pm

I didn't want to force my opinion too heavily on this normalising issue. I pointed to the Nine Inch Nails track on this forum as an example of why the issue is irrelevant, and it fell on deaf ears.

If you normalise a kick drum, for example, and then you want to EQ it, you're going to have to lower the input of the plugin to allow some breathing space. If all your audio is hitting maximum dB then you're just giving yourself problems. Is this concept difficult to grasp?

If you normalise all your audio then your individual faders are going to HAVE to be low, and all your plugins' input gain stages will probably have to be reduced. Or your master fader will be very low.

If you think normalising improves the sound quality or allows for a better end result, that again is wrong.

Furthermore, what is the point in shoving your audio through another processing pass? OK, I agree the consensus these days is that normalising is non-damaging to the audio. But still, it's another process which isn't needed.

I think this comes up because Ableton has an option to normalise when rendering your master. That is fine and can be used without any worries. However, I still wouldn't do it.


My rule of thumb is this:

recording at 16-bit = record at high levels
recording at 24-bit = record at mid levels
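The back-of-an-envelope reasoning behind that (my own sketch in Python, nothing more authoritative than that): each bit of a linear PCM recording is worth roughly 6 dB of dynamic range, so 24-bit leaves you a huge amount of room even at mid levels.

import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # ~96.3 dB
print(round(dynamic_range_db(24), 1))   # ~144.5 dB
# Peaking around -18 dBFS on a 24-bit recording still leaves roughly 126 dB above the
# quantisation floor -- more than 16-bit gives you even when you record as hot as possible.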

The bottom line is that normalising WON'T IMPROVE your audio quality. Anyone who says it does is talking drivel.

The only time I might use it is on a stereo master before burning to CD.

Come on guys, this is basic recording knowledge, isn't it?

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Tue Jul 12, 2005 9:32 pm

Supster,

Trust me on this one. YOU don't need to normalise your audio.

Take Impulse, for example. That thing can go seriously loud. If I drop in a kick drum from a sample CD recorded at -0.01 dB, I have to bring the level of that thing down a lot.

:)

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Wed Jul 13, 2005 3:08 am

Harris.Andrew wrote:
Anyway, d/l'd the NIN demo . . . seriously, it's post-production, it's packaged, it's bounced down from another program where I imagine they actually used the faders rather than having every fader set at 0. And I bet they normalized.

I bet each track was soloed and bounced down at the level it was mixed at. I highly doubt all the audio was normalised either, and furthermore I seriously doubt they mixed the track completely in a DAW. Find me a pro engineer who normalises all their audio. It's just crazy to believe that, IMO.


Here's a quote from Bob Katz. In case you don't know anything about him, he's a very well-known mastering engineer and a true audiophile type who definitely knows his stuff, and far more than anyone on here.

BOB KATZ
Do not change gain (changing gain deteriorates sound by forcing truncation of extra wordlengths in a 16-bit workstation). Do not normalize (normalization is just changing gain).


If you want to read the whole thing, go here:

http://www.digido.com/portal/pmodule_id ... page_id=27

And another quote from Bob Katz:
The Myth of "Normalization"
Digital audio editing programs have a feature called "Normalization," a semi-automatic method of adjusting levels. The engineer selects all the segments (songs), and the computer grinds away, searching for the highest peak on the album. Then the computer adjusts the level of all the material until the highest peak reaches 0 dBFS. This is not a serious problem esthetically, as long as all the songs have been raised or lowered by the same amount. But it is also possible to select each song and "normalize" it individually. Since the ear responds to average levels, and normalization measures peak levels, the result can totally distort musical values. A compressed ballad will end up louder than a rock piece! In short, normalization should not be used to regulate song levels in an album. There's no substitute for the human ear.
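To see what he means in numbers, here's a rough Python illustration (mine, not Katz's): two clips normalised to the same 0 dBFS peak can sit at wildly different average (RMS) levels, and the average is what the ear actually tracks.

import numpy as np

def normalize_peak(x):
    """Scale so the loudest single sample hits 0 dBFS."""
    return x / np.max(np.abs(x))

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

t = np.arange(44100) / 44100.0
ballad = 0.2 * np.sin(2 * np.pi * 220 * t)      # dense, steady material
rock = 0.05 * np.sin(2 * np.pi * 220 * t)
rock[:50] = 1.0                                 # quiet body with one sharp transient peak

print(round(rms_db(normalize_peak(ballad)), 1))  # ~ -3 dB RMS: sounds loud
print(round(rms_db(normalize_peak(rock)), 1))    # ~ -26 dB RMS: sounds much quieter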

Harris.Andrew
Posts: 164
Joined: Mon Oct 04, 2004 4:50 am

Post by Harris.Andrew » Mon Jul 18, 2005 10:27 pm

DJRetard wrote: [1] If you normalise a kick drum, for example, and then you want to EQ it, you're going to have to lower the input of the plugin to allow some breathing space. If all your audio is hitting maximum dB then you're just giving yourself problems. Is this concept difficult to grasp?

[2] If you normalise all your audio then your individual faders are going to HAVE to be low, and all your plugins' input gain stages will probably have to be reduced. Or your master fader will be very low.

[3] I bet each track was soloed and bounced down at the level it was mixed at. I highly doubt all the audio was normalised either, and furthermore I seriously doubt they mixed the track completely in a DAW. Find me a pro engineer who normalises all their audio. It's just crazy to believe that, IMO.

[4]
BOB KATZ
Do not change gain (changing gain deteriorates sound by forcing truncation of extra wordlengths in a 16-bit workstation). Do not normalize (normalization is just changing gain).


If you want to read the whole thing, go here:

http://www.digido.com/portal/pmodule_id ... page_id=27

And another quote from Bob Katz:

[5]
The Myth of "Normalization"
Digital audio editing programs have a feature called "Normalization," blah blah blah

[6] Take Impulse, for example. That thing can go seriously loud. If I drop in a kick drum from a sample CD recorded at -0.01 dB, I have to bring the level of that thing down a lot.
I don't know why I have to argue this . . . it bugs me though, gotta scratch the itch.

[1] Jesus, dude, you mean I'd have to lower the input gain on the plugin to compensate for the plugin adding gain? Hey wait . . . doesn't that . . . make . . . sense? Doesn't that . . . give you more control and knowledge about what the plugin does to the sound?

Well, I love to cut and hate to boost too much. That's part of the reason I like normalizing: you can always cut, then normalize.

If the sarcasm seems harsh . . . you started it :D

[2] That's the point. The faders are going to be low, but accurate: -18 dB means exactly that, -18 dB. If the faders are all at 0, they're high but inaccurate; with the same sound, 0 dB on the fader really means -18 dB, and how would I ever know it's -18 dB?

[3] Yeah, that's what I said . . . it was bounced down out of something else and has no relation to an actual, production-level mixing environment. You had directed me to this as an example of a different workflow without normalization . . . this isn't a workflow. And I seriously doubt they mixed this without a DAW, especially considering:

Quote from the NIN website (Mr. Reznor himself, I guess?):

All of "with teeth" was recorded using Pro Tools.  This file differs from the others in that it is doesn't start
out "mixed" in any way.  We (Digidesign and I) decided that this format would be the appropriate one for
you to try your hand at mixing, so the session "comes up" pretty raw.  I've also included some alternate
parts and takes (in playlists) that were not included in the final version for you to experiment with.

I think this supports my previous conjectures on the topic.

[4] I've read it before. He's right about truncation, but . . . what Bob doesn't tell you is that quantization error is always present regardless of gain changes or normalizing, and that normalizing only adds noise below the quantization error already inherent in the file. (There may be a contrived counter-example, but any time you're normalizing something that peaks around -3 dB or above, this holds.) Also, this is relevant in far fewer situations than he implies.
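Quick sanity check on that, in a Python sketch of my own (a plain 440 Hz tone standing in for real audio, and simple rounding standing in for whatever a given editor actually does on re-quantisation):

import numpy as np

fs = 44100
q = 1.0 / 32768.0                                   # 16-bit quantisation step

sig = 0.7 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
stored = np.round(sig / q) * q                      # what's already on disk at 16-bit

gain = 1.0 / np.max(np.abs(stored))                 # normalize the peak up to 0 dBFS
ideal = stored * gain                               # infinite-precision result
requant = np.round(ideal / q) * q                   # written back to 16-bit, no dither

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

print(round(rms_db((stored - sig) * gain), 1))   # inherent quantisation noise, now boosted: ~ -98 dBFS
print(round(rms_db(requant - ideal), 1))         # new error added by normalizing: ~ -101 dBFS

So the truncation is real, it just isn't the loudest bad thing in the file, which is all point [4] amounts to.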

[5] What's with Bob? He hates normalizing, I guess. Well, he's right in this pretty narrow context, which is: normalization should not be used to regulate song levels in an album. Thanks for the tip, Bob!

[6] Actually you have to lower the kick to -6 dB, or the output on Impulse to -6 dB, and you'll be back at 0 on the channel. IIRC they did this on the NIN demo.

supster
Posts: 2133
Joined: Mon Sep 20, 2004 6:26 am
Location: Orlando FL

Post by supster » Tue Jul 19, 2005 12:59 am

DJRetard wrote: Supster,

Trust me on this one. YOU don't need to normalise your audio.

Take Impulse, for example. That thing can go seriously loud. If I drop in a kick drum from a sample CD recorded at -0.01 dB, I have to bring the level of that thing down a lot.

:)
Well, I heard the track you posted recently and the sound quality is dope.

I still can't help reasoning this way, along with .Andrew:

1 - Most of what we are doing is digital audio created with digital means, and/or the samples are pre-recorded and already close to 0 dB.

That means the noise floor is relatively low or non-existent anyway, or you are already close to 0 dB to begin with.

So it's not so much of an issue to raise the peak level of the samples to some benchmark level, as it would be for, say, a vocalist into a mic or a mic'ed guitar amp, where you are also raising audible mic noise etc. So ...

2 - We are saying that, given this is the case, there's nothing to say you shouldn't make that peak close to 0 dB, especially if half of your samples are already there anyway. Because ...

3 - This gives you an accurate reading for all of the gain-changing knobs in your chain: the gain on the clip, the gains on your plugins, the gains on the tracks, and the gain on the master. So ...

4 - Impulse as an example: the BD sample is close to 0 dB, and you want room for EQ, compression, the other drum elements and all the rest of the track in your mix.

So keep the gain of the cell at 0, and the gain of the Impulse master below 0, so you have room to add your other kit elements; the sum total of these should read close to 0 at the Impulse out.

Then bring the gain down on that Live channel to below 0, to leave headroom for additional EQ, effects and compression if desired.

Also, the sum total of this Impulse channel plus all the other elements in your set should land somewhere below 0 dB at the master (for an original track), to leave room for any multiband EQ and compression on the master channel.

So you are visually and audibly working from a standard that you can get an easier grip on. You're still leaving headroom, you're just not too concerned with it at the sample level, for the reasons given above. (Rough numbers for that chain are sketched just below.)
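Something like this, in rough Python bookkeeping (the trim values are made up for illustration; this isn't anything Live exposes, just the dB arithmetic for the chain above):

import math

def db_to_lin(db): return 10 ** (db / 20.0)
def lin_to_db(x): return 20 * math.log10(x)

# normalized kit pieces inside one Impulse/drum channel, worst case: all peaks hit at once
sample_peak_db = {"kick": 0.0, "snare": 0.0, "hat": 0.0}     # samples normalized to ~0 dBFS
cell_trim_db = {"kick": -6.0, "snare": -10.0, "hat": -16.0}  # per-cell gains inside the instrument
channel_fader_db = -6.0                                      # Live channel fader: headroom for EQ/FX

instrument_out = sum(db_to_lin(sample_peak_db[k] + cell_trim_db[k]) for k in sample_peak_db)
print(round(lin_to_db(instrument_out), 1))                      # ~ -0.2 dBFS at the instrument out
print(round(lin_to_db(instrument_out) + channel_fader_db, 1))   # ~ -6.2 dBFS heading into the master

Real peaks rarely line up like that, so the actual headroom is usually better, but because everything starts from a known 0 dB reference, every number in the chain means what it says.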

.. Correct? This makes sense to me. But if you are saying that you'll get better (more authentic and less botched) results by keeping the initial gain level of your samples at a lower value ...

then what value do you use, and why? Forgive me if you've already said; if you did, I didn't quite follow the explanation ... it's probably me, not you ;)

(And yes, mad props to Bob Katz, but isn't he coming from a heavily analogue background? I would think his way of thinking would be influenced by that ... plenty of us are, like I said, working almost purely synthetically and/or using pre-recorded samples that are already normalized anyway.)
.
--
NEW SPECS: Athlon 4200+ dual; A8N-SLI m/b; Win XP Home SP2; 1 GB RAM; 2x 7200 RPM HDD: 1 internal, 1 Firewire 800 (Firewire is project data drive); M-Audio Triggerfinger

josh 'vonster' von; tracks and sets
http://www.joshvon.com

koshak
Posts: 5
Joined: Tue Jul 19, 2005 12:46 am

Post by koshak » Tue Jul 19, 2005 7:42 pm

Great debating in this forum. Talk about perspective.

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Wed Jul 20, 2005 12:48 pm

All of "with teeth" was recorded using Pro Tools.  This file differs from the others in that it is doesn't start
out "mixed" in any way.  We (Digidesign and I) decided that this format would be the appropriate one for
you to try your hand at mixing, so the session "comes up" pretty raw.  I've also included some alternate
parts and takes (in playlists) that were not included in the final version for you to experiment with.

I think this supports my previous conjectures on the topic.
As an ex Pro Tools user I know how Digi's PR machine operates. If you read the Digizine carefully, you will notice that the bands and artists obviously use Pro Tools, and usually they will mention "we mixed through an SSL" or some other high-end console.
The NIN track could very well have been recorded and mixed in the box, but I can hear there's some tasty-sounding compression going on in that track which sounds nothing like plugins.

Look, I'm no expert on any of this at all. I'm not even a talented amateur when it comes to stuff like this. But if we're still talking about normalising all audio, I'm simply saying that neither you nor Supster needs to do it.

No offence, but I'm going to take Bob Katz's word over yours. The man has a wealth of experience in this field. There's also another guy called 'Nika' who has a forum over at PSW. Go there, PM him or post and ask about normalising audio.

The point is, what is the point in doing it?

And from my experience, plugins sound better when they have room to breathe, just like outboard gear.

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Wed Jul 20, 2005 12:50 pm

koshak wrote: Great debating in this forum. Talk about perspective.

Yeah, well, your comment helped. I learned much from that. Why not debate?

DJRetard
Posts: 473
Joined: Fri Jun 17, 2005 8:48 am

Post by DJRetard » Wed Jul 20, 2005 1:22 pm

supster wrote:
DJRetard wrote: Supster,

Trust me on this one. YOU don't need to normalise your audio.

Take Impulse, for example. That thing can go seriously loud. If I drop in a kick drum from a sample CD recorded at -0.01 dB, I have to bring the level of that thing down a lot.

:)
Well, I heard the track you posted recently and the sound quality is dope.

I still can't help reasoning this way, along with .Andrew:

1 - Most of what we are doing is digital audio created with digital means, and/or the samples are pre-recorded and already close to 0 dB.

That means the noise floor is relatively low or non-existent anyway, or you are already close to 0 dB to begin with.

So it's not so much of an issue to raise the peak level of the samples to some benchmark level, as it would be for, say, a vocalist into a mic or a mic'ed guitar amp, where you are also raising audible mic noise etc. So ...

2 - We are saying that, given this is the case, there's nothing to say you shouldn't make that peak close to 0 dB, especially if half of your samples are already there anyway. Because ...

3 - This gives you an accurate reading for all of the gain-changing knobs in your chain: the gain on the clip, the gains on your plugins, the gains on the tracks, and the gain on the master. So ...

4 - Impulse as an example: the BD sample is close to 0 dB, and you want room for EQ, compression, the other drum elements and all the rest of the track in your mix.

So keep the gain of the cell at 0, and the gain of the Impulse master below 0, so you have room to add your other kit elements; the sum total of these should read close to 0 at the Impulse out.

Then bring the gain down on that Live channel to below 0, to leave headroom for additional EQ, effects and compression if desired.

Also, the sum total of this Impulse channel plus all the other elements in your set should land somewhere below 0 dB at the master (for an original track), to leave room for any multiband EQ and compression on the master channel.

So you are visually and audibly working from a standard that you can get an easier grip on. You're still leaving headroom, you're just not too concerned with it at the sample level, for the reasons given above.

.. Correct? This makes sense to me. But if you are saying that you'll get better (more authentic and less botched) results by keeping the initial gain level of your samples at a lower value ...

then what value do you use, and why? Forgive me if you've already said; if you did, I didn't quite follow the explanation ... it's probably me, not you ;)

(And yes, mad props to Bob Katz, but isn't he coming from a heavily analogue background? I would think his way of thinking would be influenced by that ... plenty of us are, like I said, working almost purely synthetically and/or using pre-recorded samples that are already normalized anyway.)
Thanks for the props on my tune. Much appreciated :)

Like I said to Harris.Andrew, I'm no expert on this by any stretch of the imagination. Perhaps if Robert Henke popped in and gave his opinion, that would be great. The tune I posted had no normalising at all. There's no particular reason for this; it's just my way of working. I've simply never normalised audio while multitracking.

Bob Katz's background is analog and digital, but he's actually very pro-digital (check his website). I've been through his site and read a lot of it, but it's way over my head. Clearly, though, he says that any processing you put your audio through can degrade the quality. Obviously this is a debatable point.

There was a post on PSW where he was saying this exact thing, and a software developer came on all confident and told him he was talking turd. Bob Katz is no fool, and by the end of the debate the software developer had to admit that there are things that go on inside DAW software that even the developers don't fully understand. It was all about numbers, missing bits and so on. Over my head again.


QUESTION
Supster, why is normalising such a big deal for you? Has someone told you the quality of the recording will be better?

Andrew, do you feel the same as Supster?

Harris.Andrew
Posts: 164
Joined: Mon Oct 04, 2004 4:50 am

Post by Harris.Andrew » Wed Jul 20, 2005 6:59 pm

Just one note first, regarding normalizing and sound quality: if you're using Ableton's normalize option when rendering, you will not have the same truncation issues as rendering and then normalizing outside of Live. Search the forums; the Abes have posted on it. It really is a no-worries kind of thing.

I do strongly agree with Supster: I have the best time mixing individual elements when they're normalized.

I don't normalize for sound quality directly, but in my personal experience the convenience, control, and freedom it adds have really had an impact on the final mix at the end of the day.

BTW, I found that Bob Katz debate, I think - the 48-bit dithering one? IMHO, run for the hills, lock yourself in a bomb shelter and wait for the storm to pass, because unless you're recording Yo-Yo Ma in outer space and want to hear his eyebrows rustle as he plays . . . :D

supster
Posts: 2133
Joined: Mon Sep 20, 2004 6:26 am
Location: Orlando FL

Post by supster » Wed Jul 20, 2005 7:44 pm

DJRetard wrote: QUESTION
Supster, why is normalising such a big deal for you? Has someone told you the quality of the recording will be better?
Well, it's important because the volume level of your individual samples is the baseline your entire mix starts from.

After that point you're changing the gain through any number of means: EQ, compression, effects, individual track volumes ... then at the master, potentially EQ and compression *again*.

So setting a baseline to start from seems ultimately really important to the end result, just like the kind of bricks you use, and how large they are, when you build the foundation of your house.

Where you start the samples from affects where you set the gains on every subsequent element, just like I described a few posts back.

Not having a good handle on this is what I think screws up the end result on a lot of people's stuff: it's distorted, or too crowded, or just lacking in space and/or definition.

The original post that started the thread really was an open question to explore the issue, not a call to arms on any particular point .. thanks, I got a lot of good feedback.
--
NEW SPECS: Athlon 4200+ dual; A8N-SLI m/b; Win XP Home SP2; 1 GB RAM; 2x 7200 RPM HDD: 1 internal, 1 Firewire 800 (Firewire is project data drive); M-Audio Triggerfinger

josh 'vonster' von; tracks and sets
http://www.joshvon.com

supster
Posts: 2133
Joined: Mon Sep 20, 2004 6:26 am
Location: Orlando FL

Post by supster » Wed Jul 20, 2005 7:47 pm

DJRetard wrote: Perhaps if Robert Henke popped in and gave his opinion, that would be great.

and yes, it would :)

Really curious about that; the Monolake stuff is a benchmark for sound quality in minimal arrangements, for sure ... in my world anyway.
.
--
NEW SPECS: Athlon 4200+ dual; A8N-SLI m/b; Win XP Home SP2; 1 GB RAM; 2x 7200 RPM HDD: 1 internal, 1 Firewire 800 (Firewire is project data drive); M-Audio Triggerfinger

josh 'vonster' von; tracks and sets
http://www.joshvon.com

leisuremuffin
Posts: 4721
Joined: Tue Apr 06, 2004 12:45 am
Location: New Jersey

Post by leisuremuffin » Wed Jul 20, 2005 8:04 pm

supster wrote: Sure it does. When you compress the signal you're effectively pushing down the maximum volume and raising the lowest parts of the signal relative to it, which reduces dynamic range.
If you're trying to use a compressor to add punch, I suggest you set it up to *expand* the dynamic range. Of course the intent of a compressor is to limit the dynamic range, but you don't have to use it that way. Try using a longer attack than release, with a lot of make-up gain and a high ratio, on something percussive.
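Roughly what that does, as a toy Python envelope simulation (the drum envelope and the compressor numbers are all invented for illustration; it's just to show why the hit ends up popping out in front of the tail):

import numpy as np

fs = 44100
smp = lambda ms: int(fs * ms / 1000)

# toy drum hit, in dB: a 5 ms transient at 0 dBFS, then a -12 dB body
level_db = np.full(smp(200), -12.0)
level_db[:smp(5)] = 0.0

thresh, ratio, makeup = -30.0, 8.0, 10.0
atk, rel = np.exp(-1.0 / smp(30)), np.exp(-1.0 / smp(5))            # attack slower than release

target = -np.maximum(level_db - thresh, 0.0) * (1.0 - 1.0 / ratio)  # static gain reduction
gr = np.zeros_like(target)
for n in range(1, len(gr)):
    coeff = atk if target[n] < gr[n - 1] else rel                   # attack when clamping harder
    gr[n] = coeff * gr[n - 1] + (1.0 - coeff) * target[n]

out_db = level_db + gr + makeup
print(round(out_db[:smp(5)].max() - out_db[smp(50)], 1))      # hit-to-body gap out: ~25 dB
print(round(level_db[:smp(5)].max() - level_db[smp(50)], 1))  # hit-to-body gap in: 12 dB

The slow attack lets the first few milliseconds through almost untouched, then the body gets squashed and the make-up gain brings the whole thing up, so the transient ends up further above the tail than it started: more punch, and more dynamic range rather than less.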


Sorry, off topic, but I like to share.


-lm
TimeableFloat SendInfo

Post Reply