Page 2 of 3

Re: quality loss when rendering

Posted: Fri Jan 29, 2010 7:10 pm
by Captain Comeback
leedsquietman wrote:No-one should be clipping the master buss by +6dB, even listening in 32 bit float, it's a bad habit. Using normalisation has sonic consequences too, it's better to just turn your faders down so you're not clipping, IMHO

People in general need to learn to turn things down, leave proper headroom for cleaner mixes and address the overall gain/loudness issues at the mastering stage.

There are limiters with oversampling, Voxengo's Elephant 3 being a great example as it has up to 8x oversampling to reduce intersample peaks (btw - don't use 8x oversampling mode while you are tracking or just checking a mix; it will burn your CPU and add latency. I always run mixes at 2x oversampling and then just turn it to 8x when rendering). The best way to avoid clipping is still to turn down your levels, but sometimes a super-fast transient can shoot by and cause intersample clipping.
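
The intersample-peak problem is easy to demonstrate numerically. The sketch below (Python/NumPy, a generic illustration rather than Elephant's actual algorithm) upsamples a signal 8x via FFT interpolation, roughly what an oversampling limiter does internally, and shows a sine sampled between its crests reconstructing about 3 dB hotter than its sample values suggest:

```python
import numpy as np

def true_peak(x, oversample=8):
    """Estimate the reconstructed (intersample) peak by FFT upsampling,
    roughly what an oversampling limiter's meter does internally."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    # Zero-pad the spectrum: band-limited interpolation by `oversample`
    padded = np.zeros(n * oversample // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    upsampled = np.fft.irfft(padded, n * oversample) * oversample
    return np.max(np.abs(upsampled))

# A sine at fs/4, sampled midway between its crests: every sample sits
# at ~0.700, but the reconstructed waveform actually peaks at 0.99
fs = 44100
t = np.arange(1024) / fs
x = 0.99 * np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))   # ~0.700 (-3.1 dBFS)
inter_peak = true_peak(x)         # ~0.990 (-0.1 dBFS)
```

A meter that only looks at sample values would call this mix safe at -3 dBFS while the DAC reconstruction is nearly at full scale, which is exactly the overshoot the limiter's oversampling mode is there to catch.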

Captain Comeback - like Tarekith, I do demo mastering. If you want to send me an example of your work, PM me. I will use my ears (primarily) and visual analysis to determine how I would approach your issues, and send you back a remastered version with a note documenting the changes I made, for free, as a gesture of goodwill to another forum user (I usually charge for this type of service). If your .wav file is recorded at 32-bit/96 kHz this will take an age to upload, so hopefully the track won't be too long! Don't dither or normalize the file. No obligations. (Maybe also send me the previous 16-bit .wav file you processed so I can compare.)
Alright, I guess. I can send you both of the files if you want... the 32-bit/96 kHz undithered file and also the 16-bit/96 kHz dithered one. The tune is about 7:45 in length. You're also not too far away from me if you're in the GTA.

Re: quality loss when rendering

Posted: Fri Jan 29, 2010 8:29 pm
by leedsquietman
If you want to send me the files you can; use something like MailBigFile or whatever. If the file size is too much, send me a 3 or 4 minute excerpt from the track. 32/96 is your source file - have you made a mix in 16/44.1 for CD/MP3, or do you just have a 16-bit/96 kHz file? (a bit confused on that)

e-mail : leedsquietman@hotmail.com

Re: quality loss when rendering

Posted: Fri Jan 29, 2010 9:05 pm
by Captain Comeback
I made a 32/96 source file and then re-imported it back into Ableton Live to master it and dither it down to 16/96 with normalization

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 1:31 am
by TRS80
I like to wait to dither until the final mixdown.

In my case, I render the track in Ableton at 24-bit depth with dithering off. Further, I do not try to optimize levels in any way while in Ableton; I just try to make a mix that sounds nice, and I leave plenty of headroom for processing at the mastering stage.

Then I go to Sound Forge (insert any editing app here) and process with some Waves plug-ins.

My main point is: do not dither (and do not "master") until you are at the final mixdown - the mastering part of the process.
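
The "dither once, at the very end" rule is easy to sketch. Below is a generic TPDF-dither example in Python/NumPy (an illustration of the idea, not Ableton's or Sound Forge's actual dither; the scale factor and seed are arbitrary choices): add roughly one LSB of triangular noise before rounding to 16-bit, so the quantization error becomes uncorrelated noise instead of distortion. Do this at every intermediate render and the noise simply stacks up, which is why intermediate bounces stay at 24/32-bit.

```python
import numpy as np

def to_16bit(x, dither=True, seed=0):
    """Reduce float samples in [-1.0, 1.0] to 16-bit PCM.
    With TPDF dither, quantization error becomes benign, signal-
    independent noise; without it, low-level detail gets distorted."""
    rng = np.random.default_rng(seed)
    y = x * 32767.0
    if dither:
        # Sum of two uniform variables -> triangular PDF, +/-1 LSB wide
        y = y + rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(y), -32768, 32767).astype(np.int16)

# Dither exactly once, at the final mixdown to the delivery bit depth
mix = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
final = to_16bit(mix)
```

The per-sample error with dither is at most about 1.5 LSB instead of 0.5 LSB, i.e. the noise floor rises slightly - the whole point is trading a tiny bit of noise for freedom from correlated truncation distortion.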

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 2:08 am
by Captain Comeback
TRS80 wrote:I like to wait to dither until the final mixdown.

In my case, I render the track in Ableton at 24-bit depth with dithering off. Further, I do not try to optimize levels in any way while in Ableton; I just try to make a mix that sounds nice, and I leave plenty of headroom for processing at the mastering stage.

Then I go to Sound Forge (insert any editing app here) and process with some Waves plug-ins.

My main point is: do not dither (and do not "master") until you are at the final mixdown - the mastering part of the process.

Ya man that's what I already do. Why would anyone dither something that wasn't the final product?

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 9:27 am
by SubFunk
TRS80 wrote:I like to wait to dither until the final mixdown.

In my case, I render the track in Ableton at 24-bit depth with dithering off. Further, I do not try to optimize levels in any way while in Ableton; I just try to make a mix that sounds nice, and I leave plenty of headroom for processing at the mastering stage.

Then I go to Sound Forge (insert any editing app here) and process with some Waves plug-ins.

My main point is: do not dither (and do not "master") until you are at the final mixdown - the mastering part of the process.
that is the right way to go.

and leeds is absolutely correct to say to turn things down: concentrate on the quality, the harmony and the level interaction of a mix, not primarily the volume, and leave room for the mix to 'breathe' and stay dynamic.

leave the volume issue for the second stage and, in an ideal world, for another person altogether :wink:

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 11:41 am
by Khazul
leedsquietman wrote:Using normalisation has sonic consequences too
You mean beyond just changing levels??

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 11:57 am
by SubFunk
Khazul wrote:
leedsquietman wrote:Using normalisation has sonic consequences too
You mean beyond just changing levels??
yup, in theory it is not supposed to have any sonic effect, and any coder / mathematician / technician / etc. will tell you it only raises the level, because technically it is blah, blah... proven.

in reality it always has a very minimal negative effect on the sonic quality of the material (but usually still strong enough to be audible). normalisation is a NO GO! at least for me and any other decent audio engineer i know of.

that was actually one of the first things i learned using DAWs, to stay away from normalisation.

p.s. and to be really honest, i don't know why that is, because it really should not have any effect. but it does.

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 3:17 pm
by cutoutcollective
SubFunk wrote:
Khazul wrote:
leedsquietman wrote:Using normalisation has sonic consequences too
You mean beyond just changing levels??
yup, in theory it is not supposed to have any sonic effect, and any coder / mathematician / technician / etc. will tell you it only raises the level, because technically it is blah, blah... proven.

in reality it always has a very minimal negative effect on the sonic quality of the material (but usually still strong enough to be audible). normalisation is a NO GO! at least for me and any other decent audio engineer i know of.

that was actually one of the first things i learned using DAWs, to stay away from normalisation.

p.s. and to be really honest, i don't know why that is, because it really should not have any effect. but it does.
Yes, of course it "changes the sonic quality"... it changes the volume :roll: . Nothing more, nothing less. Anyone who tells you different is talking crap. Sorry, no offense man - it's a common misconception, but I'm just so tired of reading endless forum posts about made-up differences because people aren't prepared to test stuff for themselves.

Again, it's a simple test. Make a saw wave in Operator (or anything else - a saw just makes sense because there are harmonics all the way up the spectrum) and set the output of Operator to -6 dB. Check Live's master output level to make sure it is peaking at -6 dB. Now export one version where you use normalizing and one where you don't. Import both into a new project, put a Utility with +6 dB of gain on the track that wasn't normalized, and phase-invert one of them. Then see how they completely phase-cancel.
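
For anyone who doesn't want to fire up Live, the same null test can be sketched in a few lines of NumPy (a stand-in for the Operator/Utility routing, not Live itself; the tone and levels mirror the description above): peak normalization is a single linear gain, so the normalized export and the manually boosted one cancel down to the floating-point rounding floor.

```python
import numpy as np

# A test tone peaking at exactly -6 dBFS, standing in for the saw from
# Operator (any material works; a saw just has harmonics everywhere)
amp = 10 ** (-6 / 20)
tone = amp * np.sin(2 * np.pi * 0.01 * np.arange(1000))

def normalize(x, target_db=0.0):
    """Peak normalization: one linear gain and nothing else."""
    return x * (10 ** (target_db / 20) / np.max(np.abs(x)))

normalized = normalize(tone)       # the export "with normalize on"
boosted = tone * 10 ** (6 / 20)    # the +6 dB Utility version

# Phase-invert one and sum; anything left is float rounding, not sound
residual = np.max(np.abs(normalized - boosted))
```

The residual sits at the double-precision rounding floor (on the order of 1e-16), i.e. complete cancellation; repeating it with any bounced song dropped by a known number of dB gives the same result.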

You can repeat the above with any song you like by dropping its level by a set number of dBs and then doing the same thing.

Like I said in my post above, the only time normalization does anything to the "sonic quality" other than changing the volume level is when your master is going into the red - because then it pulls the level back down, so you don't get digital clipping in your exported file. Please people - test things before you go around repeating what "expert sound engineers" say... it's not hard.

By the way, I'm not saying you should be normalizing - you should be mixing with a decent amount of headroom and then mastering (or FAR better, have someone else mastering). The above was just to show what actually happens. There are far too many myths in the audio world... please, let's at least debunk the "normalizing changes the sonic quality" and the "rendered sound is different from the sound in Live" ones. :roll:

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 3:35 pm
by jhartford
shit... posted as cutoutcollective again... damn using multiple browsers and forgetting to log out...

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 3:56 pm
by Sage
cutoutcollective wrote:Yes, of course it "changes the sonic quality"... it changes the volume :roll: . Nothing more, nothing less. [...] There are far too many myths in the audio world... please, let's at least debunk the "normalizing changes the sonic quality" and the "rendered sound is different from the sound in Live" ones. :roll:
The only issue I've found with normalizing is when it renders to 0.0 dB, so any peaks hitting that are right on the edge of clipping.

Sometimes it's nice to let people believe the myths, because they're too lazy to test them out, and then see the look on their face when you've done something that goes against whatever myth and got something amazing-sounding.

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 4:39 pm
by SubFunk
peep.

Re: quality loss when rendering

Posted: Sat Jan 30, 2010 10:25 pm
by leedsquietman
Cutout Collective - chill.

Not every normalization process and algorithm works the same. If they did, there would be no issues.

For a start, a good normalization algorithm will let you choose the level to which it normalizes.

Normalizing to 0 dB can introduce intersample clipping. This happens when the normalization process is BOOSTING, rather than CUTTING, levels (this came up from some noob who had +6 dB of clipping going on). Cubase, Sound Forge and most other programs allow you to normalize to any level you choose: 0 dB, -1 dB, -6 dB, -9.45678 dB, etc.
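
As a rough sketch of what "normalize to a level you choose" means (generic peak normalization in Python/NumPy, not Cubase's or Sound Forge's actual code; the function name and example levels are illustrative): the gain is computed from the file's peak and the chosen target, and a target below 0 dBFS leaves a safety margin for intersample overshoot.

```python
import numpy as np

def normalize_to(x, target_db):
    """Peak-normalize x to an arbitrary target, e.g. -1.0 or -6.0 dBFS.
    Targets below 0 dBFS leave headroom for intersample overshoot."""
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return x.copy()          # silence: nothing to normalize
    return x * (10 ** (target_db / 20) / peak)

# A too-quiet bounce, boosted to -1 dBFS instead of full-scale 0 dBFS
quiet = 0.1 * np.sin(2 * np.pi * 0.01 * np.arange(1000))
safe = normalize_to(quiet, -1.0)
peak_db = 20 * np.log10(np.max(np.abs(safe)))   # ~-1.0 dBFS
```

With a 0 dB target the same boost would park every peak exactly at full scale, with no room left for the reconstructed waveform to overshoot between samples.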

Voicing an opinion is fine. Voicing an opinion in a derogatory way for no good reason, while trying to rubbish competent and experienced engineers such as Khazul, SF and myself, who all have two decades of experience, is something else. Now if you were Bob Katz, I would disappear back up my own ass and feel pwned. But you're not...

When working with 24 bit audio files, no mastering engineer I know uses normalization, nor any responsible mix engineer. They mix properly with headroom in the first place and use compression/limiting judiciously to raise gain levels. If you recorded something too hot, the best answer is TURN IT DOWN - rather than normalize. This develops good mixing habits.

Re: quality loss when rendering

Posted: Sun Jan 31, 2010 3:28 pm
by SubFunk
leedsquietman wrote:When working with 24 bit audio files, no mastering engineer I know uses normalization, nor any responsible mix engineer. They mix properly with headroom in the first place and use compression/limiting judiciously to raise gain levels. If you recorded something too hot, the best answer is TURN IT DOWN - rather than normalize. This develops good mixing habits.
well, you, as usual, nailed it.

nothing to add, except that it really is one of the first things you ever learn (if you learn the trade properly, that is): never use normalisation. it's only there because DAW manufacturers thought it was a good idea. it's not, and it unfortunately doesn't work like it is supposed to.

Re: quality loss when rendering

Posted: Sun Jan 31, 2010 3:50 pm
by jhartford
Right, I'm coming off as a dick and I don't mean to be (I accidentally posted as cutoutcollective...) - I have no doubt you guys have loads of experience and probably way better ears than I do... so let me explain myself.

First off - the normalization thing was brought up by me (yes - I was that "noob" who talked about the +6 dB clipping). The reason I brought it up was NOT because I thought it was good mix practice, but because I was trying to hazard a guess at why the OP was hearing something different once he rendered his file (he spoke of dramatically less bass presence or something). I brought up normalization because it was the only thing that could potentially explain the difference he was hearing. I'll say it again: IF he was clipping the output AND he was normalizing, there would no longer be that clipping in his rendered file, and so there would be a difference. People were talking about dither being the difference. I'm yet to meet someone who can hear whether a track has been dithered or not (and again, that's not to say it shouldn't be done), so it's unlikely that that is what was causing the difference. As the saying goes, "you never hear anyone saying: great mix, but pity about that dither".

I then went on to say that if this was not the case (i.e. there was NO normalization taking place), there was an easy way to show yourself that there was no difference between the unrendered and rendered versions of the song: simply render the song, reimport it, phase-invert it and see if the two phase-cancel. This is a far more accurate method than trying to hear the differences between the two, as you overcome all the potential psychoacoustic and volume-difference problems you could encounter.
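
The render-and-reimport check can also be done programmatically. The sketch below (Python stdlib wave plus NumPy; the helper names, temp file and 16-bit format are just for illustration) round-trips audio through a WAV file and measures the null-test residual; at 16-bit, cancellation is limited only by the quantization step, around -96 dB.

```python
import os
import tempfile
import wave

import numpy as np

def write_wav(path, x, fs=44100):
    """Write float samples in [-1.0, 1.0] as a 16-bit mono WAV."""
    pcm = np.clip(np.round(x * 32767), -32768, 32767).astype('<i2')
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

def read_wav(path):
    """Read a 16-bit mono WAV back to floats in [-1.0, 1.0]."""
    with wave.open(path, 'rb') as w:
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype='<i2')
    return pcm / 32767.0

# "Source" vs "render": the same audio round-tripped through a file
src = 0.5 * np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)
path = os.path.join(tempfile.mkdtemp(), 'render.wav')
write_wav(path, src)
rendered = read_wav(path)

# Phase-invert and sum: the residual sits at the 16-bit rounding floor
residual = np.max(np.abs(rendered - src))
```

If a real source-vs-render pair doesn't null like this, something in the chain (normalization, a level mismatch, clipping) actually changed the signal - which is exactly the point of the test.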

In response, all I got was a bunch of posts attacking normalization. Again, I wasn't suggesting that that was the solution. In fact, I was more saying that it was potentially the OP's problem - i.e. why it sounded different after rendering.

Now, I stand by my post on normalization (though point taken - I could have said it in a nicer way, and for that I'm sorry... no offence meant - I was really taking a swipe at the many "sound engineers" I know who have taken a short course, barely mixed an album, and then go around saying things like "I only mix in Pro Tools because the summing engine sounds so much better"). The point I was trying to make was that it's easy to test the effect processes have on sound, and yet so few people actually bother to test the many myths that are around. Even if I was Bob Katz - don't just accept what I say... prove me wrong (though if I was Bob Katz that would be rather difficult).