Post by Tarekith » Thu Jan 12, 2012 4:06 am
OK, well more than a few times now I've sworn that I would not be drawn into this topic again, yet I can't resist, especially since now people are referencing my tests. So, this is the LAST time I'm going to share my own PERSONAL views on the subject, since obviously so many other people are sharing nothing but their opinion as well.
When I posted my recent findings about Logic 9 and Live 8, it was mainly because I thought I had finally found a situation where they cancelled completely in a null test. Some people might remember it, most won't, but I did this exact same test with Live 7 and Logic 8 (posted here on the Live forums) and found that I could not hear a difference, yet a bit-for-bit comparison showed some small differences in the lowest 3 bits.
Fast forward a couple of years: I'd had this same conversation with some other producers I respect recently, and I thought to myself, "why not run the test again and see if something has changed in the newer versions of both apps?" This time I decided not only to make the project files from both apps available for others to use, but also to use a metering plug-in that's free, so anyone could repeat my test and find any flaws in it. I chose the Sonalksis Free-G plugin, since it's free for both Mac and PC users.
At the time the test was only for my own knowledge, and I wasn't at all looking to prove the issue one way or the other. I was just curious, nothing more. Despite what people think, I like Live, but it's not the only DAW I use, and I stand to gain nothing by standing up for them in these matters.
So I was quite surprised when I ran the test this time and Free-G showed total cancellation between the Logic and Live renders. Not at all what I had found with Live 7 and Logic 8 (from a purely analytical standpoint; again, I thought both results were audibly the same). So based on this testing, I decided to post my results again and see what other people thought, or whether there was some error in my test I had overlooked. I even repeated the test using more tracks and third-party plug-ins, based on feedback from other users on how I could modify it.
And lo and behold, there was a difference.
So I decided to post the results, since they contradicted what I had found previously. When other, apparently more accurate tools were used to compare the renders, they showed that the lowest 3 bits differed. I admitted that, and posted corrections to both my blog and the original post here on the Ableton forums.
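For anyone who wants to poke at their own renders, here's roughly the kind of bit-for-bit comparison I'm talking about. This is only a sketch in Python (the filenames are made up, and it assumes two 24-bit WAV renders of the same project); it's not the exact tool anyone in the thread used:

```python
# A minimal null-test sketch: invert-and-sum is the same as subtracting,
# so we just subtract the two renders and report the largest residual
# measured in 24-bit steps.
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

# Hypothetical filenames for the two renders of the same project.
a, sr_a = sf.read("live_render.wav", dtype="int32")   # 24-bit data lands in the top bits of int32
b, sr_b = sf.read("logic_render.wav", dtype="int32")

assert sr_a == sr_b and a.shape == b.shape, "renders must match in sample rate and length"

# Shift down so one step equals one 24-bit LSB.
a24 = a >> 8
b24 = b >> 8

residual = a24.astype(np.int64) - b24.astype(np.int64)
peak = int(np.max(np.abs(residual)))

print(f"peak residual: {peak} steps (24-bit LSBs)")
if peak == 0:
    print("bit-for-bit identical: a perfect null")
else:
    print(f"differences confined to the lowest {peak.bit_length()} bit(s)")
```

For scale, a residual confined to the lowest 3 bits of a 24-bit file sits somewhere around 120 to 140 dB below full scale.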
Here's where the science for me differs from personal opinion.
Despite this difference in the raw digital signals, under no listening environment at my disposal can I, or anyone else who's listened, hear any difference in the resulting files. For reference, I think it's important to point out that the lowest 3 bits, indeed the lowest 8 bits, of these signals are going to be discarded or rewritten by dither noise when the files are converted to CD-quality WAV files, converted to MP3, or burned to CD. Even accounting for the way dither can extend our perception of dynamic range in the best of circumstances, any differences that small will be discarded during the truncation from 24-bit to 16-bit.
That's fact. Pure digital signal processing and math fact.
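To make that concrete with a toy example (these numbers are made up, not samples from the actual renders): take two 24-bit sample values that differ only in their lowest 3 bits and truncate them to 16 bits.

```python
# Toy illustration with made-up values: two 24-bit samples that differ only
# in their lowest 3 bits collapse to the same 16-bit sample once the bottom
# 8 bits are dropped.
x = 0b101100101101010110110101   # a hypothetical 24-bit sample value
y = x ^ 0b101                     # flip bits inside the lowest 3 bits only

print(x >> 8 == y >> 8)   # True: identical after dropping the bottom 8 bits
print(abs(x - y))         # 5 steps at 24-bit, roughly 124 dB below full scale
```

And when dither is applied, the noise it adds sits around the 16-bit LSB level (roughly -96 dBFS), far above a difference living down near -120 dBFS.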
BUT…
I admit that perhaps there are things about digital audio that science can't yet measure which might override this fact. With that in mind, I'm 99.99999% sure that no one on this planet or any other can reliably pick out the differences in my test renders in a reproducible way. I'd stack every last bit of music gear I own on that statement. Theoretically different, yes; audibly different, no. Not even close.
And yet, despite all this testing and the admissions of possibilities to the contrary, I still firmly believe it's not only a moot point, but a dumb argument. I don't care what professional musicians think (in this scenario), or even what the rest of the forum thinks. To me, what matters is what we hear.
Specifically, what comes between the speakers and our ears.
If Live were to have some sort of sound, or a bias toward any audio attribute (i.e., too dull, too mono, too shallow, etc.), we would still hear that from the output of our soundcards to our monitors to our ears, and compensate appropriately. We'll use any and all audio references we have to make small corrections as we write and produce our music, so that at the end of the day, REGARDLESS OF WHAT TOOLS WE USE, our music sounds on par with anything else we hear on the same signal chain.
Part of me is extremely loath to make that statement, because I know how easily the naysayers will pounce on it as a sign of defeat or an admission of Live's shortcomings, which I do not at all admit or believe. But for me the simple fact of this argument is that both sides may be right (and both might be wrong!), and regardless, it doesn't matter. A good, experienced producer is going to work based on what they hear, and make production decisions accordingly. If something sounds dull to you, you boost the highs; if it's too bright, you boost the lower mids (or whatever), and so on.
My point is simple: it really doesn't matter what the lowest 3 bits are doing (for instance), because we all use our ears to make these decisions anyway.
Maybe some day I'll be proven wrong, and it will turn out that Ableton is doing something differently that makes it sound different (though I truly believe this is not the case). But even if that were fact, it wouldn't matter, because I'm writing and producing based on what I hear, and in that scenario this is all a pointless argument.