theophilus wrote:Therefore, I'm a little confused... I did a similar test to others, where I put a long-latency effect (in my case, LiquidSonics Reverberate - it has selectable latency between 0 and 8192 samples; I chose 4096) on a simple 4/4 kick beat. I had two identical tracks (using duplicate), except one had Reverberate on, in 100% dry mode, with long latency. The audio outputs were exactly on top of each other, so Ableton was compensating for that long latency. That answers one question I had - at least it looks like Live is compensating the audio latency on effects too.
Now place a timing-dependent effect after that LiquidSonics reverb (e.g., a synced delay, filter, noise gate, arpeggiator, or anything that polls Live's timing clock), and you'll notice that the audio itself gets out of sync as well. This was confirmed by Ableton back in Jan 2011 (see this comment), after the community had been pulling its hair out trying to figure out what was going on.
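The behavior described above can be sketched in code. The following is a hypothetical simulation, not Ableton's implementation; the latency, beat length, and gate behavior are invented figures for illustration. It shows why PDC can realign plain audio but cannot rescue a tempo-synced effect placed after a latent plugin:

```python
# Hypothetical simulation -- not Ableton's code.  Figures (latency,
# samples per beat) are invented examples.

LATENCY = 4096        # upstream plugin's reported latency, in samples
BEAT = 11025          # samples per beat (e.g. 240 BPM at 44.1 kHz)

def latent_plugin(x):
    """100% dry effect that still delays the audio by LATENCY samples."""
    return [0.0] * LATENCY + x[:len(x) - LATENCY]

def synced_gate(x, start):
    """Opens for the first quarter of every beat, judged from the raw
    host transport position `start` -- not from the audio's true
    musical position, which trails by any upstream latency."""
    return [s if (start + i) % BEAT < BEAT // 4 else 0.0
            for i, s in enumerate(x)]

# A kick at the start of every beat.
dry = [1.0 if i % BEAT == 0 else 0.0 for i in range(BEAT * 4)]

# Case 1: latent plugin alone.  The host's PDC shifts the output
# earlier by the reported latency, and the kicks line up again.
wet = latent_plugin(dry)
compensated = wet[LATENCY:] + [0.0] * LATENCY
assert compensated == dry          # plain audio: PDC succeeds

# Case 2: synced gate AFTER the latent plugin.  The gate reads the
# uncompensated transport, so it opens LATENCY samples too early
# relative to the delayed kicks, muting them.  PDC cannot repair
# this afterwards, because the error is baked into the rendered audio.
gated = synced_gate(latent_plugin(dry), start=0)
gated_compensated = gated[LATENCY:] + [0.0] * LATENCY
assert gated_compensated != dry    # timing-dependent chain: PDC fails
```

Shifting the final output (as PDC does) fixes case 1 exactly, but in case 2 the gate has already fired against the wrong beat positions before the shift happens.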
I often encounter sound design situations where a timing-dependent effect (or several) inserted after a latency-inducing effect (or instrument) is desirable. Often the response is a suggestion to simply not work that way. Yet in each of the other DAWs that I use for production, both the audio and the automation are compensated when plugins are used in this manner. The issue isn't with how one works creatively; it's with whether Live is the right tool for the job.
Over time, my opinion has drifted toward Live simply not being the right tool for certain kinds of work. It excels at its core strength, manipulating audio. And as a basic MIDI tool for composing and getting ideas down, it offers an intuitive and efficient workflow. Depending on the style of music, one can do an entire album project or a video score entirely within Live, from concept to mastering, and I have. But it just doesn't handle advanced MIDI or precision automation and timing all that well. And I genuinely feel that Ableton doesn't yet have a strong enough desire to see it happen, not at this time anyway (see this comment).
However, I still feel that it is important and worth being addressed, which is why I'm participating in the discussion. Quoting my post from page 14...
Akshara wrote:Ghost Mutt wrote:how many is a 'few' and what are they?
1) Any timing-dependent VST or AU (e.g., tempo-synced delays, filters, gates, arps, etc.) placed in series after any other plugin with latency;
2) Any VST or AU plugin with sufficient processing latency (e.g., high-end EQs and dynamics processors);
3) As of Live 8.3, any of the built-in Live Devices in this list.
There are two issues at play here:
1) Automation being moved audibly out of grid sync due to inserting latency-inducing plugins; the return audio is compensated by Live's PDC. (2 and 3 above)
2) Automation and return audio being audibly out of grid sync due to inserting timing-dependent plugins in series after latency-inducing plugins; the return audio is not compensated by Live's PDC. (1 above)
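The second issue is, in principle, fixable on the host side: a host can report to each plugin the musical position of the audio actually reaching it, offset by the latency accumulated upstream, rather than the raw transport position. The following is a speculative sketch of that idea only; it is not Live's code, nor any specific DAW's, and all class names and figures are invented:

```python
# Speculative host-side sketch: hand each plugin a latency-adjusted
# transport position.  All names and figures here are invented.

BEAT = 11025   # samples per beat (invented figure)

class LatentDry:
    """100% dry plugin that reports and introduces 4096 samples of latency."""
    latency = 4096
    def __init__(self):
        self.buf = [0.0] * self.latency
    def process(self, block, pos):
        self.buf += block
        out, self.buf = self.buf[:len(block)], self.buf[len(block):]
        return out

class SyncedGate:
    """Opens for the first quarter of each beat, by host-reported position."""
    latency = 0
    def process(self, block, pos):
        return [s if (pos + i) % BEAT < BEAT // 4 else 0.0
                for i, s in enumerate(block)]

def process_chain(chain, block, pos, adjust_clock):
    upstream = 0
    for p in chain:
        # Key idea: when adjust_clock is on, tell the plugin the musical
        # position of the audio actually reaching it, which trails the
        # transport by the latency accumulated upstream of it.
        block = p.process(block, pos - upstream if adjust_clock else pos)
        upstream += p.latency
    return block, upstream   # host delays other tracks by `upstream`

kicks = [1.0 if i % BEAT == 0 else 0.0 for i in range(BEAT * 4)]

# With clock adjustment, the gate stays locked to the delayed kicks,
# and the usual PDC output shift realigns everything.
out, lat = process_chain([LatentDry(), SyncedGate()], kicks, 0, adjust_clock=True)
aligned = out[lat:] + [0.0] * lat
assert aligned == kicks

# Without it, the gate fires against the raw transport and mutes the kicks.
out2, lat2 = process_chain([LatentDry(), SyncedGate()], kicks, 0, adjust_clock=False)
assert (out2[lat2:] + [0.0] * lat2) != kicks
```

Whether a given host actually does this per-plugin clock offsetting is an implementation detail; the sketch only illustrates why compensating the audio alone is not sufficient.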
The following is a clear and succinct explanation from Ableton from Jan '11.
viewtopic.php?p=1247289#p1247289
Hope that clarifies things, some.
*Edited the first and last entries for accuracy. Timing-dependent effects only lose polling of the transport timing when inserted after plugins which induce latency.
Following on from this, I wanted to add a personal thought on why this is so important.
Because this issue only happens when using very specific types of processing in a certain order, the micro-management needed to avoid it is overly complicated. If the issue itself were less complicated, we'd be able to work around it and manage it ourselves in a more reasonable manner. Once things like automation nodes get visibly off the grid, it becomes very difficult to keep track, in one's mind or on paper, of what needs to be manually compensated for in a large project, and where.
Finding the latency of every plugin in a project is not a casual process. It is possible to create a database of every plugin's latency, which plugins poll the timing transport, and what they will do when combined in series. We could then take that information and delay tracks forward, or manually scoot them, to line up with any delayed audio from other tracks. However, there is still the problem that moving an entire track of automation in Live is not handled in the same manner as moving audio, and often has to be done node by node.
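The bookkeeping described above amounts to the same arithmetic a host's PDC performs internally. A minimal sketch, assuming a hand-built latency database (all plugin names and latency figures below are invented for illustration):

```python
# Hypothetical sketch of manual latency bookkeeping.  The plugin
# names and latency figures are invented; a real database would be
# built by measuring or looking up each plugin.

latency_db = {            # reported latency per plugin, in samples
    "Reverberate": 4096,
    "LinearEQ": 2048,
    "SyncedDelay": 0,     # timing-dependent but latency-free
}

tracks = {                # each track's serial effect chain
    "Drums": ["Reverberate", "SyncedDelay"],
    "Bass":  ["LinearEQ"],
    "Keys":  [],
}

# Latency of a serial chain is the sum of its plugins' latencies.
chain_latency = {t: sum(latency_db[p] for p in ps) for t, ps in tracks.items()}
worst = max(chain_latency.values())

# Each track must be scooted later by (worst - its own latency) so
# that all tracks arrive together.
manual_shift = {t: worst - l for t, l in chain_latency.items()}
print(manual_shift)   # {'Drums': 0, 'Bass': 2048, 'Keys': 4096}
```

The arithmetic itself is trivial; the burden is maintaining the database, tracking which plugins poll the transport, and applying the shifts to automation node by node, which is exactly the micro-management being argued against.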
The point is that managing all of this requires a complicated process and a considerable amount of time and effort. When other applications are able to make this a complete non-issue, or at least bring the complexity down to a more manageable level, that has a noticeable effect on the project: time and resources go into other areas of the production, and it's not something one worries about. Having trust in one's tools is a big deal, and can influence confidence and trust in other areas of the process, such as composition and performance. For some artists, creation can be a delicate process.
That is why I feel that this is important, and is why I would like to see this addressed in a future version of Live.