Logic correctly transmits MIDI clock info to VSTs even when the chain is altered by latency, and Pro Tools now fully compensates automation. That's more than enough proof it can be done; and even if no challenger did it better, that's no excuse not to set the example, get a better sound and innovate when you're a market leader in Ableton's league, with such a price tag. One could notice how quiet the manual was about this until the big threads appeared; since then they have added those small lines they had 'forgotten to mention'. They seem to be pro musicians with decent musical taste, but they don't seem very concerned by this problem, which has been there since v5 and has been reported many, many times. Ask a guitarist if he would accept a 512-sample offset (roughly 12 ms at 44.1 kHz) between the vibrato he is doing and the notes he is playing; he will certainly feel something is off. I'm very curious whether Bitwig made the same mistake or took the time to implement proper remote compensation at the same time as PDC; I do hope they made the smart choice. Compensated automation in the session would be a win-win.

pencilrocket wrote: I think this is inevitable in any host. Tell us which sequencers have a solution and how they would avoid it. Just shift the automation to make it sound fine. Use your ears, as DJs do. Nothing difficult.
As for the DJ comparison: imagine a DJ whose crossfader was affected by various random, evolving latencies. Would that be pro, musical, or fun to play with?
Sure, you can easily adjust manually by ear, the equivalent of the DJ trick you mention, shifting every time you automate. But what about later evolution and maintenance? Do you mean the chain has to stay frozen and never be altered again?
In practice, to keep the track he is DJing intact, he has to go into every SINGLE lane he carefully automated, in ALL the clips of a track, to keep or restore that time-balanced, fitting, groovy, subtle thing (the thing that makes the difference) he probably took a long time to search for and write, because that subtle thing gets slowly but surely drifted, scrambled and destroyed over time, simply because he added or removed a single VST effect afterwards. Would he do this every time he changes a VST? Is that seriously a good, doable, decent workflow?
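Contrast that with the bookkeeping a host could do on its own. Here is a minimal Python sketch built on hypothetical structures (Track, AutomationPoint — invented names, nothing from Live, Bitwig or any real SDK): the host remembers which chain latency the automation was written against and, whenever a plugin is added or removed, shifts every breakpoint by the delta. The sign convention depends on how a given host lines audio up under PDC; only the delta bookkeeping is the point here.

[code]
# Hypothetical sketch: host-side automation compensation bookkeeping.
# Not any DAW's real code; names and structures are invented for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AutomationPoint:
    sample_pos: int     # breakpoint position on the timeline, in samples
    value: float        # normalized parameter value, 0.0..1.0


@dataclass
class Track:
    plugin_latencies: List[int]                         # reported latency of each plugin, in samples
    automation: List[AutomationPoint] = field(default_factory=list)
    _compensated_for: int = 0                           # chain latency the points are aligned to

    def __post_init__(self) -> None:
        # Baseline: whatever latency the chain had when the automation was written.
        self._compensated_for = self.chain_latency()

    def chain_latency(self) -> int:
        return sum(self.plugin_latencies)

    def recompensate(self) -> None:
        """Shift every breakpoint by the change in chain latency,
        so the user never has to re-touch the lanes by hand.
        (Whether the shift is + or - depends on the host's PDC alignment;
        the point is that the host, not the user, applies the delta.)"""
        delta = self.chain_latency() - self._compensated_for
        for p in self.automation:
            p.sample_pos += delta
        self._compensated_for = self.chain_latency()


# Usage: write a move against a 64-sample chain, then add a 512-sample plugin.
track = Track(plugin_latencies=[64])
track.automation.append(AutomationPoint(sample_pos=44100, value=0.8))

track.plugin_latencies.append(512)      # e.g. a look-ahead limiter added later
track.recompensate()                    # every point moves by the 512-sample delta
print(track.automation[0].sample_pos)   # 44612: the written feel vs. the audio is preserved
[/code]

That is all 'compensated automation' would mean: one delta applied automatically by the host, instead of the per-lane manual surgery described above.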
The fact is, nobody notices it enough or is ready to do such insane manual correction jobs, so everybody stops caring, lets it go, and ends up with a more or less sloppy live set that is still 'OK', perhaps wondering why it doesn't sound as tight as hardware. People mostly prefer to run a basic summing null test between DAWs to compare the 'sound engine' (every computer performs the same basic float32 additions with the same result... awesome... DAWs all sound the same, blah blah). But PDC without automation compensation clearly messes with the sound, and there are plenty of simple tests to prove it. Some people may not be affected, depending on how demanding they are about tightness, their musical style, their VST use and so on, but the problem is technically there and proven, and for a lot of users it is a very, very serious problem. Where the 'MUSICAL ENGINE' is concerned, it is the highest-priority problem to fix, in my opinion: get it tight at last, without pain.
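To illustrate why the usual comparison proves nothing here: a summing null test nulls perfectly in any host, while a timing test on an uncompensated automation move exposes the problem immediately. A minimal numpy sketch, with illustrative numbers only (a 1 kHz sine, a 512-sample offset) and no claim about any DAW's actual render path:

[code]
# Illustrative numpy sketch: summing null test vs. automation timing test.

import numpy as np

sr = 44100
t = np.arange(sr, dtype=np.float32) / sr
signal = np.sin(2 * np.pi * 1000 * t).astype(np.float32)

# 1) Summing null test: float32 addition is deterministic, so two hosts
#    doing the same adds on the same stems cancel to exactly zero.
stem_a = signal * np.float32(0.5)
stem_b = signal * np.float32(0.25)
mix_1 = stem_a + stem_b
mix_2 = stem_a + stem_b                    # "the other DAW" doing identical math
print(np.max(np.abs(mix_1 - mix_2)))       # 0.0 -- so the engines "sound the same"

# 2) Timing test: the same fade applied 512 samples late (an uncompensated
#    chain latency) no longer nulls against what was actually written.
def apply_fade(x, start, length=2048):
    gain = np.ones_like(x)
    gain[start:start + length] = np.linspace(1.0, 0.0, length, dtype=np.float32)
    gain[start + length:] = 0.0
    return x * gain

intended = apply_fade(signal, start=22050)
drifted = apply_fade(signal, start=22050 + 512)   # shifted by the chain latency
print(np.max(np.abs(intended - drifted)))         # clearly non-zero residual
[/code]

Same 'engine', same math, different timing: exactly the difference the null-test crowd never measures.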