PDC: we are talking about a lot of things here.
1. Live tries to make sense of the routing and the (reported) latency of all devices (Live's own and VSTs) and adds delays in order to keep everything as much in sync as possible. Ideally sample-accurate, but this depends on a few factors (there's a small sketch of the basic mechanism after this list):
- Live relies on the VSTs reporting their latency correctly, which is not always the case and is beyond our control.
- Our own devices have to report the right latency. This is supposed to be the case, but people do make mistakes, and the combination of all possible sample rates and settings adds up to a lot of possible scenarios. However, I think there is no dramatic issue here, or it would be quite obvious.
- MIDI-to-audio in synthesizers is again a different beast: it is not guaranteed that an instrument receives a MIDI note and starts the sound at the same instant. Actually, that is quite rare, but it does not matter unless there is huge fluctuation, which again is not the case, at least for our own devices. You can add two instances of Operator on two tracks and they cancel out if no randomness is involved. And, surprise! The same goes for synthesizers built in Max4Live.
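For what it's worth, the mechanism itself is easy to sketch. Here is a minimal, hypothetical Python illustration of the idea (not Live's actual code): every parallel signal path gets padded with extra delay so that all paths match the slowest one, and two identical deterministic paths compensated this way will null out exactly.

    # Minimal sketch of the idea behind plugin delay compensation (PDC).
    # Not Live's implementation, just an illustration of the principle.
    def compensation_delays(reported_latencies):
        """For each parallel chain's reported latency (in samples),
        return the extra delay to insert so all chains line up."""
        worst = max(reported_latencies)
        return [worst - latency for latency in reported_latencies]

    # Three parallel chains: a dry track (0 samples), a linear-phase EQ
    # (1024 samples), a lookahead limiter (4096 samples). Everything
    # gets padded to a total of 4096 samples.
    print(compensation_delays([0, 1024, 4096]))  # -> [4096, 3072, 0]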
Now there are two other things:
We decided a long time ago that the default behavior of the PDC is to create the shortest possible latency when a track is armed, and to accept the fact that in this case other things might get out of sync, especially if there are devices with huge latencies in the sends. There are good arguments for it, and there are valid ones against it. I personally dislike it, and therefore use the "-StrictLatencyCompensation" option in the Options.txt file to avoid it (see below). However, this does not at all affect anything I would call "sound quality".
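If you want the same behavior: Options.txt is just a plain text file in Live's Preferences folder (the exact location depends on your OS and Live version, so check Ableton's documentation), with one option per line:

    -StrictLatencyCompensation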
The other thing is the fact that automation currently (if I remember correctly) is not correctly taken into account by the PDC, and might land slightly too late if there are devices with high latencies.
But the question here is: does it matter that much? If the effect is not audible, it does not matter. If you want a hard cut on volume, you'd rather cut the audio file anyway. If you want smooth fades, it does not matter. I work with automation all the time, and I never found this particular issue to be a 'real' problem. If something happens too late, well, I move it a bit earlier (by roughly the chain's latency; see the example below).
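If you want to do that nudge by hand, the offset is just the reported latency converted to time. A hypothetical back-of-the-envelope helper:

    # Hypothetical helper: how far to nudge an automation breakpoint
    # earlier, given the total reported latency of the device chain.
    def latency_ms(latency_samples, sample_rate):
        return 1000.0 * latency_samples / sample_rate

    # e.g. a lookahead limiter reporting 4096 samples at 44.1 kHz:
    print(round(latency_ms(4096, 44100), 1))  # -> 92.9 ms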
There are a lot of things that could be a lot better in Live, but nothing that has to do with "audio engine quality". From a very personal music-producing perspective, the whole sound-quality discussion is super odd. I have more headroom, a lower noise floor, higher timing precision, more voices, hundreds of EQs if I need them, more instruments than I could ever dream of, and all of this runs on my laptop. I also have a studio full of lovely early digital synths, and some analog veterans. They all sound very different from any software I am aware of, and this is why I still sample or record them - into Live.
Their "sound quality" totally sucks. They hum, they are noisy, they are out of tune, they distort like hell, they add all kinds of artefacts, they crash, they sometimes do not even boot, I can only run one instance at a given time and so on. My point is: if I record them with a good soundcard @ 96k / 24 bit in Live or any other software out there, I have a perfectly fine ideal recording of them. The problem, if there is "a problem" of DAWs is that they sound way too perfect. So, instead of being concerned about the technical quality of the DAW, I'd rather research what effect it could have to re-record my material thru a bunch of analog boxes. Because those analog pre-amps, compressors, what ever indeed do something with the material. Which is not some magic esoteric "it cancels out 100% but i still think it's wrong" stuff but can be measured e.g. with the Spectrum device in Live. It is only very hard to emulate it digitally.
Ah, I write way too much. I think Live sounds fine.