Rendered Latency
So I'm exporting my stems to be mixed by an engineer, but on certain stems that have a lot of plugins on them, after they're rendered, if I bring them back into the project they are not in sync with the original stems. There is some latency on the rendered stems. I've tried turning the buffer down all the way to 32 and it doesn't make a difference. On other stems that have no plugins there is no latency. Is this normal?
Any way to fix this, or should I freeze all stems with plugins before rendering? That seems really time consuming. I thought Live had a way to offset plugin latency, especially when rendering.
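For reference, the idea behind plugin delay compensation is simple: each plugin reports its latency in samples, and the host shifts the processed audio earlier by the total so it lines up with unprocessed tracks. A minimal sketch (the plugin names and latency figures here are made up for illustration):

```python
def compensate(samples, plugin_latencies):
    """Drop the leading samples of latency introduced by a plugin chain,
    shifting the processed audio back into alignment."""
    total = sum(plugin_latencies)   # total reported latency, in samples
    return samples[total:]

# Hypothetical chain: a look-ahead limiter (64 samples) plus a
# linear-phase EQ (512 samples) pad the render with 576 silent samples.
processed = [0.0] * 576 + [0.5, 0.25, 0.1]
aligned = compensate(processed, [64, 512])
print(aligned[:3])   # [0.5, 0.25, 0.1]
```

This only works if every plugin reports its latency accurately; a plugin that under-reports will leave exactly the kind of residual offset described above.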
Re: Rendered Latency
Make sure you turn off warping on the clips when you bring them back into a new project, also that the start marker for each clip is all the way to the beginning of the clip.
tarekith
https://tarekith.com
Re: Rendered Latency
The audio driver buffer has no effect on freezing/rendering - those operations are done internally by Live and don’t use the audio driver. As Tarekith says, the problem might be Live re-warping the tracks when loaded into a new project and not quite getting it right.
Live 10 Suite, 2020 27" iMac, 3.6 GHz i9, MacOS Catalina, RME UFX, assorted synths, guitars and stuff.
Re: Rendered Latency
Exactly how are they not in sync? Are you aligning the beginnings of the waveforms? There's also a track delay function in each track's mixer section.
Re: Rendered Latency
Yes, I made sure warping is not on (I always have auto-warp long samples unchecked) and I've made sure that they start at the right spot. After some more experimenting it seems like the rendered stems are slightly ahead when lining them up with the original stems. The waveforms are just slightly off, but when I play both stems at the same time (for example Vocal 1 original and Vocal 1 rendered) there is no audible phasing; it just gets louder like a double, even though I can see that the waveforms are not perfectly aligned with each other. I tried a null test and it just gets quieter. I feel like it would null perfectly if it wasn't for the difference in volume between the original and rendered file.
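To make the null test concrete: you sum one signal with a polarity-inverted copy of the other and measure what's left. Identical signals cancel to exactly zero; a level mismatch leaves a quieter residual, which matches what's described above. A toy sketch with made-up sample values:

```python
import math

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def null_test(a, b):
    """Sum a with the polarity-inverted b; a perfect cancellation
    leaves a residual RMS of exactly 0."""
    residual = [sa - sb for sa, sb in zip(a, b)]
    return rms(residual)

original = [0.5, -0.25, 0.125, -0.0625]
rendered = [0.5, -0.25, 0.125, -0.0625]     # identical -> full null
louder = [s * 1.1 for s in original]        # 10% hotter -> partial null

print(null_test(original, rendered))    # 0.0 (complete null)
print(null_test(original, louder) > 0)  # True: level mismatch leaves residue
```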
So it seems like even though they look like they are not in time with each other (meaning the waveforms of original vs. rendered don't match 100%), they actually are? Which is really strange, because I've never seen this before, and other stems with no processing on them do not have this problem. The vocal stems I'm rendering do have a lot of processing on them, and even with freezing those stems the waveforms come out the same.
I'm attaching an image so you can see what I mean. The bottom file is the original stem that's frozen, and the top is that same stem rendered and brought back into the project, no warping, right where it should be. The waveforms are slightly different and the top one is slightly ahead in time, even though it doesn't sound like it is?
https://imgur.com/Fw7xTlA
Also, from my testing last night I discovered that freezing a stem has some sort of impact on the sound of the rendered file. Meaning a frozen stem and a non-frozen stem (when rendered and brought back in) of exactly the same channel do not null with each other. So there are some slight differences between freezing an audio track before exporting vs. not freezing and exporting. From my testing it seemed like it was better to freeze the tracks with heavy processing on them before exporting, and not because of CPU.
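One way to check whether a rendered file really is shifted in time, rather than just looking different, is to cross-correlate it against the original and find the lag with the strongest match. A brute-force sketch for short signals (the sample data is invented for illustration):

```python
def best_offset(a, b, max_lag=32):
    """Find the integer sample lag that best aligns b to a by
    brute-force cross-correlation. A positive result means b is
    early by that many samples. Small signals only."""
    def corr(lag):
        pairs = [(a[i], b[i - lag]) for i in range(len(a))
                 if 0 <= i - lag < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

sig = [0.0, 0.0, 1.0, 0.5, -0.5, 0.0, 0.0, 0.0]
shifted = sig[2:] + [0.0, 0.0]       # same audio, 2 samples early
print(best_offset(sig, shifted))     # 2: rendered copy leads by 2 samples
```

If the detected lag is zero but the waveforms still look different, the difference is in the processing (level, saturation, phase response), not in timing.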
Re: Rendered Latency
Here’s what I think (famous last words) is going on.
The original audio track waveform shows the audio as recorded - that is, before any processing done to that audio by plugins on the track.
When the track is rendered/frozen, the waveform shown is for that track as it now is - i.e. after the plugins/processing have been applied.
Anything that doesn’t quite null out could be for a few reasons. One is that one or more plugins have some random functioning - e.g. they emulate hardware that doesn’t always do exactly the same thing twice or some low-level noise is being added in by the plugin. Waves’ emulations of vintage hardware can do this, for example.
Or a plugin is being pushed into emulated overdrive/distortion - a lot of plugins are intended to do this if the input gain is high enough.
Another reason might be that the plugin doesn't come up with exactly the same result every run-through because of rounding errors, something in the code that doesn't always return the same result, etc.
A very likely reason is imperfect gain matching, so that after processing the output gain of the processing chain is not the same as the gain of the audio before it is processed. That can be fixed by being very careful to check the input and output levels of every stage of processing or by ignoring it if it’s not causing an audible negative issue.
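That last point can be checked numerically: scale one file so its RMS level matches the other before nulling, which removes the level mismatch from the comparison. A minimal sketch with invented sample values:

```python
import math

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def gain_match(processed, reference):
    """Scale processed audio so its RMS matches the reference,
    making a fair null/level comparison possible."""
    g = rms(reference) / rms(processed)
    return [s * g for s in processed]

reference = [0.4, -0.4, 0.4, -0.4]
processed = [0.5, -0.5, 0.5, -0.5]     # same shape, hotter level
matched = gain_match(processed, reference)
print([round(s, 3) for s in matched])  # [0.4, -0.4, 0.4, -0.4]
```

If gain-matched files still don't null, the residual is genuine processing difference (saturation, noise, non-determinism) rather than level.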
Re: Rendered Latency
Yeah, I totally agree with that, and everything you're saying makes sense. I am using some Waves vintage emulations as well as distortion and saturation plugins. However, I feel like if they were completely level matched they would null completely; that's why on my null tests it just got a lot quieter but didn't null completely.
TLW wrote: ↑Wed Sep 18, 2019 11:56 pm
Here's what I think (famous last words) is going on.
However, what I'm not understanding is why it appears to be out of time and not synced with the original file's waveform, even though it sounds like it is. I've never seen this before, but I've also never done this much processing and then reimported the stems to compare with the originals. I just wanted to make sure everything was in time before I send the files off to the engineer, so when I saw that it appeared like the rendered files were out of sync it really threw me off.
I guess if it sounds alright then it's just what you're saying, I just thought even with the processing that the waveforms would match up completely.
It's also weird that the same stem rendered frozen vs. unfrozen doesn't null against itself.