MIDI delay recording

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Tue Apr 10, 2007 4:40 pm

Synthbuilder wrote:To illustrate this further - use a VSTi with a compressor immediately after it and play a few notes. The compressor's lookahead will be compensated by the recording process and the MIDI data will be recorded sufficiently 'late'. In playback the notes will sound as they did when you recorded them, because PDC will compensate for the inherent delay in the VST instrument and effect.

But now remove the compressor. Does the MIDI note data move forward in the track to compensate for the lack of the compressor? In other words, does the track now sound as if you recorded it without the compressor? Currently in Live it does not. The notes will sound late, since Live is assuming the compressor is still there.
Hi,

No, that is not how it works. Once the notes have been recorded - say a note sits on the grid on the second beat of the bar - then whatever you put on the track, device-wise, you'll hear the sound on the second beat of the bar, thanks to PDC. Or am I misunderstanding you?

Regards,
Amaury
Ableton Product Team

Synthbuilder
Posts: 523
Joined: Thu Nov 10, 2005 8:42 am
Location: Cumbria, UK
Contact:

Post by Synthbuilder » Tue Apr 10, 2007 5:06 pm

Amaury wrote:Once the notes have been recorded, say, if a note is on the grid on the second beat of the bar, whatever you put on the track, device-wise, you'll hear the sound on the second beat of the bar, thanks to PDC.
Hi Amaury,

Yes, this is the case. PDC works to make the note sound at the same time as when it was recorded.

But the note is recorded later than you hit the keyboard, because Live prints the note data where the note is heard. The amount of lateness depends on the VSTi and VST used.

Now remove or alter the track's devices and the MIDI data will stay in the same place. But remember that the note data was placed late because of a particular set of devices - remove those devices, or use an external MIDI output, and the sound plays out of time.

Yes, Live's PDC will ensure the MIDI data is played back correctly, but that note data has been put in the wrong place for the new choice of channel devices. It is only correct for the original choice of instruments.

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Tue Apr 10, 2007 5:20 pm

Synthbuilder wrote:
Amaury wrote:Once the notes have been recorded, say, if a note is on the grid on the second beat of the bar, whatever you put on the track, device-wise, you'll hear the sound on the second beat of the bar, thanks to PDC.
Hi Amaury,

Yes, this is the case. PDC works to make the note sound at the same time as when it was recorded.

But the note is recorded later than you hit the keyboard, because Live prints the note data where the note is heard. The amount of lateness depends on the VSTi and VST used.

Now remove or alter the track's devices and the MIDI data will stay in the same place. But remember that the note data was placed late because of a particular set of devices - remove those devices, or use an external MIDI output, and the sound plays out of time.

Yes, Live's PDC will ensure the MIDI data is played back correctly, but that note data has been put in the wrong place for the new choice of channel devices. It is only correct for the original choice of instruments.
Hi,

As it is implemented now, it works like this: the MIDI notes you record fall where you hear the sound while you are recording (talking about recording a software instrument). So if you record, for instance, a kick sound, the MIDI notes will be recorded where you hear the sound while recording. Later on, when you listen to your take, you hear the kick sound where you heard it while recording. Then, replacing the instrument with anything else, or removing or adding effects, won't change anything: you'll still hear the sound where you recorded it.
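The behaviour described above can be sketched in a few lines of Python - a hypothetical model, not Ableton's actual code; all names and numbers are illustrative:

```python
def recorded_position(key_press_ms, chain_latency_ms):
    """Live (as described) stamps the note where the sound was heard:
    the key press time plus the device chain's total latency."""
    return key_press_ms + chain_latency_ms

def audible_position(note_ms, chain_latency_ms, pdc=True):
    """On playback, PDC pre-triggers the note by the chain's latency,
    so the sound lands exactly on the stored note position."""
    trigger = note_ms - chain_latency_ms if pdc else note_ms
    return trigger + chain_latency_ms

# Record a kick through a chain with 10 ms total latency:
note = recorded_position(1000.0, 10.0)          # stored at 1010.0 ms
# Swap devices afterwards -- with PDC the sound stays where it was heard:
assert audible_position(note, 10.0) == 1010.0   # original chain
assert audible_position(note, 3.0) == 1010.0    # lighter chain, same result
```

Whatever chain latency is in place at playback time, PDC cancels it out, which is why adding or removing effects after the fact does not move the audible result.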

Let me know if that helps.

Regards,
Amaury
Ableton Product Team

Synthbuilder
Posts: 523
Joined: Thu Nov 10, 2005 8:42 am
Location: Cumbria, UK
Contact:

Post by Synthbuilder » Tue Apr 10, 2007 6:59 pm

[Screenshot: several MIDI tracks recorded simultaneously from one source, compared against the master synth's audio]

This shows several tracks, all recorded from one source at the same time. The master keyboard is also producing a sound, which is recorded. Look at the differences in the MIDI timing compared to the audio output of the master synth.

As expected, the channel with monitoring set to OFF is the only one in time with the audio.

But note the two VSTi tracks at the bottom. Look at the Imposcar one with the Live Compressor II on it. It has a 1 ms lookahead enabled - see how the MIDI is late, corresponding to the lateness of the audio.

If I remove the compressor, the Imposcar now plays later than it should, since when I originally recorded it the audio output was 1 ms behind.

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Tue Apr 10, 2007 7:06 pm

Hi,

These screenshots are essentially right. But we have to separate the cases.

- Working with an external synth: we need to provide a good solution that allows using external synths without added latency. It is being worked on, and will hopefully make it in, but I can't promise when. It is not yet ready.

- Working with a soft synth: the whole point of the discussion above is that you are hearing the sound delayed while recording it. So we assume that the human mind is able to adapt to that, and to adapt the gesture in order to play the sound in time. In which case the notes are written in the right place.

Silent tests don't mean a lot in the real world. Your test is valid, but as said in point 1, it is acknowledged that there can be improvements for working with external devices.

Do a simple test, as proposed before in the thread, in a real-world situation, using a soft synth. Record yourself playing a percussion sound, to the metronome. After the fact, place as many plugins as you want on the track; the notes will still sound where you heard them while recording.

The biggest problem is that one must keep the latency as small as possible.

Let me know if it makes sense, and thanks a lot for your contribution.

Regards,
Amaury
Ableton Product Team

Synthbuilder
Posts: 523
Joined: Thu Nov 10, 2005 8:42 am
Location: Cumbria, UK
Contact:

Post by Synthbuilder » Tue Apr 10, 2007 7:21 pm

Amaury wrote: Working with a soft synth: the whole point of the discussion above is that you are hearing the sound delayed while recording it. So we assume that the human mind is able to adapt to that, and to adapt the gesture in order to play the sound in time. In which case the notes are written in the right place.
Hi Amaury,

I don't think my main point is being understood. I appreciate why Live records the notes in the way that it does. The screenshot was simply there to provide a frame of reference for the notes being played by the two VSTi.

Look carefully at the two VSTi. One is ahead of the other by 1 ms. This is because the compressor is being compensated for in the Imposcar track. Take the compressor away, or use the MIDI track to play an external instrument, and the MIDI data is no longer valid for the new instrument.

Live compensates for the particular combination of instruments in use when you recorded them, but this then does not allow new combinations to be used with the same timing.
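This objection can be put as simple arithmetic - a sketch with hypothetical numbers, using the 1 ms compressor figure from the screenshot. The note stamp includes the latency of the chain in use at record time, so a destination that PDC cannot compensate, such as an external MIDI output, fires the note late by exactly that amount:

```python
def stamp_ms(press_ms, record_chain_latency_ms):
    """The note is written where the sound was heard while recording."""
    return press_ms + record_chain_latency_ms

# Recorded through Imposcar plus a compressor with 1 ms lookahead:
note = stamp_ms(press_ms=0.0, record_chain_latency_ms=1.0)

# Routed afterwards to an external synth: the MIDI output simply fires
# at the stamp, so the note leaves 1 ms later than the original key press.
midi_out_lateness = note - 0.0
assert midi_out_lateness == 1.0
```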

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Tue Apr 10, 2007 7:38 pm

Synthbuilder wrote:
Amaury wrote: Working with a soft synth: the whole point of the discussion above is that you are hearing the sound delayed while recording it. So we assume that the human mind is able to adapt to that, and to adapt the gesture in order to play the sound in time. In which case the notes are written in the right place.
Hi Amaury,

I don't think my main point is being understood. I appreciate why Live records the notes in the way that it does. The screenshot was simply there to provide a frame of reference for the notes being played by the two VSTi.

Look carefully at the two VSTi. One is ahead of the other by 1 ms. This is because the compressor is being compensated for in the Imposcar track. Take the compressor away, or use the MIDI track to play an external instrument, and the MIDI data is no longer valid for the new instrument.

Live compensates for the particular combination of instruments in use when you recorded them, but this then does not allow new combinations to be used with the same timing.
Hi,

I understand what you are saying, but I think we aren't really on the same page! :)

Sure, the more devices you have on a track, the more latency there is. But again, let's talk about a real-world situation. If you record a track, either with or without the compressor, you'll play in a certain manner, and will hear the sound more or less delayed, depending on whether there is a compressor or anything else. The thing is, you will adapt your playing - or at least that's how I see things.

The key thing to understand is that nobody records with monitoring ON without listening to what they record. Silent tests don't demonstrate anything but the fact that there is more latency if there are more devices.

Now, whatever the latency on a track, when you record it in a real-world situation, you're trying to place the notes so that the sound is satisfying, meaning in time. This is nearly impossible if the latency is too big; we all agree about that. In that logic, the notes get recorded exactly where you hear the sound. And on playback, you can add or remove devices; it does not matter.

I feel like I'm repeating myself, and that's not what you're after. Sorry about that. But I can't express it in different words for now.

Live does not compensate for the latency at all if you are listening to what you are playing while you are recording. We assume the human mind does, as a pianist does when playing his instrument. I am really, really sure that a pianist listens to what he is playing, and doesn't even notice he is hitting the keys earlier than the sound comes. That's the whole point of the discussion with popslut.

So, if you have 5 ms of latency for the instrument, you'll have a tendency to adapt and play the notes 5 ms early, so they will effectively be recorded on time, 5 ms later than you hit the keys (but where you heard the sound, thus you were a happy musician giving a good performance). If you have 7 ms on another track, you'll adapt and play around 7 ms early. Same thing: the notes will be 'on time'.
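The adaptation argument above reduces to one line of arithmetic - a sketch using the 5 ms and 7 ms figures from the post; whether players actually adapt this way is exactly what is being debated in the thread:

```python
def stored_note_ms(target_ms, chain_latency_ms, strikes_early_by_ms):
    """The player strikes early by some amount; the note is stored where
    the sound is heard: strike time plus the chain's latency."""
    return (target_ms - strikes_early_by_ms) + chain_latency_ms

assert stored_note_ms(1000, 5, 5) == 1000   # fully adapted: on time
assert stored_note_ms(1000, 7, 7) == 1000   # adapts again on a slower chain
assert stored_note_ms(1000, 7, 0) == 1007   # plays to the metronome: 7 ms late
```

The last line anticipates the counter-case raised later in the thread: a player following the metronome rather than the sound ends up with notes stored late by the full chain latency.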

That's at least how it works and is intended for now.

Would you mind performing a real-world experiment, as suggested before?

Regards,
Amaury
Ableton Product Team

Tarekith
Posts: 19121
Joined: Fri Jan 07, 2005 11:46 pm
Contact:

Post by Tarekith » Tue Apr 10, 2007 8:02 pm

Amaury wrote:Well, now it seems Ableton sees it as a problem. But I still think it is worth a 'real-world' approach, and not a purely intellectual one. Have you tried the little test I proposed? I would really love to know what people are more comfortable with in the real world. I would love to know if you are able to hit the keys on time while you hear the sound coming late, or if your brain is able to adjust to hit the keys so you hear the sound at the right place.

regards,
Amaury
Sorry Amaury, since selling my TI last week I've been without a controller keyboard to use with Live, thus I could not do the test. I have a Novation SL37 coming later this week though, and I'll try it then. Though having to get used to a new keyboard might throw things off for me too.

popslut
Posts: 1056
Joined: Sun Oct 22, 2006 4:58 pm

Post by popslut » Tue Apr 10, 2007 9:24 pm

Amaury wrote:I am really, really sure that a pianist listens to what he is playing, and doesn't even notice he is hitting the keys earlier than the sound comes. [...]

So, if you have 5 ms of latency for the instrument, you'll have a tendency to adapt and play the notes 5 ms early, so they will effectively be recorded on time, 5 ms later than you hit the keys (but where you heard the sound, thus you were a happy musician giving a good performance). If you have 7 ms on another track, you'll adapt and play around 7 ms early. Same thing: the notes will be 'on time'.

That's at least how it works and is intended for now.
Amaury - are you telling us that you are right and every user of every other DAW on the planet is mistaken?

Despite the indisputable fact that:
Popslut wrote:No other DAW exhibits this behaviour. Not one.

Logic, Pro Tools, Sonar, Cubase, Nuendo, Digital Performer, Sequoia, Reason, Fruityloops.

Every single one will have those midi notes more or less on the beat markers/grid.

Furthermore, if you go to their user forums, you will not find ONE post about the way they deal with latency compensation because they deal with it silently, invisibly, logically and satisfactorily.

They just work.

Nobody writes 13 page threads about how they prefer to compensate manually by playing ahead.

With every other DAW it simply isn't an issue.

Synthbuilder
Posts: 523
Joined: Thu Nov 10, 2005 8:42 am
Location: Cumbria, UK
Contact:

Post by Synthbuilder » Wed Apr 11, 2007 7:50 am

Amaury wrote: Sure, the more devices you have on a track, the more latency there is. But again, let's talk about a real-world situation. If you record a track, either with or without the compressor, you'll play in a certain manner, and will hear the sound more or less delayed, depending on whether there is a compressor or anything else. The thing is, you will adapt your playing - or at least that's how I see things.
Ah, I see what you mean. Yes, this is where I have misunderstood.

I will be compensating while I play, but the notes should land where I want them so long as I play by ear. Adding and changing VSTi shouldn't make a difference, because Live will compensate for their delays on playback. But Live is compensating while we play, since it puts the MIDI where you hear the note and not where you actually play it. This isn't PDC, but it's still a form of latency compensation. You can see why we need some explanation about all this.

But then there are reasons behind me being more confused than I should be. I often record onto two tracks at once, each one with a different VSTi or external synth. This allows me to build up composite sounds which are rich and dynamic. Perhaps one will be a sampler, the other a synth. I'm simply not used to my DAW software putting the MIDI data in different places on tracks that should be playing the same thing.

Then there's my other source of confusion. I play a lot with a looping MIDI sequencer, which I drive off Live's MIDI clock. All is fine with playback, although I know I'm going to get latency.

But when I go to record several tracks at once, each with a different VSTi on a different MIDI channel, the MIDI data is all over the place, even though the sequencer is chucking out MIDI data where I want it. Fortunately, most of what I play is quantised to 16ths, but not always.

I tried some real-world tests. I would indeed bring my playing forward to compensate for the latency in most cases. However, I noticed something when I tried some more complex pieces where the instrument I was playing wasn't so loud in the overall mix. I tended then to follow the beat and not the sound of the piano I was playing. Then my notes were appearing late.

I guess if you can hear yourself well, and you are listening to what you are playing, then you will compensate. But if you are not listening and are playing by the beat alone, then the timing is off.

This surely makes the case for making the record-time compensation that Live applies an optional feature.

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Wed Apr 11, 2007 8:04 am

popslut wrote:
Amaury wrote:I am really, really sure that a pianist listens to what he is playing, and doesn't even notice he is hitting the keys earlier than the sound comes. [...]

So, if you have 5 ms of latency for the instrument, you'll have a tendency to adapt and play the notes 5 ms early, so they will effectively be recorded on time, 5 ms later than you hit the keys (but where you heard the sound, thus you were a happy musician giving a good performance). If you have 7 ms on another track, you'll adapt and play around 7 ms early. Same thing: the notes will be 'on time'.

That's at least how it works and is intended for now.
Amaury - are you telling us that you are right and every user of every other DAW on the planet is mistaken?

Despite the indisputable fact that:
Popslut wrote:No other DAW exhibits this behaviour. Not one.

Logic, Pro Tools, Sonar, Cubase, Nuendo, Digital Performer, Sequoia, Reason, Fruityloops.

Every single one will have those midi notes more or less on the beat markers/grid.

Furthermore, if you go to their user forums, you will not find ONE post about the way they deal with latency compensation because they deal with it silently, invisibly, logically and satisfactorily.

They just work.

Nobody writes 13 page threads about how they prefer to compensate manually by playing ahead.

With every other DAW it simply isn't an issue.
Hi,

No, please don't put these words in my mouth. I've written one thing I'm certain about, which is that pianists adapt to the latency of their instrument without even thinking of it. I'm then explaining how it is intended for now. I'm asking the person to try it and tell me. Nothing else.

The first 5 pages of this thread are about someone who thought something was done in Live 5 and removed in Live 6, which was wrong. Since then it's been a conversation between 5 people at the most. That's why I'm asking people for their thoughts, too.

I'm not right, I'm not wrong; I'm asking for more people's views, and be assured we take this very seriously.

Regards,
Amaury
Last edited by Amaury on Wed Apr 11, 2007 8:14 am, edited 1 time in total.
Ableton Product Team

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Wed Apr 11, 2007 8:13 am

Synthbuilder wrote:
Amaury wrote: Sure, the more devices you have on a track, the more latency there is. But again, let's talk about a real-world situation. If you record a track, either with or without the compressor, you'll play in a certain manner, and will hear the sound more or less delayed, depending on whether there is a compressor or anything else. The thing is, you will adapt your playing - or at least that's how I see things.
Ah, I see what you mean. Yes, this is where I have misunderstood.

I will be compensating while I play, but the notes should land where I want them so long as I play by ear. Adding and changing VSTi shouldn't make a difference, because Live will compensate for their delays on playback. But Live is compensating while we play, since it puts the MIDI where you hear the note and not where you actually play it. This isn't PDC, but it's still a form of latency compensation. You can see why we need some explanation about all this.

But then there are reasons behind me being more confused than I should be. I often record onto two tracks at once, each one with a different VSTi or external synth. This allows me to build up composite sounds which are rich and dynamic. Perhaps one will be a sampler, the other a synth. I'm simply not used to my DAW software putting the MIDI data in different places on tracks that should be playing the same thing.

Then there's my other source of confusion. I play a lot with a looping MIDI sequencer, which I drive off Live's MIDI clock. All is fine with playback, although I know I'm going to get latency.

But when I go to record several tracks at once, each with a different VSTi on a different MIDI channel, the MIDI data is all over the place, even though the sequencer is chucking out MIDI data where I want it. Fortunately, most of what I play is quantised to 16ths, but not always.

I tried some real-world tests. I would indeed bring my playing forward to compensate for the latency in most cases. However, I noticed something when I tried some more complex pieces where the instrument I was playing wasn't so loud in the overall mix. I tended then to follow the beat and not the sound of the piano I was playing. Then my notes were appearing late.

I guess if you can hear yourself well, and you are listening to what you are playing, then you will compensate. But if you are not listening and are playing by the beat alone, then the timing is off.

This surely makes the case for making the record-time compensation that Live applies an optional feature.
Hi,

thanks for your detailed description.

It is true that this is not suitable for recording multiple tracks at once, as Live shuts off delay compensation on the recorded track for the instrument, so that the instrument is subject not to the whole latency of the set but only to its own latency. That is to ensure the minimum latency for the instrument you're playing.
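The exemption described above can be sketched like this - hypothetical function and numbers; the real scheduling is internal to Live:

```python
def monitoring_delay_ms(track_latency_ms, set_max_latency_ms, armed):
    """With full delay compensation, every track is delayed to match the
    slowest chain in the set. The armed, monitored track is exempted so
    the performer only feels that track's own latency."""
    return track_latency_ms if armed else set_max_latency_ms

# A set whose slowest chain needs 40 ms; the synth being recorded needs 6 ms:
assert monitoring_delay_ms(6, 40, armed=True) == 6    # while performing
assert monitoring_delay_ms(6, 40, armed=False) == 40  # aligned on playback
```

This also shows why recording two armed tracks with different chain latencies produces MIDI in different places: each armed track is exempted against its own latency, not a common one.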

One piece of advice when recording textures that need more than one instrument: use a Rack instead, so that you only have to record one MIDI track containing all the instruments.

For the external sequencer, you should record the MIDI with monitoring set to OFF. We'll try to provide a suitable solution for working with external instruments, though I can't tell when it will be finished.

As for 'not hearing well' the instrument: my take on it is that anyone would like to hear well the instrument he is recording, as in a studio situation, but your point is taken.

I would say: raise the volume for the recording, to be sure to record what you want to hear. Otherwise, if you don't need to hear it, just record 'deaf', with no monitoring. Or, in that special case, you can set up another MIDI track with monitoring set to OFF, so that it records the notes where you play them, and you can put the resulting MIDI clip on the right track afterwards.

That only describes how to do things right now, but be assured we'll consider your concerns.

Regards,
Amaury
Ableton Product Team

Synthbuilder
Posts: 523
Joined: Thu Nov 10, 2005 8:42 am
Location: Cumbria, UK
Contact:

Post by Synthbuilder » Wed Apr 11, 2007 11:02 am

Thanks, Amaury, for your help thus far. It is appreciated.

With regards to the workarounds for the way I work, I had pretty much come up with those too.

Racks are great, but they don't allow you to set individual effect sends on each parallel pathway. So I still tend to use two individual channels if I need more than one synth.

Amaury
Posts: 5884
Joined: Mon Mar 20, 2006 6:59 pm
Location: Ableton Headquarters
Contact:

Post by Amaury » Wed Apr 11, 2007 11:13 am

Synthbuilder wrote:Thanks, Amaury, for your help thus far. It is appreciated.

With regards to the workarounds for the way I work, I had pretty much come up with those too.

Racks are great, but they don't allow you to set individual effect sends on each parallel pathway. So I still tend to use two individual channels if I need more than one synth.
I see. One thing to keep in mind, though: the more effects you use in the signal path of your synths (inserted on the same track as the synth, or on the Master track), the more latency you'll have while recording, making it unsuitable for performing.
On the other hand, if the track you are monitoring feeds a return track via a send, delay compensation is shut off on the monitored track containing the synth, but not on the return track, so the sound arrives late at the return track. So it is best either to turn off delay compensation completely while recording (only if you rely on sends), or to turn down the send temporarily.
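The send/return caveat can be made concrete with hypothetical numbers: the monitored track skips the compensation delay, but its copy sent to the return still travels through the compensated return path, so the wet signal trails the dry one audibly.

```python
def heard_ms(direct_latency_ms, return_path_extra_ms, via_return):
    """The monitored synth is heard after its own chain latency; the copy
    sent to a still-compensated return track picks up the return path's
    additional delay, so the effect trails the direct sound."""
    return direct_latency_ms + (return_path_extra_ms if via_return else 0)

dry = heard_ms(6, 34, via_return=False)  # direct sound at 6 ms
wet = heard_ms(6, 34, via_return=True)   # reverb via the return at 40 ms
assert wet - dry == 34  # the audible lag that muting the send avoids
```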

Regards,
Amaury
Ableton Product Team

michkhol
Posts: 29
Joined: Wed Oct 11, 2006 3:05 pm
Location: MD, USA

Post by michkhol » Wed Apr 11, 2007 7:03 pm

Amaury,

I will do your test tonight with my Axiom 49; sorry I didn't have time yesterday. However, I tried it with the computer keyboard. Man, I cannot help but hit the keys to the metronome, not to the sound I heard (the Impulse 'All purpose' kick drum). Without the metronome, I noticed I was compensating so that the kick sound came in time with the imaginary beat in my head.

As for your favourite pianist, was he playing solo or with an orchestra? It seems it makes a big difference.
MacBook 1.83, G5 2.3 Dual-core 1.5G RAM, M-Audio Axiom49, MOTU Ultralite, Live 6.0.7, DP 5.12
