When monitoring directly from an external sound source (e.g. an analog synthesizer), i.e. anything you hear directly from outside Ableton Live while recording.
Why: Since you're playing along with the other tracks and hearing the source directly (with no latency), the recorded clip needs to be shifted back by the latency incurred at that specific buffer size.
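To make the back-shift concrete, here is a minimal sketch (not Ableton's actual code; the buffer size, sample rate, and the two-buffer round-trip approximation are illustrative assumptions) of how a recorded clip can be realigned by the buffer-induced latency:

```python
# Sketch: compensating a recorded clip for round-trip latency when
# monitoring externally (Off mode). Values are illustrative assumptions.

SAMPLE_RATE = 44100          # Hz
BUFFER_SIZE = 512            # samples per buffer

# Approximate round-trip latency as one input buffer plus one output
# buffer (real drivers report their actual latency to the host).
latency_samples = 2 * BUFFER_SIZE
latency_ms = latency_samples / SAMPLE_RATE * 1000

def compensate_clip(recorded_samples, latency):
    """Shift the recorded audio back by `latency` samples so it lines
    up with what the performer heard (and played against)."""
    return recorded_samples[latency:]

# A clip whose musical content starts `latency_samples` in, because the
# audio interface delivered it late:
raw_clip = [0] * latency_samples + [1, 2, 3]
aligned = compensate_clip(raw_clip, latency_samples)
print(aligned[:3])           # content now starts at sample 0: [1, 2, 3]
print(round(latency_ms, 1))  # about 23.2 ms at this buffer size
```

Note how the compensation amount depends on the buffer size: doubling the buffer doubles the shift Live has to apply.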
Use Case for IN/AUTO monitoring modes:
When recording a guitar through a VST amp, a virtual instrument through a MIDI keyboard, or any sound source that is being heard through an Audio/MIDI Instrument track.
From Live's manual, the philosophy was: "If you are using playthrough while recording, you will want to record what you hear — even if, because of latency, this occurs slightly later than what you play."
...
"When monitoring is enabled during recording, Live adds an additional delay to the timestamp of
the event based on the buffer size of your audio hardware. This added latency makes it possible
to record events to the clip at the time you hear them — not the time you play them."
Consider why this makes sense: imagine you're recording some VST drums with an unusually high latency, say 200 ms.
If you play a few bars, your brain naturally adjusts, playing ahead of time to compensate, so what you hear effectively matches the exact timing of the song (this also happens with smaller delays).
Now, how do you want things to be recorded?
As you performed them, of course: the notes should be registered at the times you heard them, not at the times you physically played them.
If Live weren't adding this latency, all your notes would land earlier on the grid than you heard them, because you played slightly ahead.
See what happens to a metronome click when looping the output back into an input, with the three monitoring modes:
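As a rough thought experiment (not a real audio loopback; the 23 ms round-trip figure is an assumed example), the loopback test can be simulated like this:

```python
# Where a metronome click, looped from output back to input, lands in
# the recorded clip under each monitoring mode. Assumes a 23 ms
# round-trip latency; this is a simulation, not measured behavior.

LATENCY_MS = 23
CLICK_AT_MS = 1000   # the click plays exactly on the beat

def recorded_position(mode):
    """Grid position of the click in the recorded clip, per mode.

    off     -> Live shifts the clip back by the reported latency,
               so the click lands back on the grid.
    in/auto -> Live records what you hear; the looped click arrives
               one round trip late and stays late."""
    arrives = CLICK_AT_MS + LATENCY_MS   # when the looped click reaches the input
    if mode == "off":
        return arrives - LATENCY_MS      # compensated back onto the grid
    return arrives                       # "record what you hear"

for mode in ("off", "in", "auto"):
    print(mode, recorded_position(mode))  # off on the grid, in/auto late
```

In this model, Off puts the click back on the grid, while In and Auto leave it one round trip late, which is consistent with the "record what you hear" philosophy quoted above.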
