OK. I don’t have either a TR or a Behringer interface, which limits my experimenting, but I think there are at least two different things going on in that project. I hope my analysis is correct, but if it isn’t I’m more than happy for someone to correct me. Sorry this is so long, but there’s a lot involved.
How Live decides where to place audio recordings on the timeline when you are monitoring through the audio track you are recording to isn’t exactly intuitive.
Live assumes that if we are monitoring via an audio track then we want the audio placed as near as possible to when we heard it.
We will hear it only after the audio has gone through the interface into Live and back out through the interface - after the time dictated by the size of the audio buffer.
To compensate for that latency Live shifts the recorded audio back along the timeline by the audio latency setting in preferences. It’s not an exact process - it doesn’t account for the time it takes sound to travel from the monitors to our ears - but let’s keep things simple and pretend we’re wearing headphones.
The idea is to make things consistent between audio monitored through an audio track and other audio monitored direct via an interface or heard as an acoustic instrument, guitar amp, whatever, at the same time.
You have a fairly high audio latency setting showing a 25.3 ms round trip, and you are recording into tracks with monitoring set to "in", so Live should be shifting the audio by the length of the audio buffer. I usually run a buffer of around 6-7 ms, and I see Live apply that amount of shift when recording while monitoring through the same track.
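To put rough numbers on that (an illustrative sketch, not Live’s exact internals - real interfaces add driver and hardware buffers on top of the plain buffer maths):

```python
# Rough audio-buffer latency arithmetic. Illustrative only: real
# round-trip figures like 25.3 ms include extra driver/hardware
# buffers beyond the audio buffer itself.
def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """One-way latency contributed by a single audio buffer."""
    return buffer_samples / sample_rate * 1000

# A 256-sample buffer at 44.1 kHz:
one_way = buffer_latency_ms(256, 44100)   # ~5.8 ms
round_trip = 2 * one_way                  # ~11.6 ms for input + output

print(f"one-way: {one_way:.1f} ms, round trip: {round_trip:.1f} ms")
```

That’s why a 25.3 ms round trip points at either large buffer settings or additional latency the driver is reporting (or not reporting) on top of the buffers you chose.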
You also have a variety of MIDI clock adjustments set and you are using two different audio interfaces, which can make for unpredictable results - especially if either driver doesn’t report its latency correctly, or either device has "hidden" internal buffers Core Audio doesn’t know about (a surprising number of interfaces have a small internal buffer the driver doesn’t report).
May I make a suggestion? Start with a clean slate. Switch on delay compensation. Clear the MIDI clock adjustments. Open the Live lesson called "audio i/o" and run that test on the Behringer. If the TR8s can send audio from a DAW through its outputs and accept an incoming audio signal, run that test on the TR8s too. Then run it again using the TR8s as the input and the Behringer as the output interface. See how things line up with that - let’s be certain there’s no audio driver/interface hardware issue affecting things.
The way Ableton suggests monitoring through audio tracks without having the audio latency compensation automatically applied to recorded audio is a bit messy, but it works. If anyone has a better idea for how to do this I’d be pleased to hear it.
First create the required tracks as usual and set their monitoring to "in" - the "external plugin" instrument can be used for this as well. Then duplicate those tracks and set the monitoring to "off". You monitor via the tracks set to monitor their input and record into the tracks with monitoring off. That way Live assumes you are hearing the sound from a source other than Live and the automatic compensation for audio latency making you hear things "late" isn’t applied. The downside is you get a bunch of duplicated tracks, but once recording is done the tracks used just for monitoring can be deleted.
Or just record on the same track as the monitored one and shift the audio the required amount afterwards.
As for MIDI, hardware MIDI is very rarely exactly spot-on in timing. Polyphonic MIDI can’t be: MIDI is a serial protocol, so the notes of a chord are sent one after another and always arrive slightly staggered. MIDI clock isn’t always perfect either - it can wander by small amounts, and computers/DAWs often aren’t perfect sources of MIDI clock.
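The stagger is easy to quantify for classic 5-pin DIN MIDI (assuming the standard 31,250 baud link and full three-byte Note On messages, ignoring running status, which shortens things slightly):

```python
# Why MIDI chords stagger: 5-pin DIN MIDI is a 31,250 baud serial link,
# with 10 bits on the wire per byte (start bit + 8 data bits + stop bit).
BAUD = 31250
BITS_PER_BYTE = 10

def wire_time_ms(n_bytes: int) -> float:
    """Time to transmit n_bytes over a DIN MIDI cable, in milliseconds."""
    return n_bytes * BITS_PER_BYTE / BAUD * 1000

note_on = wire_time_ms(3)       # status + note + velocity: 0.96 ms
chord4 = wire_time_ms(4 * 3)    # four full Note On messages: 3.84 ms

print(f"one note-on: {note_on:.2f} ms, 4-note chord spans: {chord4:.2f} ms")
```

So even before the synth does anything, the last note of a four-note chord leaves the cable nearly 4 ms after the first.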
If a hardware synth receives MIDI - even an internal instruction to play a note sent to the sound engine by its own sequencer - it takes some time to turn that MIDI into audio, and unfortunately that amount of time varies from synth to synth. Zooming in on tracks and searching for ways to configure software to automatically correct for a couple of milliseconds can drive people mad. Well, it has driven me mad before now, on PCs and Macs and using more than two DAWs.
Fixing that kind of slop is what audio quantising and warping are for. Or, if comparing the synth’s recorded audio with the MIDI on the timeline shows a consistent offset between the two, a track delay or manually shifting the audio by the required amount fixes things.
I hope this helps.