one to many mapping (Live 6): draw cc curves

Share what you’d like to see added to Ableton Live.
ak balance
Posts: 7
Joined: Tue Aug 05, 2003 5:56 pm

one to many mapping (Live 6): draw cc curves

Post by ak balance » Fri Sep 08, 2006 4:01 pm

From what I understand of the Live 6 advertising, it will be possible to use one-to-many mapping (one controller to many parameters) in Live. This is very nice, because this way you can really start controlling the software intuitively instead of just fiddling with all the different effect parameters you would need to adjust at the same time to make a certain change in the sound. So thanks.

But if you want to map controls really efficiently and use simple gestures to make versatile musical changes in performances, you need to be able to freely draw the control value curve for each parameter. What I mean is freely drawing into an x/y table (with or without envelopes) the response curve of each parameter after the parameters have been split from the input signal in a one-to-many configuration. This way, when I turn one knob on my controller, this linear value moves the reading position in the table of every parameter it has been assigned to, and every parameter changes according to its table's output value. Every effect or mixer parameter needs to have its own table.

This is not very confusing when you use it in practice. You can check it out by trying AudioMulch (PC only) or by looking at the picture below from the Reaktor ensemble where I use this kind of scaling. The simpler way to think of it is that every mixer/effect parameter just gets its own freely adjustable, dynamic "sensitivity" control. The main advantage becomes clear when you adjust these differently sensitive effect parameters in relation to each other for use with one physical controller. And this would not have to change Live's current behavior in any way, because you could always leave the parameters linear and unaffected.
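To make the idea concrete, here is a minimal sketch in Python of what I mean. The names (TransferTable, apply_one_to_many) are just made up for illustration; this is not anything Live exposes:

[code]
class TransferTable:
    """A freely drawable x/y curve, stored as sorted breakpoints and
    read with linear interpolation."""

    def __init__(self, points):
        # points: (x, y) pairs with both axes normalised to 0.0..1.0;
        # this list is the "drawn" curve for one parameter.
        self.points = sorted(points)

    def read(self, x):
        pts = self.points
        if x <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return pts[-1][1]


def apply_one_to_many(cc_value, mappings):
    """One incoming 7-bit controller value moves the reading position
    in every table it has been assigned to."""
    x = cc_value / 127.0  # normalise the MIDI CC to 0.0..1.0
    return {name: table.read(x) for name, table in mappings.items()}


# One knob, three parameters, three different response curves:
mappings = {
    "filter_cutoff": TransferTable([(0.0, 1.0), (1.0, 0.2)]),              # plain linear fall
    "reverb_mix":    TransferTable([(0.0, 0.0), (0.6, 0.1), (1.0, 1.0)]),  # stays low, then opens fast
    "dry_level":     TransferTable([(0.0, 1.0), (0.4, 1.0), (1.0, 0.0)]),  # flat at first, then fades out
}

print(apply_one_to_many(64, mappings))
[/code]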

So why tables? Live already has an option to scale a parameter's value to a certain range and to invert it. The answer is that it is not always useful for the reverb mix to open linearly with the cutoff, or the delay feedback with the distortion. Acoustic instruments do not react linearly when you play them either. They are rather dynamic, and simply changing your hand's position (one controller) can change many different things (parameters) in the sound in very nonlinear ways. With these tables you could adjust Live to react reasonably, and in many ways, for your own needs.
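To put it another way (again using the hypothetical TransferTable from the sketch above): Live's existing min/max range scaling is just the special case of a straight, two-point table, while a drawn table can hold any shape:

[code]
# Live's range scaling = a two-point straight line; a drawn table can be
# any shape, e.g. an S-curve that keeps the parameter quiet over the
# bottom of the knob's travel and opens it rapidly past the middle.
linear_range = TransferTable([(0.0, 0.2), (1.0, 0.8)])
s_curve      = TransferTable([(0.0, 0.0), (0.3, 0.05),
                              (0.7, 0.95), (1.0, 1.0)])

for cc in (0, 32, 64, 96, 127):
    x = cc / 127.0
    print(cc, round(linear_range.read(x), 2), round(s_curve.read(x), 2))
[/code]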

So, using this together with the effect combining possibilities in Live, you could make sound devices (effects or instruments) that behave dynamically exactly the way you want, with just a few controls. For example, I have used this technique in Reaktor to make a device which, by turning only one knob, slowly cuts the low and high frequencies towards the center, fades the dry sound out, increases the reverb, and at the same time increases the late reflections and decreases the early ones. The result is a simple device that "fades" the sound out very naturally instead of just decreasing its amplitude, which just sounds like a boring fade-out... :) This kind of combined effect is often used in films. So all this with just one controller for one effect. I can use it in live performances or in Arrangement mode without drawing all those painful automation curves every time I need this kind of sonic gesture. But the parameter curves have to be set right for it to sound good. And when using ordinary MIDI resolution for the controller input, I need to use a small smoothing delay on the table output, because the limited 128-step resolution can be too coarse when scaled in this way. But if these features were implemented, you could easily build and share this kind of "smart" combined effect with Live.
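The "fade" device could be sketched roughly like this, reusing TransferTable and apply_one_to_many from above. The curve shapes here are only my guesses from the description, not the actual Reaktor settings, and the one-pole Smoother stands in for the small smoothing delay:

[code]
# Curves for the one-knob "natural fade" gesture (illustrative shapes only):
fade_out = {
    "low_cut":       TransferTable([(0.0, 0.0), (0.7, 0.3), (1.0, 1.0)]),  # slowly cut lows...
    "high_cut":      TransferTable([(0.0, 0.0), (0.7, 0.3), (1.0, 1.0)]),  # ...and highs toward the centre
    "dry_level":     TransferTable([(0.0, 1.0), (1.0, 0.0)]),              # fade the dry sound out
    "reverb_send":   TransferTable([(0.0, 0.0), (0.5, 0.6), (1.0, 1.0)]),  # bring the reverb up
    "late_reflect":  TransferTable([(0.0, 0.3), (1.0, 1.0)]),              # more late reflections
    "early_reflect": TransferTable([(0.0, 1.0), (1.0, 0.1)]),              # fewer early ones
}

class Smoother:
    """One-pole lowpass on a parameter value: hides the steps that 7-bit
    MIDI leaves behind once a steep table magnifies them."""
    def __init__(self, coeff=0.1):
        self.coeff = coeff   # 0..1, smaller = slower and smoother
        self.state = 0.0
    def step(self, target):
        self.state += self.coeff * (target - self.state)
        return self.state

smoothers = {name: Smoother() for name in fade_out}

def on_cc(cc_value):
    """Called on every control tick: one knob in, six smoothed parameters out."""
    raw = apply_one_to_many(cc_value, fade_out)
    return {name: smoothers[name].step(value) for name, value in raw.items()}
[/code]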

We often assume that controlling different parameters in electronic instruments by hand at the same time is the equivalent of musical expression with acoustic instruments. I think there are many controller values you can tie together in your own sets or instruments because of their dependencies in your personal expression and needs. At that point the software can start to behave more like an instrument, because you do not have to constantly think about all the knobs and parameters you need to mechanically adjust. You can use more learned gestures and possibly be more expressive, and possibly work more like a conductor or performer in your own performances and less like a mixing guy.

Oh well, I'll stop preaching now. I am working on my artistic doctoral studies on digital sound and how to use it more expressively and interactively in the performing arts, and I really feel that there is a big gap named MAPPING between all these beautiful MIDI multi-controllers and current live-oriented audio software.

Btw, if you read this far, also check out Ross Bencina's Metasurface interface in AudioMulch. That is one interesting solution for one-to-many mapping.

Peace.

Antti Nykyri
MA, Research Associate, Sound Designer
Theatre Academy of Finland


[Image: scaling tables from the Reaktor ensemble mentioned above]

Angstrom
Posts: 14975
Joined: Mon Oct 04, 2004 2:22 pm

Post by Angstrom » Fri Sep 08, 2006 4:16 pm

Long post
my reply ...
"Macros"


We don't get curves, but ranges are reasonably good.
