But... if you want to map controls really efficiently, and to use simple gestures to make versatile musical changes in performance, you need to be able to freely draw the control value scale for each parameter. What I mean is freely drawing, in an x/y table (with or without envelopes), the curves for all the parameters after the input signal has been split in a one-to-many configuration. This way, when I turn one knob on my controller, that linear value moves the read position in every parameter table it has been assigned to, and every parameter changes according to its table's output value. Every effect or mixer parameter needs its own table.

This is less confusing in practice than it sounds. You can see it in action by trying Audiomulch (PC only), or by looking at the picture below from the Reaktor ensemble where I use this kind of scaling. The simplest way to think of it is that every mixer/effect parameter gets its own freely adjustable, dynamic "sensitivity" control. The main advantage becomes clear when you tune these differently sensitive effect parameters in relation to each other for use with one physical controller. None of this would have to change Live's current behavior in any way, because you could always leave the parameters linear and unaffected.
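To make the idea concrete, here is a minimal sketch in Python of what such a mapping layer might look like. Everything in it (the `ParamTable` class, the function names) is invented for illustration; this is not Live's or Reaktor's API, just the bare mechanics of one-to-many table mapping:

```python
# Minimal sketch of one-to-many mapping through per-parameter tables.
# All names here are hypothetical, invented to illustrate the idea.

class ParamTable:
    """A freely drawable transfer curve, stored as (x, y) breakpoints."""

    def __init__(self, points):
        # Breakpoints sorted by x; both x and y are normalized to 0..1.
        self.points = sorted(points)

    def __call__(self, x):
        """Read the curve at controller position x (linear interpolation)."""
        pts = self.points
        if x <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return pts[-1][1]


def apply_controller(value, tables):
    """One knob in, many parameter values out: read every assigned table."""
    return {name: table(value) for name, table in tables.items()}
```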
So why tables? Live already has an option to scale a parameter's value to a certain range and to invert it. The answer is that it is not always useful for the reverb mix to open linearly with the cutoff, or the delay feedback with the distortion. Acoustic instruments do not react linearly either when you play them. They are dynamic: simply changing your hand's position (one controller) can change many different things (parameters) in the sound in very nonlinear ways. With these tables you could tune Live to react sensibly, and in many different ways, to your own needs.
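Seen this way, Live's current scale-and-invert option is just the two-point special case of a table; the musical interest is in everything beyond two points. Continuing the sketch above (all curve values invented):

```python
# Live's current scale/invert behavior is the two-point special case:
scale_up = ParamTable([(0.0, 0.2), (1.0, 0.8)])  # scaled linear range
inverted = ParamTable([(0.0, 1.0), (1.0, 0.0)])  # inverted linear range

# A free table can go beyond that, e.g. a reverb mix that stays nearly
# closed for most of the knob's travel and then opens quickly at the end:
reverb_mix = ParamTable([(0.0, 0.0), (0.7, 0.1), (1.0, 1.0)])
```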
Using this together with the effect-combining possibilities in Live, you could build sound devices (effects or instruments) that behave dynamically exactly the way you want, with just a few controls. For example, I have used this technique in Reaktor to make a device in which turning a single knob slowly cuts the low and high frequencies towards the center, fades the dry sound out, increases the reverb, and at the same time increases the late reflections while decreasing the early ones. The result is a simple device that "fades" the sound out very naturally, instead of just decreasing its amplitude, which tends to sound like a boring fade-out.
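As a usage example of the sketch above, here is roughly how that one-knob "natural fade" could be written as a set of tables. The parameter names and breakpoint values are invented for illustration, not taken from the actual Reaktor ensemble:

```python
# Hypothetical curves for a one-knob "natural fade" (values invented):
fade_tables = {
    "lowpass_cutoff":    ParamTable([(0.0, 1.0), (1.0, 0.5)]),  # highs close in
    "highpass_cutoff":   ParamTable([(0.0, 0.0), (1.0, 0.5)]),  # lows close in
    "dry_level":         ParamTable([(0.0, 1.0), (0.6, 0.7), (1.0, 0.0)]),
    "reverb_send":       ParamTable([(0.0, 0.1), (1.0, 1.0)]),
    "late_reflections":  ParamTable([(0.0, 0.3), (1.0, 1.0)]),
    "early_reflections": ParamTable([(0.0, 0.8), (1.0, 0.1)]),
}

# Turning the single physical knob from 0.0 to 1.0 sweeps all six
# parameters along their own curves at once:
print(apply_controller(0.25, fade_tables))
print(apply_controller(0.90, fade_tables))
```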

We often assume that controlling different parameters of electronic instruments by hand, at the same time, is the equivalent of musical expression on acoustic instruments. I think there are many controller values you could tie together in your own sets or instruments, because of how they depend on each other in your personal expression and needs. At that point the software can start to behave more like an instrument, because you no longer have to constantly think about all the knobs and parameters you need to mechanically adjust. You can use more learned gestures and possibly be more expressive, and work more like the conductor or performer in your own performances and less like a mixing guy.
Oh well, I'll stop preaching now. I am just working on my artistic doctoral studies on digital sound and how to use it more expressively and interactively in the performing arts. And I really feel that there is a big gap named MAPPING between all these beautiful MIDI multicontrollers and the current live-oriented audio software.
Btw, if you read this far, also check out Ross Bencina's Metasurface interface in Audiomulch. That is one interesting solution for one-to-many mapping.
Peace.
Antti Nykyri
MA, Research Associate, Sound Designer
Theatre Academy of Finland
