Angstrom wrote:Yep, the Convolution Pro reverb is very good but it's a real shame it's not a native device because convolution is notoriously resource intensive ... and there's surely some additional M4L resource overhead. Surely a good Reverb unit is a core music making tool and so worthy of a dedicated device - it's not well suited to being created in an open and resource heavy IDE whose stated purpose is hacking bespoke devices.
I don't often say much on these forums, preferring to read and to learn, but I do find myself now and then having to comment when one of the usual suspects says something dodgy—all too often, it seems, when they say something to denigrate M4L.
Convolution Reverb and Convolution Reverb Pro both use a 'native' Max external called multiconvolve~.mxo/mxe/mxe64 (depending on your OS). Externals like this are written in C, and they perform just as well as anything baked into Live (or any other DAW or VST or whatever).
Moreover, the comment about Max being a 'resource heavy IDE' betrays a basic misconception about both IDEs and Max itself. IDEs are 'Integrated Development Environments', and the 'Development' part is important. Yes, Max provides a wonderful but possibly resource-intensive environment while you are developing, but once the resulting patch is running none of this matters. In a similar vein, both Xcode on the Mac and Visual Studio on Windows are unbelievably resource-intensive during development, far more so than Max will ever be, but this has nothing whatsoever to do with the performance of the program or app you develop with them.
Max itself is one of the Music-N series of computer music languages, the first of which was written by Max Mathews in 1957. (Hence the name Max.) Once a patch is running, Max uses the same underlying architecture as Csound, SuperCollider, Pure Data, ChucK and pretty much everything else commonly used for computer music these days (except Extempore). This architecture has survived for 60 years (!) because it does exactly what it needs to do and does it very efficiently. Believe me, in the 1950s and 1960s your code needed to be efficient! The basic idea is that you take a bunch of 'unit generators' or UGens and connect them together. These UGens are anything from 'sum two signals' to anti-aliased oscillators to FFTs. Most of the basic UGens have been around since time immemorial, and their algorithms are well-tested and well-understood. New UGens, such as the multiconvolve~ used in Convolution Reverb (Pro), are written in C and are just as efficient as the 'native' built-in UGens (which are themselves externals written in C that happen to be distributed with Max, so you don't need to reinvent the wheel).
The performance of the audio DSP is the same whether you hook these UGens together with a text-based language such as Csound or a visual patcher such as Max or Pd. Once the thing is running and processing audio, the environment you happened to develop it in does not matter.
All that said, yes, there is a minimal overhead in the interface between Live and Max, but it is a lot less than one might imagine. If it were anything but minimal and insignificant, it wouldn't be possible to send signal-rate audio back and forth between the two. (Latency is another issue, but it is not caused by the interface between Live and Max; it comes from the block size used for the audio DSP plus whatever latency is inherent in the algorithm itself.) In any case, the overhead within Max is no greater than it is within Live. The performance of a good Max device is the same as, if not better than, that of a native Live device. And Convolution Reverb (Pro), with its C-coded external, is definitely what I would call a 'good' Max device.