You can do a mod unlearn by setting the modulator to zero:
mod learn on
touch the parameter you want to unlearn
turn the modulator source (e.g. the knob) to zero
mod learn off
Modulators: to learn a new modulator when you already have others, simply stop modulating the existing ones before you try to learn the new one (the easy way is to turn the amount % to zero).
It's a bit fiddly when you first start, but once you've done it a few times it's fine...
notecv (make sure you're using my new bela.pd, as above):
Put notecv in a1 and a synth in a2; use t1 as the trigger and cv1 as the v/oct input.
Mid-term, I'm going to replace it with a mod-matrix-type system, which allows for mixing of modulations etc.
But that requires a bit more underlying work, and work on the UI.
So I wanted a quick solution for 2.0, so I didn't waste too much dev effort on something that will be replaced. This was quick, as it was the same as MIDI learn, and it required minimal 'UI'.
Yeah, I saw the comments about optimisation, but as far as I can tell that's just not practical in this case...
I cannot optimise libpd, and it would be a huge amount of effort to optimise existing Pd externals I didn't write...
(Can you imagine trying to go through Elements or Braids from MI?)
...and my externals/code are non-DSP, so they don't have the maths/VFP issues anyway.
Of course, if there is some way I can compile the existing code and get better results (I think I'm using all the compiler options discussed above), or somehow tweak libpd, I'm all ears.
(One option discussed before is recompiling Pd to allow for buffer sizes bigger than 128; I'm not sure that's particularly desirable, though.)
At the end of the day it does work reasonably well, if you don't go overboard with FX and synths.
...and part of the reason it fails more dramatically than on a rPi is that Xenomai will really let you take virtually the entire CPU, so when you overstep the mark you tend to get 'locked out'.
(This is not unique to Pd or Orac... it's generally true... I've done the same regularly with SuperCollider.)