ryjobil On Bela, I think you will have to compile lv2 plugins (maybe @giuliomoro can give more detail), but if you evaluate them on another system then you know what they sound like and which plugins are worth your time to configure for use on Bela.
Yes, you can build lv2 plugins on the board, or install them through apt-get if they are available there (e.g.: calf, and more are available through
chris Is there any way whatsoever to have a play with the code without having a Bela? I am currently remote working in Vietnam and if I buy a Bela and try and fly with it I think I will have a lot of problems getting on a plane!
I have been flying with one or more Belas (up to 20) in my hand luggage for over 4 years now, to and from Europe (Italy, Germany, UK, France), Australia, the US, and China. I think I got asked some questions only once, in London Stansted (and I fly from there 5-10 times a year). So I don't think you will have trouble flying with it, as long as you take off all those suspicious wires (though I have done it with wires plugged in, too).
chris For example is the framework or underlying OS available on a Docker image?
That wouldn't work, as it uses some very specific hardware that is only on the BeagleBone.
Really, just get any framework/boilerplate code of choice that gives you an audio callback (I heard novocaine is fairly painless on macOS). If you are on Linux, I would just get hold of a basic ALSA or Jack or lv2 example and use it as boilerplate for your DSP code. Actually, Juce could be the best choice, as it is well documented and cross-platform. A bit overkill, perhaps, but if you start from their "stand-alone audio app" example, you are pretty much set.
Then just write all of your code in a separate file, where you implement your effect as
class MyClass, in such a way that all your wrapper (whichever you choose from the above) has to do is call, e.g.:
myObj.setup(numChannels, sampleRate) in some sort of initialization function, and
myObj.render(inputBuffers, outputBuffers, numSamples) from the audio callback. Porting this to Bela, once you get one, will then be straightforward.
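As a concrete sketch of such a class (the gain effect, member names and default values here are just placeholders for illustration, not part of any Bela API), it could look something like:

```cpp
// Framework-agnostic effect class: no audio-backend headers needed,
// so the same file compiles unchanged in a Juce, ALSA or Bela wrapper.
class MyClass {
public:
	void setup(unsigned int numChannels, float sampleRate) {
		numChannels_ = numChannels;
		sampleRate_ = sampleRate;
	}
	// Expects interleaved buffers with numFrames frames per channel.
	// The "effect" is a plain gain, standing in for your actual DSP.
	void render(const float* in, float* out, unsigned int numFrames) {
		for (unsigned int n = 0; n < numFrames * numChannels_; ++n)
			out[n] = in[n] * gain_;
	}
private:
	unsigned int numChannels_ = 2;
	float sampleRate_ = 44100;
	float gain_ = 0.5f;
};
```

Keeping all backend-specific calls out of this class is what makes the later port painless: only the thin wrapper changes between platforms.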
Bela code to wrap the above would look like:

bool setup(BelaContext* context, void* userData)
{
	myObj.setup(context->audioInChannels, context->audioSampleRate);
	return true;
}

void render(BelaContext* context, void* userData)
{
	// this assumes interleaved channels, which is the default on Bela.
	// Minor modifications are needed if your code takes an array of
	// non-interleaved samples, or pointers to individual buffers
	myObj.render(context->audioIn, context->audioOut, context->audioFrames);
}