Hello,
While learning how to code in C++ with Bela (thanks a lot for these wonderful videos!), I was wondering to what extent the code I am writing for Bela can be reused (as such) on my RPi. My understanding is that Bela provides its own API; would it therefore be possible to use this API on an RPi?
If not, which C++ DSP library can be built and used both on Bela and on Linux elsewhere (CSL6?, STK?, ...)?

Thanks a lot for your help

    well, Bela's API is minimal and no-frills, so your code will be maximally portable, in general.

    That is to say: basically anything you write that doesn't use `context` does not rely on the Bela API. A typical Bela processing loop that operates on an input to generate an output looks like this:

    void render(BelaContext* context, void*) {
      for(unsigned int n = 0; n < context->audioFrames; ++n) { // << replace `context->audioFrames` with your audio API's "frames" 
        for (unsigned int c = 0; c < context->audioInChannels; ++c) { // << replace `context->audioInChannels` with your audio API's "channels" 
           float in = audioRead(context, n, c); // << replace with your audio API's "read" function
           // here do all your DSP on `in` without using `context`
           // ......
           // and produce an output value
           float out = ....; // whatever comes out of your DSP
           audioWrite(context, n, c, out); // << replace with your audio API's "write" function
        }
      }
    }

    I am highlighting the lines that are Bela-specific. As you can see, the DSP code is not (at least in principle) Bela-specific, so porting it to a different API should be straightforward. The way I tend to write maximally reusable code within a project is to separate all the DSP out into a dedicated function or class. This results in an almost empty render() function, which simply reads data into an array, passes it to a processing function, then takes the output array and audioWrite()s it. If the DSP function or class does not include any Bela-specific code, it is platform-independent. See for instance the Audio/convolver example, where all the DSP is concentrated in the line

    	convolver.process(outBuf, inBuf, context->audioFrames);

    and this comes from the Convolver class from the Convolver library, which is entirely Bela-independent (as denoted by the fact that none of the files in the library even includes Bela.h).

    JMC64 If not, which C++ DSP library can be built and used both on Bela and on Linux elsewhere (CSL6?, STK?, ...)?

    You can use STK's or CSL6's DSP functions on Bela. Bela is an audio I/O API, not a DSP library (although it does provide some platform-independent DSP libraries, as shown above). The point is more: which audio I/O API can you use on the Pi? That is: which API gives you a memory buffer from which to read your inputs and to which to write your outputs? Valid options are raw ALSA, ALSA zita, PortAudio, RtAudio (RtAudio is also used by STK for their stand-alone programs) ...

    6 days later

    Thank you a lot for your answer. I thought it would be much easier...

      Sorry for not replying earlier: for me, portable means "copy/paste" from one platform to another, with no need to worry about which platform I am on or to re-adapt the code.
      If I develop something in Csound, or in Python... I don't have to deal with platform-dependent pieces of code. That is dealt with by the low-level calls.
      My hope was: if I learn to use STK or any other API, and that API compiles on Bela (taking the best of Bela as it does on the RPi or on a PC), then "portable" would simply mean recompiling the code somewhere else.
      As far as I understand what you say, the optimized API calls to the Bela hardware are in the Bela C++ API, which is not portable to another platform (yet?), and it is not certain that other APIs, if they compile on Bela, will get the best of the hardware.
      Or am I wrong? (Thank you for your patience)

        I guess it was not clear from what I wrote above:

        a large part of the code that you write on Bela can be (and often is) portable. In the example above, the code that would go in the lines

               // here do all your DSP on `in` without using `context`
               // ......
               // and produce an output value
               float out = ....; // whatever comes out of your DSP

        would in fact be platform- and API-independent and can basically be copied and pasted across platforms. The rest of the code I wrote out above would, instead, be non-portable, as it depends on the Bela API. If you use STK and its DSP functions, those can be called in the platform-independent part, and you can copy-paste them across platforms and audio APIs. If you use the audio I/O API provided by STK (RtAudio), then those few lines of code won't move in and out of Bela so easily (at the moment ... at some point I would like to add Bela support to RtAudio, but time is never enough ...).

        JMC64 : If I learn to use STK or any other API and that this API

        I make a distinction between a "DSP" API and an "audio I/O" API. In general, a pure DSP library (or the DSP part of a library) should be portable. STK is a DSP library that happens to ship some of its examples with RtAudio as the audio I/O API. I see that CSL says:

        It is implemented as a C++ class library to be used as a stand-alone synthesis server, or embedded as a library into other programs.

        Without reading further, I think this means that the DSP part could be embedded in Bela without modifications, while the stand-alone mode is provided by some other audio I/O backend. In this case it seems to use JUCE, which does support Bela (see e.g.: here), so you should be fine to compile a CSL JUCE project (or any stand-alone JUCE application that doesn't require graphics) and it should run on Bela just fine.

        JMC64 If I develop something in Csound, or in Python... I don't have to deal with platform-dependent pieces of code.

        because each of them provides a platform-independent audio I/O API (in fact Csound uses RtAudio as well, as does ChucK); if it didn't, you would face the same porting problem there, too.

        I guess what I am trying to say is: yes, on Bela you have to deal with a relatively small amount of platform-specific code, but most of your DSP code (which is usually where you spend most of your time) is platform-independent.

        5 days later

        What I'd suggest in this case is to analyze the different platforms available and the different ways they present and accept data. Then write a common API or façade that your platform-independent code can use. Then you can write and maintain your platform-independent code separately, while all you need to do for each platform you use is to write an implementation of that façade for the platform. Since a lot of DSP algorithms are array or block-oriented, the façade will mainly be concerned with moving blocks (or pointers to blocks) of data around in memory.

        Thank you guys.. but I doubt Orac (which is fantastic, BTW) is the solution for C++ portability. In that case, using Csound, Pure Data, SC and all these high-level languages can do the job, but that doesn't help with learning C++ and porting the code from one platform to another.
        Already, hardware calls are quite painful with Pure Data: replacing inputs with "dummy" metro and osc~ objects when developing on the PC to check that the patches work, then replacing these with the real hardware calls for Bela (provided no mistakes are made), then sending everything to Bela... is unpleasant at times.
        So, my question was: "is it possible to make you forget that you are developing FOR Bela and have a totally transparent C++ API?"
        My understanding at this stage is that the answer is NO.

          Come to that, there are solutions like FAUST, which will compile to C++, but I sense that's too far away from what you're aiming at. With decades of multi-platform experience, a lot of which involved code compatibility, I don't think the kind of transparency you seek is available, or even desirable.

          C++ isn't like Pure Data. It's perfectly possible to produce source code, and sometimes even compiled binaries, that let you hide the implementation details, or in some cases even defer them until runtime. It's a matter of software engineering.

          JMC64 So, my question was: "is it possible to make you forget that you are developing FOR Bela and have a totally transparent C++ API?"

          It is possible, but it needs an API on top of Bela that is hardware-independent and can be ported directly to other platforms. The issue is that by making a hardware-independent API you may end up ignoring some of the characteristics of some hardware (by taking only the lowest common denominator among all possible hardware), or you may end up writing an API that only applies to a narrow subset of hardware platforms with advanced characteristics, or something in between the two. For instance, ALSA and CoreAudio (or PortAudio and RtAudio, which build on top of them) are hardware-independent APIs; however, they assume that you only have audio channels. Something like Axoloti or Teensy or Daisy or Owl has audio channels at audio rate and separate analog ins and digital I/Os that are sampled at block rate (I think). Bela has (kinda) audio-rate analog ins and audio-rate tri-state digital channels.

          If you want an API that deals with all of these, then you basically need the Bela API, or something equivalent. If you only care about audio I/O, you could use ALSA/PortAudio/RtAudio (yes: the Bela cape can work as an ALSA soundcard, though it loses real-time capabilities). I suppose one could also write an ALSA plugin that exposes all the Bela sensors as audio channels and retains real-time capabilities. Coding something that can handle audio-rate or block-rate inputs/outputs transparently is not by itself an easy task. Check out some of the ugliness that is needed to deal with SuperCollider UGens that have to handle audio- or block-rate inputs and outputs (e.g.: GlitchRHPF_next() vs GlitchRHPF_next_1() here, or poll_next_ak() vs poll_next_kk() vs poll_next_aa() here). One could engineer a more elegant solution where you decide platform-wide at compile time whether you deal with block-rate or audio-rate controls and/or add transparent smoothing, but again it does require some effort.

          I seem to be going round in circles here, failing to state the key point succinctly: you are writing code for platforms with different hardware characteristics. One can write an API that allows the same code to run on all of them, and one decides how high-level or low-level that API is based on a tradeoff between which specificities of a given platform one is willing to leverage or ignore, how much code is needed for the API backend, and the overall performance.

          For me, the easiest thing to do is to write DSP code that does not depend on the platform, so that the platform-specific code is limited to preparing the I/O buffers for audio and controls that the DSP code acts on. For my needs, the effort of writing a generic-yet-specific-enough API would be a massive time sink yielding limited rewards.