There are a few issues here. First: a regular Bela program doesn't include a `main()` function. There is one hidden in the backend that is used to initialise the audio and call `setup()`, `render()` and `cleanup()`. If you include a `main()` function, then that becomes the entry point of the program, and if it doesn't call the Bela API, the program will not do any of the Bela stuff.
Second: the code you wrote is invalid: you cannot have function definitions inside another function (you have `setup()`, `render()` and `cleanup()` inside of `main()`).
Third: Bela processes audio in real-time. This means that the timing with which `render()` gets called is directly tied to the passing of time: it will always take exactly 1 second to process 1 second of audio. The exception is when you require more CPU than the board can provide, in which case you will get dropped blocks, timing becomes unpredictable, and the whole program becomes unusable.
Now to your question: if you want to measure how long the code in `render()` takes to run, you do what you'd normally do in any other context: call the function several times in a row, measure the total elapsed time, then divide by the number of calls to get the average time per call. You can do this from within `setup()`, which comes with a valid `BelaContext*` argument that is suitable to be passed to `render()`. Note that clearly this will not process any actual input audio or generate any output audio: it's just burning CPU cycles.
Here's an example: