Just started looking into block sizes of Pure Data when running under Bela, and I'm somewhat unclear on what to expect.

For instance, https://forum.bela.io/d/101-compiling-puredata-externals/47 notes (the post is from March 2018):

the difference between platforms could be due to the fact that on Bela Pd's internal blocksize is 16, while in Pd it defaults to 64. Your code does an FFT of the size of the incoming buffer, so maybe mayer_reallfft() does not like an FFT of size 16?

Note that you cannot change this parameter at runtime: you would need to recompile libpd editing

#define DEFDACBLKSIZE 16

to

#define DEFDACBLKSIZE 64

in

 libpd/pure-data/src/s_stuff.h

So, this tells me I cannot change the block size for PureData/libpd on Bela without recompiling.

On the other hand, there is https://github.com/BelaPlatform/Bela/wiki/Running-Puredata-patches-on-Bela :

Another thing that changed is the minimum block size, which is now 8 samples per block (vs the 64 of stock Pd/libpd). Actual block size can be adjusted at runtime using the -C command line parameter. Accepted values are 8, 16, 32, 64, 128, default is 16.

Ok, so this tells me I can change the libpd block size at runtime, with a -C command line parameter.

Finally, I've opened one of my PD patches in the Bela IDE, and changed the Project Settings tab, and set:

  • Block size (audio frames): 64
  • Analog channels: 4

... and when I inspect the corresponding settings.json file, I get:

root@bela:~/Bela/projects# cat TestPD/settings.json 
{"fileName":"_main.pd","CLArgs":{"-p":"64","-C":"4","-B":"16","-H":"-6","-N":"1","-G":"1","-M":"0","-D":"0","-A":"0","--pga-gain-left":"10","--pga-gain-right":"10","user":"","make":"","-X":"0","audioExpander":"0","-Y":"","-Z":"","--disable-led":"0"}}

... which tells me that the -C command line parameter is not for block size, but for analog channels instead - and the block size command line parameter is -p?

So I have a hard time understanding how this is supposed to work: is it possible to change the libpd block size at runtime, and if so, is -C or -p the proper command line parameter to use?

    libpd's "logic" block size on Bela is decided at compile time and it is always going to be 16 (just like it is 64 on regular Pd). The blocksize you change at runtime (using -p, or the IDE) is the "hardware" blocksize (similar to the effect of the "blocksize" parameter in Pd).

    sdaau Another thing that changed is the minimum block size, which is now 8 samples per block (vs the 64 of stock Pd/libpd). Actual block size can be adjusted at runtime using the -C command line parameter. Accepted values are 8, 16, 32, 64, 128, default is 16.

    That is outdated, I just fixed it: the minimum block size is 16, as you found out from DEFDACBLKSIZE

      If you want a "logical" block size of 64 in your libpd patch on Bela, you need to have a [block~ 64] in a subpatch. You normally want to also change the hardware blocksize to 64, or you will end up having dropouts with low CPU usage
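To make the relationship between the two sizes concrete, here is a toy sketch (not Bela's actual source; kLibpdBlockSize stands in for DEFDACBLKSIZE, and the function name is made up): each audio callback processes one hardware block, and libpd is ticked once per 16-sample internal block inside it.

```cpp
#include <cassert>

// Toy model of the two block sizes on Bela (names are illustrative).
// libpd's internal block size is fixed at compile time via DEFDACBLKSIZE;
// the hardware block size is what -p (or the IDE) sets at runtime.
constexpr int kLibpdBlockSize = 16; // DEFDACBLKSIZE in s_stuff.h on Bela

// How many times libpd's process function is ticked per Bela audio callback.
constexpr int pdTicksPerCallback(int hardwareBlockSize) {
    return hardwareBlockSize / kLibpdBlockSize;
}
```

So with `-p 64`, libpd is ticked 4 times per callback; with `-p 16`, once. Running Bela with a hardware block smaller than DEFDACBLKSIZE cannot work the other way around, which is why 16 is the minimum.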

        Many thanks @giuliomoro :

        giuliomoro libpd's "logic" block size on Bela is decided at compile time and it is always going to be 16 (just like it is 64 on regular Pd). The blocksize you change at runtime (using -p, or the IDE) is the "hardware" blocksize (similar to the effect of the "blocksize" parameter in Pd).

        Ah, so there are two of these! Good to know, thanks for mentioning this...

        giuliomoro That is outdated, I just fixed it: the minimum block size is 16, as you found out from DEFDACBLKSIZE

        Thanks for that - however, I just want to make sure - the doc says Actual block size can be adjusted at runtime using the -C command line parameter, however, if I run a PD executable from the command line, I get:

           --period [-p] period:               Set the hardware period (buffer) size in audio samples
           --analog-channels [-C] val:         Set the number of ADC/DAC channels (default: 8)

        So, shouldn't the doc state Actual block size can be adjusted at runtime using the -p command line parameter?

        giuliomoro If you want a "logical" block size of 64 in your libpd patch on Bela, you need to have a [block~ 64] in a subpatch. You normally want to also change the hardware blocksize to 64, or you will end up having dropouts with low CPU usage

        Thanks for this too - one of the difficulties I find with the Pd documentation is that when I make a [block~] and right-click for help, the help patch mostly talks about [switch~], not about [block~] - and it is never mentioned explicitly that [block~] takes an integer argument, nor that it should be placed in a subpatch, which I still don't understand why (probably a consequence of "You may have at most one block~/switch~ object in any window." and "block~/switch~ and dac~/adc~ are incompatible", from the help file).

        In any case, I did some experiments with this, and it seems the -p setting takes precedence over whatever number [block~ N] might be set to - I'll post about that in https://forum.bela.io/d/716-help-with-underruns-using-sdt-for-puredata-on-bela ...

          sdaau Thanks for that - however, I just want to make sure - the doc says Actual block size can be adjusted at runtime using the -C command line parameter, however, if I run a PD executable from the command line, I get:

          -p, that was another error in the wiki, now fixed.

          sdaau In any case, I did some experiments with this, and it seems the -p setting takes precedence over whatever number [block~ N] might be set to

          that should not be the case. Again, one sets the "logic" blocksize and the other sets the hardware blocksize, but the one that will let you avoid dropouts is the hardware size.

          also, libpd does not know about the -p setting, so if you have any internal measurement in Pd, such as the patch below, you will see no change when changing -p.

          [image: Pd patch with an internal blocksize measurement]

          2 years later

          Hi,

          I want to change the internal block size and tried this:

          git clone https://github.com/BelaPlatform/libpd.git
          git submodule init
          git submodule update --recursive
          .... do your change to pure-data/src/s_stuff.h
          make -f Makefile-Bela
          make -f Makefile-Bela install

          but the pure-data folder is empty - no src directory and no s_stuff.h file.

          Help please!
           Thx in advance, Klemenz

            Hi,

            another question: why does the block duration take 1.437 ms with a block size of 16 samples? In my case it's even 1.542 ms. Shouldn't it take just 0.363 ms at a sampling frequency of 44100 Hz?

              hmh, I tried [block~ 64] and the block size really changed to 64, but I lost all audio - I don't hear anything..

                klemenz but the pure-data folder is empty - no src directory and no s_stuff.h file.

                this probably means

                git submodule update --recursive

                failed somehow?

                klemenz Why does the block duration takes 1.437 ms with a block size of 16 samples? In my case it's even 1.542 ms. Shouldn't it take just 0.363 ms with a sampling frequency of 44100 Hz?

                how did you measure this?

                klemenz hmh, i tried [block~ 64] and the block size really changed to 64 but i lost all audio, I don't hear anything..

                Was that in the same patch /subpatch containing [dac~]/[adc~]? That's illegal (see [block~]'s help file for clarifications).

                  I got it running with block size 64. I had to switch into the libpd directory in order to run:
                  git submodule init
                  git submodule update --recursive

                  9 days later

                  giuliomoro
                  Regarding block duration: ok, when I use [bang~] to measure the block duration it takes 0.363 ms. But when I measure my filter algorithm it always takes 1.542 ms, for all different internal block sizes. How is this possible?

                  How do I measure this? I send two sweeps, one directly into soundcard channel one, the other through Bela and back into soundcard channel two. The difference between the arrival times of the impulse responses is my delay. In bypass mode I only measure the delay from [adc~] to [dac~], and I get the same results as in the paper: McPherson, Andrew P., "An Environment for Submillisecond-Latency Audio and Sensor Processing on BeagleBone Black", Audio Engineering Society Convention Paper, Warsaw, Poland, May 7-10 2015.
                  When I route the signal through my filter algorithm and subtract the delay from [adc~] and [dac~], I get my 1.542 ms. Why does it take so long with block sizes 16 and 32? It would be appropriate for block size 64 at a sampling frequency of 44100 Hz. Thanks for your help!

                  if you recompiled libpd with DEFDACBLKSIZE set to 64, then that's Pd's own buffer size. Setting smaller block sizes in Bela means that the libpd callback will effectively process data every 4 (with 16 samples per block) or 2 (with 32 samples per block) Bela audio callbacks (the render() function). This is the reason why I changed DEFDACBLKSIZE to 16 in the first place: to allow smaller latency. If you try to run Bela with a smaller block size than DEFDACBLKSIZE, you will also not be able to fully take advantage of the CPU: only 25% or 50% will be available to actually perform the computations in the two cases, respectively.
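The timing arithmetic in the posts above is easy to check with a plain calculation (this is not Bela code, just the duration formula): 16 samples at 44100 Hz is indeed about 0.363 ms, while 64 samples is about 1.45 ms, which is in the ballpark of the measured 1.437-1.542 ms once DEFDACBLKSIZE is effectively 64.

```cpp
#include <cassert>
#include <cmath>

// Duration of one audio block in milliseconds:
// blockSize samples at sampleRate samples/second.
double blockDurationMs(int blockSize, double sampleRate) {
    return 1000.0 * blockSize / sampleRate;
}
```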

                  Ok, so even with 16 samples per block the render function will be called every 4 Bela audio callbacks. That's why I get the same latency for block sizes of 16 and 64 samples. Is that correct?
                  My project uses BelaOnUrHead for head-tracking and filters the audio input with the appropriate HRTFs. So why is there a clearly noticeable delay between head position and filtering with a block size of 64 samples, while with a block size of 16 samples there is practically none, when the filter processing time is always the same?

                  internal block size 16 samples, audio block size 64 samples = almost no delay
                  internal block size 64 samples, audio block size 64 samples = clearly noticeable delay
                  ???

                  I just noticed that these lines:

                  		// send IMU values to Pd
                  		libpd_float("bno-yaw", ypr[0]);
                  		libpd_float("bno-pitch", ypr[1]);
                  		libpd_float("bno-roll", ypr[2]);

                  are adding one (Pd internal) block delay, because they are placed after the call to libpd_process_sys();, so their effect will only manifest itself at the next Pd block. Try moving them just before libpd_process_sys(), and see if it fixes it for you.
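The one-block control delay can be illustrated with a toy model (no libpd involved; ToyPatch and its methods are made up for illustration): the patch only samples its control values at process time, so a value set after the process call is not seen until the next block.

```cpp
#include <cassert>

// Toy model of the one-(Pd-)block control delay. The "patch" snapshots its
// control value once per block, when process() runs - analogous to how a
// value sent with libpd_float() only affects the next libpd_process_sys()
// call if it is sent after it.
struct ToyPatch {
    float control = 0.0f;       // last control value received
    float seenThisBlock = 0.0f; // what the DSP graph actually used this block
    void setControl(float v) { control = v; }
    void process() { seenThisBlock = control; }
};
```

Setting the control before process() makes it take effect in the same block; setting it after delays its effect by one block.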

                  No that didn't fix it. The delay is really bad. I don't have a recognizable delay with internal block size 16 and audio block size 128. Any other ideas?

                  you should verify whether this latency happens only with the BelaOnUrHead code or also with a simple passthrough example. In the former case, the issue is probably with the code in there.

                  ok, when I send a ramp of angles through the line tool to my external I get almost no delay. So the problem must lie within BelaOnUrHead. Pheew, how to debug this? I don't know where to start..

                  Upon closer inspection it would seem that the scheduling of the read happens much less often than advertised:

                  // Change this to change how often the BNO055 IMU is read (in Hz)
                  int readInterval = 100;
                  ...
                  // in setup()
                  ...
                  readIntervalSamples = context->audioSampleRate / readInterval;
                  ...
                  // in render()
                  ...
                  for(unsigned int tick = 0; tick < numberOfPdBlocksToProcess; ++tick)
                  {
                  ...
                           // this schedules the imu sensor readings
                           if(++readCount >= readIntervalSamples) {
                                readCount = 0;
                                Bela_scheduleAuxiliaryTask(i2cTask);
                            }
                  ...
                  }

                  So a read happens only once every 44100/100 Pd blocks, instead of 100 times per second.

                  Try changing the last few lines to:

                           if(readCount >= readIntervalSamples) {
                                readCount = 0;
                                Bela_scheduleAuxiliaryTask(i2cTask);
                            }
                          readCount += gLibpdBlockSize;
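To sanity-check the fix, a standalone simulation of one second of Pd blocks (the function and its names are made up; 44100 Hz, a 16-sample libpd block, and the 100 Hz target are taken from the thread) shows the corrected counter firing close to 100 times per second, whereas incrementing the counter by 1 per Pd block, as the original code effectively did, would fire only about 6 times per second.

```cpp
#include <cassert>

// Simulate the corrected scheduling logic over one second of audio.
// Returns how many times the auxiliary task would be scheduled.
int tasksPerSecond(int sampleRate, int libpdBlockSize, int readIntervalHz) {
    int readIntervalSamples = sampleRate / readIntervalHz;
    int readCount = 0;
    int fired = 0;
    int numBlocks = sampleRate / libpdBlockSize; // Pd blocks in one second
    for (int block = 0; block < numBlocks; ++block) {
        if (readCount >= readIntervalSamples) {
            readCount = 0;
            ++fired; // would call Bela_scheduleAuxiliaryTask(i2cTask) here
        }
        readCount += libpdBlockSize; // advance by samples, not by blocks
    }
    return fired;
}
```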

                  You are the man! Thanks a lot, that did the job! It is really great how you run this forum here!
                  All the best, Klemenz