The audio converters have a built-in latency due to the conversion technology in use (delta-sigma modulation). In the case of the audio converters used on Bela, this latency is 17 samples on the input and 21 samples on the output (see the "filter group delay" figures in the codec's datasheet). This latency is in addition to any buffering latency and is independent of the block size in use. The analog and digital I/O have no such added latency.
See more details here.
Therefore, when you write
digitalWriteOnce(context, n, digitalChannel, out);
audioWrite(context, n, audioChannel, out);
out is going to be propagated to the audio output 21 samples later than to the digital output.
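If you need the two outputs to line up, one option is to hold the digital value back by the codec's 21-sample output group delay before writing it. Below is a minimal sketch of such a single-channel delay line; the class name and structure are my own illustration, not part of the Bela API:

```cpp
#include <vector>
#include <cstddef>

// Fixed-length sample delay: feed it the value you were about to write
// to the digital output and write what it returns instead, so the
// digital edge leaves the device together with the audio edge.
class SampleDelay {
public:
    explicit SampleDelay(size_t delaySamples)
        : buf(delaySamples, 0.0f), idx(0) {}
    float process(float in) {
        if (buf.empty())
            return in; // zero delay: pass straight through
        float out = buf[idx];
        buf[idx] = in;
        idx = (idx + 1) % buf.size();
        return out;
    }
private:
    std::vector<float> buf;
    size_t idx;
};
```

Inside render() you would then keep a `SampleDelay delay(21);` at file scope and write something like `digitalWriteOnce(context, n, digitalChannel, delay.process(out) > 0.5f);` (names here are hypothetical placeholders).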
The fact that you are seeing a variability of 2-3 buffers of delay on the digital output is interesting: it should always be exactly 2 buffers ± 0.5 samples. If, however, you are running the analog channels at 22.05kHz, then it would be 2 buffers ± 1.5 (digital) samples: as the analog inputs are sampled half as often, there is one extra sample of jitter.
However, the fact that you are measuring the latency in integer numbers of buffers ("2-3") sounds suspicious to me: in case you are not doing it already, you should be doing frame-by-frame processing, which guarantees exactly 2 buffers of latency regardless of where in the buffer the edge happens. For instance, assuming you are using 44.1kHz for the analog inputs and writing a "1" to the digital channel pulseOutChannel when a negative edge is detected on edgeDetectorChannel, your code could look something like:
static float pastIn = 0;
for (unsigned int n = 0; n < context->analogFrames; ++n) {
    // negative edge: the input crosses from non-negative to negative
    if (analogRead(context, n, edgeDetectorChannel) < 0 && pastIn >= 0)
        digitalWriteOnce(context, n, pulseOutChannel, 1);
    else
        digitalWriteOnce(context, n, pulseOutChannel, 0);
    pastIn = analogRead(context, n, edgeDetectorChannel);
}
In this case you check frame by frame whether an edge has been detected at frame n. If it has, you write a pulse at that same frame n. This way you will always have a fixed latency of two buffer lengths, with a jitter of less than half a sample.