Distortion in Delayed Signal

I am writing a simple program, following the tutorial, for a mono delay on a live mic input. Everything before the delay works fine and the input signal is not distorted. However, once the live input runs through the delay, the delayed signal sounds distorted, almost as if it were downsampled. I currently have the dry signal on the left channel and the delayed signal on the right channel; the dry signal is fine. I have posted my code below. I would really appreciate it if anyone could spot the issue or suggest ways to improve it!
#include <Bela.h>
#include <algorithm>
#include <cmath>
#include <libraries/Gui/Gui.h>
#include <libraries/GuiController/GuiController.h>
#include <libraries/Scope/Scope.h>

unsigned int gAudioChannelNum;
float *gInBuffer;
float *gOutBuffer;

// Delay parameters
std::vector<float> gDelayBuffer;
unsigned int gWritePointer = 0;
unsigned int gOffset = 0;
unsigned int gReadPointer = 0;
float feedback = 0.0;
float gAmp = 5.0;

// Gui
Gui gui;
GuiController controller;

bool setup(BelaContext *context, void *userData)
{
	// Set up Gui
	gui.setup(context->projectName);
	controller.setup(&gui, "delayTest");
	controller.addSlider("Delay", 0.1, 0, 0.49, 0);    // Delay slider
	controller.addSlider("FeedBack", 0.1, 0, 0.99, 0); // Feedback slider

	// Allocate gDelayBuffer (0.5 s maximum delay)
	gDelayBuffer.resize(0.5 * context->audioSampleRate);

	// Set input and output buffer size
	gInBuffer = new float[context->audioFrames];
	gOutBuffer = new float[context->audioFrames];

	// Check input and output channel numbers
	if(context->audioInChannels != context->audioOutChannels){
		printf("Input not matching output.\n");
	}
	gAudioChannelNum = std::min(context->audioInChannels, context->audioOutChannels);

	return true;
}

void render(BelaContext *context, void *userData)
{
	// Read slider values
	float delay = controller.getSliderValue(0);
	feedback = controller.getSliderValue(1);
	int gOffset = delay * context->audioSampleRate;

	// Set delay buffer read pointer
	gReadPointer = (gWritePointer - gOffset + gDelayBuffer.size()) % gDelayBuffer.size();

	for(unsigned int n = 0; n < context->audioFrames; n++){
		// Processing
		// Read sample (taking the difference of the two inputs causes less noise)
		gInBuffer[n] = audioRead(context, n, 0) - audioRead(context, n, 1);
		// Input buffer
		float in = gInBuffer[n] * gAmp;
		// Apply delay
		float outDelay = gDelayBuffer[gReadPointer];               // Read from buffer
		gDelayBuffer[gWritePointer] = in + (outDelay * feedback);  // Write into buffer with feedback
		gWritePointer++;                                           // Increase write pointer
		if(gWritePointer >= gDelayBuffer.size()){
			gWritePointer -= gDelayBuffer.size();
		}
		gOutBuffer[n] = in;
		// Write audio to output
		audioWrite(context, n, 0, gOutBuffer[n]);
		audioWrite(context, n, 1, outDelay);
		// scope.log(gOutBuffer[n]);
	}
}

void cleanup(BelaContext *context, void *userData)
{
}
You seem to be updating the read pointer only once per block, whereas you should do it once per frame (i.e. inside the for loop).
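For instance, right after the write pointer is advanced inside the loop, something along these lines (an untested sketch, using the variable names from the posted code):

		// Also advance the read pointer once per frame and wrap it at the end
		// of the buffer, so the delayed signal is read out sample by sample
		gReadPointer++;
		if(gReadPointer >= gDelayBuffer.size()){
			gReadPointer -= gDelayBuffer.size();
		}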
giuliomoro Hi giuliomoro! Thank you so much for always giving solutions so quickly. The problem is solved. In the example, though, the gReadPointer for the delay buffer is updated once per block rather than inside the for loop. I wonder why that case is different.
In the example you refer to, the read pointer is set once outside the loop to account for any change in the delay time from the previous block. Then, inside the loop, it is incremented and wrapped every frame (lines 84:86). As a matter of fact, there seems to be no reason to have it as a global variable: it could just as well be a local one, as it gets set at the beginning of each render() based on the write pointer.
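For illustration, a rough sketch of that pattern with the read pointer kept local (untested; it reuses the globals from the code posted above, and the names readPointer and offset are just placeholders):

void render(BelaContext *context, void *userData)
{
	float delay = controller.getSliderValue(0);
	feedback = controller.getSliderValue(1);
	unsigned int offset = delay * context->audioSampleRate;

	// Recompute the read position from the write pointer once per block,
	// so a change of the delay slider takes effect on the next block
	unsigned int readPointer = (gWritePointer - offset + gDelayBuffer.size()) % gDelayBuffer.size();

	for(unsigned int n = 0; n < context->audioFrames; n++){
		float in = (audioRead(context, n, 0) - audioRead(context, n, 1)) * gAmp;

		float outDelay = gDelayBuffer[readPointer];              // Read the delayed sample
		gDelayBuffer[gWritePointer] = in + outDelay * feedback;  // Write input plus feedback

		// Increment and wrap both pointers every frame
		readPointer = (readPointer + 1) % gDelayBuffer.size();
		gWritePointer = (gWritePointer + 1) % gDelayBuffer.size();

		audioWrite(context, n, 0, in);        // Dry signal on the left
		audioWrite(context, n, 1, outDelay);  // Delayed signal on the right
	}
}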
Thank you very much! I get it now. Really appreciate your response.