ok this works (the heavy part again)!! thanks.

however as soon as i uncomment these lines

if(!gBcf.setup((BelaContext*)&context, fifoFactor))
{
	fprintf(stderr, "Error: unable to initialise BelaContextFifo\n");
	return false;
}

and compile i get:

Running project ...
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted
Makefile:579: recipe for target 'runide' failed
make: *** [runide] Error 134
Bela stopped
root@bela ~/Bela#	

with these lines commented out, the heavy patch works fine. but if you say i need those five lines, that is not worth much :-)

you need those lines for the lv2host part to work. Send the whole render.cpp file and I will try to track this down.

Ok I tested the code I posted above and fixed it. This should work:

#include <Bela.h>
#include <BelaContextFifo.h>
#include <unistd.h> // for usleep()

BelaContextFifo gBcf;
double gBlockDurationMs;

void longRender(BelaContext* context, void* arg)
{
	// code here runs at "long" blocksize
}

void longThread(void*)
{
	while(!gShouldStop)
	{
		BelaContext* context = gBcf.pop(BelaContextFifo::kToLong, gBlockDurationMs * 2);
		if(context)
		{
			// ((InternalBelaContext*)context)->audioFramesElapsed = audioFramesElapsed; // keep track of elapsed samples if your longRender needs it
			longRender(context, NULL);
			// audioFramesElapsed += context->audioFrames;
			gBcf.push(BelaContextFifo::kToShort, context);
		} else {
			usleep(1000); // TODO: this  should not be needed, given how the timeout in pop() is for a reasonable amount of time
		}
	}
}

AuxiliaryTask longThreadTask;
bool setup(BelaContext* context, void* userData)
{
	int fifoFactor = 8; // e.g. 16 audio frames per block in render() become 128 per block in longRender()
	if(!gBcf.setup(context, fifoFactor))
	{
		fprintf(stderr, "Error: unable to initialise BelaContextFifo\n");
		return false;
	}
	longThreadTask = Bela_createAuxiliaryTask(longThread, 94, "long-thread", NULL);
	Bela_scheduleAuxiliaryTask(longThreadTask);
	gBlockDurationMs = context->audioFrames * fifoFactor / context->audioSampleRate * 1000; // duration of one "long" block, e.g. 16 * 8 / 44100 * 1000 ≈ 2.9ms
	
	return true;
}

void render(BelaContext* context, void* userData)
{
// code here runs at "short" blocksize

// the below sends audio to, and receives audio from, the thread running at the "long" blocksize
/// send to the "long" render
    gBcf.push(BelaContextFifo::kToLong, context);
/// receive from the "long" render
    const InternalBelaContext* rctx = (InternalBelaContext*)gBcf.pop(BelaContextFifo::kToShort);

    if(rctx) {
        BelaContextSplitter::contextCopyData(rctx, (InternalBelaContext*)context);
    }
}

void cleanup(BelaContext* context, void* userData)
{

}

@lokki namely, the changes are:

if(!gBcf.setup(context, fifoFactor))

(the earlier suggestion of simply replacing gContext with context was incorrect)

and

Bela_scheduleAuxiliaryTask(longThreadTask);

(the task was not being scheduled, so the longRender() never ran).

@lokki the above is still just a generic version. In your case, where you expect that for long periods of time there will be nothing sent to the longThread through the gBcf, you can save some CPU by increasing the timeout when reading the BelaContextFifo from the longThread():
replace

		BelaContext* context = gBcf.pop(BelaContextFifo::kToLong, gBlockDurationMs * 2);

with

		BelaContext* context = gBcf.pop(BelaContextFifo::kToLong, 100); // wait up to 100ms

and

			usleep(1000); // TODO: this  should not be needed, given how the timeout in pop() is for a reasonable amount of time

with

			usleep(10000); // sleep 10ms more if no valid context was received
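
Putting those two changes together, the longThread() loop from above would look like this (same code as before, just with the longer timeout and sleep):

void longThread(void*)
{
	while(!gShouldStop)
	{
		// wait up to 100ms for a block of audio from the audio thread
		BelaContext* context = gBcf.pop(BelaContextFifo::kToLong, 100);
		if(context)
		{
			longRender(context, NULL);
			gBcf.push(BelaContextFifo::kToShort, context);
		} else {
			usleep(10000); // sleep 10ms more if no valid context was received
		}
	}
}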

perfect! that worked.

now...

i have this in my render()

void render(BelaContext *context, void *userData)
{
	int num;
       
	for(unsigned int port = 0; port < midi.size(); ++port){
		while((num = midi[port]->getParser()->numAvailableMessages()) > 0){
		 
			static MidiChannelMessage message;
			message = midi[port]->getParser()->getNextChannelMessage();
			switch(message.getType()){
			case kmmNoteOn: {
				//message.prettyPrint();
				int noteNumber = message.getDataByte(0);
				int velocity = message.getDataByte(1);
				int channel = message.getChannel();
				if (velocity > 0) gIsNoteOn = 1;
				gNote = noteNumber;
				// rt_printf("message: noteNumber: %f, velocity: %f, channel: %f\n", noteNumber, velocity, channel);
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmNoteOn], 0, "fff",
						(float)noteNumber, (float)velocity, (float)channel+1);
				break;
			}
			case kmmNoteOff: {
				/* PureData does not seem to handle noteoff messages as per the MIDI specs,
				 * so that the noteoff velocity is ignored. Here we convert them to noteon
				 * with a velocity of 0.
				 */
				int noteNumber = message.getDataByte(0);
				// int velocity = message.getDataByte(1); // would be ignored by Pd
				int channel = message.getChannel();
				// note we are sending the below to hvHashes[kmmNoteOn] !!
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmNoteOn], 0, "fff",
						(float)noteNumber, (float)0, (float)channel+1);
				break;
			}
			case kmmControlChange: {
				int channel = message.getChannel();
				int controller = message.getDataByte(0);
				int value = message.getDataByte(1);
				gControl = controller;
				gCCVal = value;
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmControlChange], 0, "fff",
						(float)value, (float)controller, (float)channel+1);
				break;
			}
			case kmmProgramChange: {
				int channel = message.getChannel();
				int program = message.getDataByte(0);
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmProgramChange], 0, "ff",
						(float)program, (float)channel+1);
				break;
			}
			case kmmPolyphonicKeyPressure: {
				//TODO: untested, I do not have anything with polyTouch... who does, anyhow?
				int channel = message.getChannel();
				int pitch = message.getDataByte(0);
				int value = message.getDataByte(1);
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmPolyphonicKeyPressure], 0, "fff",
						(float)channel+1, (float)pitch, (float)value);
				break;
			}
			case kmmChannelPressure:
			{
				int channel = message.getChannel();
				int value = message.getDataByte(0);
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmChannelPressure], 0, "ff",
						(float)value, (float)channel+1);
				break;
			}
			case kmmPitchBend:
			{
				int channel = message.getChannel();
				int value = ((message.getDataByte(1) << 7) | message.getDataByte(0));
				hv_sendMessageToReceiverV(gHeavyContext, hvMidiHashes[kmmPitchBend], 0, "ff",
						(float)value, (float)channel+1);
				break;
			}
			case kmmSystem:
			case kmmNone:
			case kmmAny:
				break;
			}
		}
	}

	// De-interleave the data
	if(gHvInputBuffers != NULL) {
		for(unsigned int n = 0; n < context->audioFrames; n++) {
			for(unsigned int ch = 0; ch < gHvInputChannels; ch++) {
				if(ch >= gAudioChannelsInUse + gAnalogChannelsInUse) {
					// THESE ARE PARAMETER INPUT 'CHANNELS' USED FOR ROUTING
					// 'sensor' outputs from routing channels of dac~ are passed through here
					// these could be also digital channels (handled by the dcm)
					// or parameter channels used for routing (currently unhandled)
					break;
				} else {
					// If more than 2 ADC inputs are used in the pd patch, route the analog inputs
					// i.e. ADC3->analogIn0 etc. (first two are always audio inputs)
					if(ch >= gAudioChannelsInUse)
					{
						unsigned int analogCh = ch - gAudioChannelsInUse;
						if(analogCh < context->analogInChannels)
						{
							int m = n;
							float mIn = analogReadNI(context, m, analogCh);
							gHvInputBuffers[ch * context->audioFrames + n] = mIn;
						}
					} else {
						if(ch < context->audioInChannels)
							gHvInputBuffers[ch * context->audioFrames + n] = audioReadNI(context, n, ch);
					}
				}
			}
		}
	}

	if(pdMultiplexerActive){
		static int lastMuxerUpdate = 0;
		if(++lastMuxerUpdate == multiplexerArraySize){
			lastMuxerUpdate = 0;
			memcpy(hv_table_getBuffer(gHeavyContext, multiplexerTableHash), (float *const)context->multiplexerAnalogIn, multiplexerArraySize * sizeof(float));
		}
	}


	// Bela digital in
	if(gDigitalEnabled)
	{
		// note: in multiple places below we assume that the number of digital frames is same as number of audio
		// Bela digital in at message-rate
		dcm.processInput(context->digital, context->digitalFrames);
	
		// Bela digital in at signal-rate
		if(gDigitalSigInChannelsInUse > 0)
		{
			unsigned int j, k;
			float *p0, *p1;
			const unsigned int gLibpdBlockSize = context->audioFrames;
			const unsigned int  audioFrameBase = 0;
			float* gInBuf = gHvInputBuffers;
			// block below copy/pasted from libpd, except
			// 16 has been replaced with gDigitalSigInChannelsInUse
			for (j = 0, p0 = gInBuf; j < gLibpdBlockSize; j++, p0++) {
				unsigned int digitalFrame = audioFrameBase + j;
				for (k = 0, p1 = p0 + gLibpdBlockSize * gFirstDigitalChannel;
						k < gDigitalSigInChannelsInUse; ++k, p1 += gLibpdBlockSize) {
					if(dcm.isSignalRate(k) && dcm.isInput(k)){ // only process input channels that are handled at signal rate
						*p1 = digitalRead(context, digitalFrame, k);
					}
				}
			}
		}
	}

	// replacement for bang~ object
	//hv_sendMessageToReceiverV(gHeavyContext, "bela_bang", 0.0f, "b");
	
	// heavy audio callback
	hv_processInline(gHeavyContext, gHvInputBuffers, gHvOutputBuffers, context->audioFrames);
	/*
	for(int n = 0; n < context->audioFrames*gHvOutputChannels; ++n)
	{
		printf("%.3f, ", gHvOutputBuffers[n]);
		if(n % context->audioFrames == context->audioFrames - 1)
			printf("\n");
	}
	*/

	// Bela digital out
	if(gDigitalEnabled)
	{
		// Bela digital out at signal-rate
		if(gDigitalSigOutChannelsInUse > 0)
		{
				unsigned int j, k;
				float *p0, *p1;
				const unsigned int gLibpdBlockSize = context->audioFrames;
				const unsigned int  audioFrameBase = 0;
				float* gOutBuf = gHvOutputBuffers;
				// block below copy/pasted from libpd, except
				// context->digitalChannels has been replaced with gDigitalSigOutChannelsInUse
				for (j = 0, p0 = gOutBuf; j < gLibpdBlockSize; ++j, ++p0) {
					unsigned int digitalFrame = (audioFrameBase + j);
					for (k = 0, p1 = p0  + gLibpdBlockSize * gFirstDigitalChannel;
							k < gDigitalSigOutChannelsInUse; k++, p1 += gLibpdBlockSize) {
						if(dcm.isSignalRate(k) && dcm.isOutput(k)){ // only process output channels that are handled at signal rate
							digitalWriteOnce(context, digitalFrame, k, *p1 > 0.5);
						}
					}
				}
		}
		// Bela digital out at message-rate
		dcm.processOutput(context->digital, context->digitalFrames);
	}
	
	// Bela scope
	if(gScopeChannelsInUse > 0)
	{
		unsigned int j, k;
		float *p0, *p1;
		const unsigned int gLibpdBlockSize = context->audioFrames;
		float* gOutBuf = gHvOutputBuffers;

		// block below copy/pasted from libpd
		for (j = 0, p0 = gOutBuf; j < gLibpdBlockSize; ++j, ++p0) {
			for (k = 0, p1 = p0  + gLibpdBlockSize * gFirstScopeChannel; k < gScopeChannelsInUse; k++, p1 += gLibpdBlockSize) {
				gScopeOut[k] = *p1;
			}
			scope->log(gScopeOut);
		}
	}

	// Interleave the output data
	if(gHvOutputBuffers != NULL) {
		for(unsigned int n = 0; n < context->audioFrames; n++) {
			for(unsigned int ch = 0; ch < gHvOutputChannels; ch++) {
				if(ch >= gAudioChannelsInUse + gAnalogChannelsInUse) {
					// THESE ARE SENSOR OUTPUT 'CHANNELS' USED FOR ROUTING
					// they are the content of the 'sensor output' dac~ channels
				} else {
					if(ch >= gAudioChannelsInUse)	{
						int m = n;
						unsigned int analogCh = ch - gAudioChannelsInUse;
						if(analogCh < context->analogOutChannels)
							analogWriteOnceNI(context, m, analogCh, gHvOutputBuffers[ch*context->audioFrames + n]);
					} else {
						if(ch < context->audioOutChannels)
							audioWriteNI(context, n, ch, gHvOutputBuffers[ch * context->audioFrames + n]);
					}
				}
			}
		}
	}

}

that is more or less the generic heavy render part; i just changed the audio parts to the non-interleaved (NI) calls like you told me, and i write to some variables in the midi section to use in the lv2host long thread. while trying to just put an "if" around the rest and run these lines:

gBcf.push(BelaContextFifo::kToLong, context);
/// receive from the "long" render
    const InternalBelaContext* rctx = (InternalBelaContext*)gBcf.pop(BelaContextFifo::kToShort);

    if(rctx) {
        BelaContextSplitter::contextCopyData(rctx, (InternalBelaContext*)context);
    }

in the corresponding else, i realised two things:

  1. the midi part is of course still communicating with heavy, so i still adjust parameters in my heavy patch. i will rewrite the midi part into the else section as well, so that midi to heavy is also muted when switching

  2. my approach of just putting an if around everything else does not work :-) (didn't think so, but one can try): the audio is simply muted in my case. so audio input and output need to stay in the chain, i guess. which part do i need to leave in the above render if i just want audio in from the regular (short) render to be passed to the long render, and audio out from the long render back to the short one?

sorry if this is all obvious, and thanks as always. i feel i am getting closer to my desired result.

ok, i had the else part in the wrong spot somehow. now i placed the if statement around everything in render(), added the midi part and your lines from above into the else part, and tried to run it. if i set my if to false, so that the long thread should get audio, i get a segmentation fault.

so if part is the whole render and else part looks like this:

else {
	int num;

	for(unsigned int port = 0; port < midi.size(); ++port){
		while((num = midi[port]->getParser()->numAvailableMessages()) > 0){
			static MidiChannelMessage message;
			message = midi[port]->getParser()->getNextChannelMessage();
			switch(message.getType()){
			case kmmNoteOn: {
				//message.prettyPrint();
				int noteNumber = message.getDataByte(0);
				int velocity = message.getDataByte(1);
				int channel = message.getChannel();
				if (velocity > 0) gIsNoteOn = 1;
				gNote = noteNumber;
				// rt_printf("message: noteNumber: %f, velocity: %f, channel: %f\n", noteNumber, velocity, channel);
				break;
			}
			case kmmNoteOff: {
				/* PureData does not seem to handle noteoff messages as per the MIDI specs,
				 * so that the noteoff velocity is ignored. Here we convert them to noteon
				 * with a velocity of 0.
				 */
				int noteNumber = message.getDataByte(0);
				// int velocity = message.getDataByte(1); // would be ignored by Pd
				int channel = message.getChannel();
				// note we are sending the below to hvHashes[kmmNoteOn] !!
				break;
			}
			case kmmControlChange: {
				int channel = message.getChannel();
				int controller = message.getDataByte(0);
				int value = message.getDataByte(1);
				gControl = controller;
				gCCVal = value;
				break;
			}
			}
		}
	}

	gBcf.push(BelaContextFifo::kToLong, context);
	/// receive from the "long" render
	const InternalBelaContext* rctx = (InternalBelaContext*)gBcf.pop(BelaContextFifo::kToShort);

	if(rctx) {
		BelaContextSplitter::contextCopyData(rctx, (InternalBelaContext*)context);
	}
}

should that run?

    It should, yes. What do you have in the longRender()? Can you try to put a return; on its first line, so that none of the code in there runs at all? Or send me the whole project by mail (not just the render file).

    lokki ok, i had the else part in the wrong spot somehow.

    a good trick to avoid that, and also make the code more readable and maintainable, is to rename the heavy render() to something like heavyRender() and do

    void render(BelaContext* context, void* userData)
    {
      if(shouldDoHeavy) {
        heavyRender(context, userData);
      } else {
      /// all the stuff you have in the latest post, or put that in a `lv2Render`, and call it with lv2Render(context, userData)
      }
    }
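
    For example, a minimal sketch of that split (here lv2Render() is just a name for the code in your latest post, and gShouldDoHeavy stands for however you decide to switch between the two, e.g. reading a digital input):

    bool gShouldDoHeavy = true;

    void lv2Render(BelaContext* context, void* userData)
    {
    	// MIDI handling that only updates gNote / gControl / gCCVal goes here, then:
    	gBcf.push(BelaContextFifo::kToLong, context);
    	const InternalBelaContext* rctx = (InternalBelaContext*)gBcf.pop(BelaContextFifo::kToShort);
    	if(rctx)
    		BelaContextSplitter::contextCopyData(rctx, (InternalBelaContext*)context);
    }

    void render(BelaContext* context, void* userData)
    {
    	if(gShouldDoHeavy)
    		heavyRender(context, userData);
    	else
    		lv2Render(context, userData);
    }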

    this is in longRender() (in a separate lv2host project this runs just fine):

    void longRender(BelaContext* context, void* arg)
    {
    	// code here runs at "long" blocksize

    	// static bool pluginsOn[3] = {true, true, true};

    	// set inputs and outputs
    	const float* inputs[context->audioInChannels];
    	float* outputs[context->audioOutChannels];
    	for(unsigned int ch = 0; ch < context->audioInChannels; ++ch)
    		inputs[ch] = (float*)&context->audioIn[context->audioFrames * ch];
    	for(unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
    		outputs[ch] = &context->audioOut[context->audioFrames * ch];

    	// do the actual processing on the buffers specified above
    	gLv2Host.render(context->audioFrames, inputs, outputs);

    	if (gCCVal != oldControl) {
    		oldControl = gCCVal;
    		switch (gControl) {
    		case 10: {
    			gLv2Host.setPort(6, 2, float(gCCVal / 127.0));
    			break;
    		}
    		}
    	}
    	if (gIsNoteOn == 1) {
    		// logic to switch between non "tonal" and "semitonal" scales
    		if ((scale == whole) | (scale == dim)) scale = chromatic;
    		switch (gNote) {
    		case 20: {
    			if (!echo) {
    				gLv2Host.setPort(5, 1, 35);
    				gLv2Host.setPort(5, 3, 35);
    				echo = 1;
    			} else {
    				gLv2Host.setPort(5, 1, 0);
    				gLv2Host.setPort(5, 3, 0);
    				echo = 0;
    			}
    			break;
    		}
    		case 22: {
    			if (!octave) {
    				gLv2Host.setPort(1, 3, 0.8);
    				gLv2Host.setPort(1, 4, 0.8);
    				octave = 1;
    			} else {
    				gLv2Host.setPort(1, 3, 0);
    				gLv2Host.setPort(1, 4, 0);
    				octave = 0;
    			}
    			break;
    		}
    		case 23: {
    			// pull the plug
    			if (!powercut) {
    				gLv2Host.setPort(3, 2, 1);
    				powercut = 1;
    			} else {
    				gLv2Host.setPort(3, 2, 0);
    				powercut = 0;
    			}
    			break;
    		}
    		case 36: {
    			// set scale, in key of c, no offset
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, n + 12, scale[n]);
    				// rt_printf("value%d: %d\n", n, scale[n]);
    			}
    			break;
    		}
    		case 37: {
    			// set scale, in key of c#, offset of 1 halftone
    			if (scale == chromatic) scale = whole;
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+1)%12 + 12), scale[n]);
    				// rt_printf("value%d: %d\n", (n+1)%12, scale[n]);
    			}
    			break;
    		}
    		case 38: {
    			// set scale, in key of d, offset of 2 halftones
    			if (scale == chromatic) scale = whole;
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+2)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 39: {
    			scale = chromatic;
    			break;
    		}
    		case 40: {
    			// set scale, in key of d#, offset of 3 halftones
    			if (scale == chromatic) scale = dim;
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+3)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 41: {
    			// set scale, in key of e, offset of 4 halftones
    			if (scale == chromatic) scale = dim;
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+4)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 42: {
    			// set scale, in key of f, offset of 5 halftones
    			if (scale == chromatic) scale = dim;
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+5)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 43: {
    			scale = major;
    			break;
    		}
    		case 44: {
    			// set scale, in key of f#, offset of 6 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+6)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 45: {
    			// set scale, in key of g, offset of 7 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+7)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 46: {
    			// set scale, in key of g#, offset of 8 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+8)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 47: {
    			scale = minor;
    			break;
    		}
    		case 48: {
    			// set scale, in key of a, offset of 9 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+9)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 49: {
    			// set scale, in key of a#, offset of 10 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+10)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 50: {
    			// set scale, in key of b, offset of 11 halftones
    			for(unsigned int n = 0; n < 12; n++){
    				gLv2Host.setPort(0, ((n+11)%12 + 12), scale[n]);
    			}
    			break;
    		}
    		case 51: {
    			scale = penta;
    			break;
    		}
    		}
    		gIsNoteOn = 0;
    	}
    
    }

    I cannot test with all the plugins you have and it is not failing for me at the moment, but you should definitely apply this patch:

    --- a/render.cpp
    +++ b/render.cpp
    @@ -664,7 +664,8 @@ getSamples(fileName3, table, channel, startFrame, lastFrame);
     bool setup(BelaContext *context, void *userData)       {
    
                    int fifoFactor = 8; // 16 to 128
    -        if(!gBcf.setup(context, fifoFactor))
    +       BelaContext* longContext = gBcf.setup(context, fifoFactor);
    +        if(!longContext)
             {
                 fprintf(stderr, "Error: unable to initialise BelaContextFifo\n");
                 return false;
    @@ -685,8 +686,8 @@ bool setup(BelaContext *context, void *userData)    {
                    fprintf(stderr, "Using Lv2Host requires non-interleaved buffers and uniform sample rate\n");
                    return false;
            }
    -       if(!gLv2Host.setup(context->audioSampleRate, context->audioFrames,
    -                               context->audioInChannels, context->audioOutChannels))
    +       if(!gLv2Host.setup(longContext->audioSampleRate, longContext->audioFrames,
    +                               longContext->audioInChannels, longContext->audioOutChannels))
            {
                    fprintf(stderr, "Unable to create Lv2 host\n");
                    return false;

    this is because you are currently initializing the maxBlockSize of the gLv2Host object with the "small" block size, while you'd actually want to use the "long" one. BelaContextFifo::setup() returns, on success, a new BelaContext*, whose fields should be used to initialise those objects that will run in the longRender().
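
    For clarity, this is a sketch of how the relevant part of setup() then looks, combining the patch above with the generic code from earlier in the thread (your actual setup() of course has more in it, e.g. the heavy and MIDI initialisation):

    bool setup(BelaContext* context, void* userData)
    {
    	int fifoFactor = 8;
    	// BelaContextFifo::setup() returns a pointer to a context with the "long" blocksize
    	BelaContext* longContext = gBcf.setup(context, fifoFactor);
    	if(!longContext)
    	{
    		fprintf(stderr, "Error: unable to initialise BelaContextFifo\n");
    		return false;
    	}
    	// initialise the Lv2Host with the parameters of the "long" context, not the "short" one
    	if(!gLv2Host.setup(longContext->audioSampleRate, longContext->audioFrames,
    			longContext->audioInChannels, longContext->audioOutChannels))
    	{
    		fprintf(stderr, "Unable to create Lv2 host\n");
    		return false;
    	}
    	longThreadTask = Bela_createAuxiliaryTask(longThread, 94, "long-thread", NULL);
    	Bela_scheduleAuxiliaryTask(longThreadTask);
    	gBlockDurationMs = context->audioFrames * fifoFactor / context->audioSampleRate * 1000;
    	return true;
    }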

    Is there an easy way to install http://ssj71.github.io/infamousPlugins/plugs.html#powercut ? The others I installed with apt-get.

      @lokki also, you may be getting a segfault if you have no MIDI device connected when running the heavy side of things.

      managed to install powercut as well. Confirmed segfault with your code, and confirmed that the fix above fixes it.

      success!! thanks so much, this is really wonderful. it works with an attached switch as well, i can switch the two on the fly and the change is instantaneous. that is just really great.
      for now i used a latching switch but will probably reprogram it with a momentary one and some debouncing, so as to not switch patches at very short intervals due to switching noise.

      future plans might include creating different lv2 plugin sets to load as a chain. let's see how far i can go before i run out of CPU (even when switching renders) but for the moment i just have a big smile on my face :-)

        lokki ah yes, sorry about that...

        I was just concerned about the GUI dependencies, but then I saw that its cmake files are smart enough to ignore them and build without GUI if the dependencies are not there.

        lokki i have compiled a myriad of plugins for bela now :-) all the TAP stuff as lv2 (the MOD team did a port to lv2), steve harris, mda etc... still going thru them; most of them run on the Bela, some have unsupported ports....

        Great, does the default series patching of the plugins seem to work ok?

        lokki future plans might include creating different lv2 plugin sets to load as a chain.

        that would be a matter of having several objects of type Lv2Host each of which you initialize at setup() with their own effect chain, and you call the render() method only for the chain(s) you want. Bypassing individual effects should also be possible with some relatively straightforward changes to the Lv2Host.
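
        As a rough sketch of what that could look like (the names gChainA/gChainB/gActiveChain are placeholders; each chain gets its own setup() and add() calls in setup(), using the "long" context's parameters as above):

        Lv2Host gChainA; // one set of plugins
        Lv2Host gChainB; // a different set of plugins
        int gActiveChain = 0; // switch this from MIDI or a digital input

        // in longRender(), after filling the inputs/outputs arrays, render only the active chain
        void renderActiveChain(BelaContext* context, const float** inputs, float** outputs)
        {
        	if(0 == gActiveChain)
        		gChainA.render(context->audioFrames, inputs, outputs);
        	else
        		gChainB.render(context->audioFrames, inputs, outputs);
        }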

          giuliomoro Great, does the default series patching of the plugins seem to work ok?

          yeah it is quite easy, one thing that is a bit annoying is that you have to rewrite all the port assignments if you change the order in the chain, but some clever coding could maybe make that easier.

          about multiple chains i figured that much, i am getting the hang of it 🙂

            lokki but some clever coding could maybe make that easier.

            Yes, use a variable to contain the plugin's position in the chain. The return value of Lv2Host::add() is an int that represents the position of the plugin in the chain (which will be exactly equal to the number of times you have called add(), minus 1). You can use that return value to refer to the plugin later on.
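
            For example (just a sketch; gScalePos/gDelayPos are placeholder names and the add() arguments are whatever you already pass for each plugin):

            // store the chain positions returned by add() instead of hard-coding slot numbers
            int gScalePos = -1;
            int gDelayPos = -1;
            // in setup(): gScalePos = gLv2Host.add(/* scale plugin */); ... gDelayPos = gLv2Host.add(/* delay plugin */);
            // later, port assignments keep working even if you reorder the add() calls:
            //   gLv2Host.setPort(gDelayPos, 1, 35);
            //   gLv2Host.setPort(gDelayPos, 3, 35);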

            4 days later

            i have a follow up question:

            while testing out plugins i came across mda/talkbox (i also tried mda/vocoder, but that did not work, sound is only coming thru at VERY low level). this plugin works fine when i use it first in the chain: one input is the modulator, the other the carrier. however, if i would like to add some plugins before that, say mono plugins, that won't work. or is there a way to specify a channel for a mono plugin and leave the other channel untouched until it hits a stereo plugin?

            i.e.

            l       &      r input
            |              |
            |              |
            plugin1 mono   |
            |              |
            |              |
            plugin2 mono   | 
            |              |
            |              |
            plugin3   stereo
            |              | (onwards in stereo)

            or alternatively, a way to have parallel plugins that handle left and right independently and at some stage they come together for stereo plugin processing?

              lokki mda/vocoder, but that did not work, sound is only coming thru at VERY low level).

              Maybe something wrong with how the sidechain is patched.

              lokki there a way to specify a channel for a mono plugin and leave the other channel untouched until it hits a stereo plugin?

              Not at the moment I think. I will have a look at the existing API again. It shouldn't be hard to add.