is there a way to decrease the number of incoming midi messages to a heavy program? i get some underruns because i send too many messages. they come from a quneo and i don't need the time granularity i have now. it would be great to reduce all cc messages to be sent only every 20ms, for example. doing this in the pd/heavy patch does not help, it seems just getting the data (and not doing anything with it) causes the underruns.

or is the actual linux midi implementation the culprit? does it by itself cause underruns? in that case i would have to look into external midi thinning solutions (arduino, axoloti, etc.)

I had this issue with something I was working on, and as far as I could tell, I had to 'fix it at source'.

one possible thought:
iirc (*), the midi is initially read in a lower-priority thread as it's I/O, before being passed to the audio thread.
perhaps it could be thinned there, as it comes in.
it would have to be selective thinning, i.e. you don't want to thin note messages, only continuous messages like cc/aftertouch (rough sketch below).

(*) it's been a long time since I looked at this code, so it may have changed a lot 😉
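
to illustrate the idea: a minimal sketch of rate-limited, selective thinning on the raw status bytes as they come in. this is not actual Bela code, the shouldForward() helper, rateLimitMs and the way you get nowMs are all made up, it just shows the shape of it: note messages always pass, continuous ones get dropped if they arrive too soon after the last one.

    #include <cstdint>

    // hypothetical helper: decide whether to forward an incoming MIDI message.
    // note on/off (and anything else that is not continuous) always passes;
    // CC and channel pressure are limited to one message per (channel, controller)
    // every rateLimitMs milliseconds.
    static uint32_t lastSentMs[16][129]; // per channel: CC 0-127, slot 128 = channel pressure

    bool shouldForward(uint8_t status, uint8_t data1, uint32_t nowMs, uint32_t rateLimitMs = 20)
    {
        uint8_t type = status & 0xF0;
        uint8_t channel = status & 0x0F;
        if(type != 0xB0 && type != 0xD0) // not a CC and not channel pressure: never thin
            return true;
        unsigned int idx = (type == 0xB0) ? data1 : 128;
        if(nowMs - lastSentMs[channel][idx] < rateLimitMs)
            return false; // too soon since the last forwarded one: drop it
        lastSentMs[channel][idx] = nowMs;
        return true;
    }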

    thanks for "confirming". i think this should be adressed somehow on bela, so that incoming midi never causes underruns in audio. i know this is probably not an easy task but midi input is fundamental for quite some projects...

    ok, i've found a workaround for now. if i set the block size to 16 (previously 8) i get no more underruns.

    @giuliomoro is this a known issue? is a block size of less than 16 with midi a no-go? the patch sits at 50% CPU at block size 8 and at about 38-40% with 16.

    or is this just the cpu power needed for the midi messages, which only becomes an issue at the decreased block size?

    the underlying issue (high cpu usage on many incoming midi messages) persists of course. can this be tackled somehow?

    the render() function in the heavy wrapper goes through all the available messages:

     while((num = midi[port]->getParser()->numAvailableMessages()) > 0)
     {
         ...
         hv_sendMessageToReceiverV(...);
         ...
     }

    Now, I have two guesses as to where the performance sink is. One possibility is that hv_sendMessageToReceiverV() is just very expensive. The other is that it is not that expensive, but several messages have stacked up in the parser, and therefore numAvailableMessages() returns a large value, which in turn means that the loop is executed so many times that it eventually causes a dropout. I guess the actual cause is a combination of the two. When running at 8 samples per block, the audio callback runs every 180us. When using 50% of the CPU, that leaves chunks of about 90us for the OS to do all its things AND to read MIDI inputs. Consider that a thread may take anywhere between 20us and 100us, or more, to wake up. If the thread that runs Midi::readInputLoop() (which runs at Linux priority, not Xenomai priority, because it performs USB I/O) does not get to run for a while, and in the meantime several MIDI messages have stacked up in the input buffer, these may become visible to the audio thread (via numAvailableMessages()) all at once.

    One thing you could do is to check what value the first call to numAvailableMessages() returns when you start getting dropouts. Then, try and comment out the hv_sendMessageToReceiverV(...); call (at least for ControlChanges, which I assume are the ones that are causing the issue for you): are the dropouts still there?

    I will have a look on the scope to see how expensive the call to hv_sendMessageToReceiverV(...) is, but it would be good if you could figure out how many numAvailableMessages() are leading you to dropouts.
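
    For reference, temporarily dropping the ControlChange messages inside that while() loop could look roughly like this. Take it as a sketch: I am assuming here that the wrapper pulls messages out with Bela's getNextChannelMessage(), so check it against the actual code in your render.cpp:

     while((num = midi[port]->getParser()->numAvailableMessages()) > 0)
     {
         MidiChannelMessage message = midi[port]->getParser()->getNextChannelMessage();
         if(message.getType() == kmmControlChange)
             continue; // temporarily drop CCs: do not forward them to heavy
         // ... forward all other message types via hv_sendMessageToReceiverV() as before ...
     }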

    thetechnobear perhaps it could be thinned there, as it comes in.

    yes it's a possibility. I figure for large blocksizes and low CPU usage by the audio thread, the non-rt loop could actually run several times between calls to render(), at which point thinning would not be fully effective (i.e. render() may still receive some duplicates), but then in that case it shouldn't be a big problem.

      giuliomoro but it would be good if you could figure out how many numAvailableMessages() are leading you to dropouts.

      sure. i will need some assistance, i am afraid...🙁 where is the underrun message in the render.cpp file? (so i can make sure i only send the value of "num" when there is an underrun.)

      also i declared a variable "firstcall" which i reset to zero outside the while loop, to make sure i only save "num" on the first call:

       if (!firstcall) {
           data = num;
           firstcall = 1;
       }

      does that make sense?

      it's not actually that straightforward, because the underrun is detected in the backend. One solution is to use a global variable to record the latest count of available messages and then print it out if there was an underrun.
      Edit core/PRU.cpp (on the board) as follows (add the lines with a leading +, remove the line with a leading -):

      diff --git a/core/PRU.cpp b/core/PRU.cpp
      index 65f74f95..10d484d7 100644
      --- a/core/PRU.cpp
      +++ b/core/PRU.cpp
      @@ -854,6 +854,7 @@ int PRU::testPruError()
              }
       }
      
      +unsigned int gGlobal;
       // Main loop to read and write data from/to PRU
       void PRU::loop(void *userData, void(*render)(BelaContext*, void*), bool highPerformanceMode)
       {
      @@ -1450,7 +1451,7 @@ void PRU::loop(void *userData, void(*render)(BelaContext*, void*), bool highPerf
                                      // don't print a warning if we are stopping
                                      if(!gShouldStop)
                                      {
      -                                       rt_fprintf(stderr, "Underrun detected: %u blocks dropped\n", (pruFrameCount - expectedFrameCount) / pruFramesPerBlock);
      +                                       rt_fprintf(stderr, "Underrun detected: %u blocks dropped. Value: %u\n", (pruFrameCount - expectedFrameCount) / pruFramesPerBlock, gGlobal);
                                              if(underrunLed.enabled())
                                                      underrunLed.set();

      Then in render(), your approach above would work, but it's probably easier to just call numAvailableMessages() one more time, before the while():

       gGlobal = midi[port]->getParser()->numAvailableMessages();
       while((num = midi[port]->getParser()->numAvailableMessages()) > 0)
       {
           ...
       }

      Does it make sense?

        ok it does not work. the PRU.cpp compiles fine with the changes, but i cannot access gGlobal from the render.cpp. i get a:

        error: use of undeclared identifier 'gGlobal'

        so it seems simply declaring a variable in PRU.cpp does not suffice to make it visible in render.cpp

          i still went along and commented the hv_sendMessage part for cc messages out. this made underruns far less frequent, but they still occur. so you seem to be on the right track..

            what might be worth a test is to only process a maximum number of messages for each render.
            this would help if there are 'peaks' in the flow, as it would even them out a bit.

            it won't help much if the flow is continuously very high, and it would introduce latency.
            (in that case, I'd think thinning really is the only option)

            what I'd also check is whether it is the patch's processing of the midi messages that is too computationally expensive, e.g. if you remove the processing, do you still get underruns?
            if not, then you'd likely be able to do the thinning in the patch.


            one thing I've seen with some expressive controllers is that continuous expressions can 'bounce' around quite a bit.
            i.e. when you think your finger is holding a steady pressure, it might be bouncing between cc74 = 65,66,65 at the full data rate (which could be 250-500 msg/sec). this really needs to be smoothed, I usually use a cheap low pass filter (or similar...)

            imo, it also often ends up sounding better - e.g. without this my Soundplane can sound quite 'jittery'.

            note: this depends upon the controller and its software, often the software/firmware will have this filtering as an option, as it's a really effective way to reduce data.
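
            fwiw, the kind of cheap low pass filter I mean is just a one-pole smoother on the incoming cc value, roughly like this (a sketch only, the struct, the names and the 0.8 default are made up, tune it by ear):

                // one-pole low pass ('leaky integrator') for one continuous controller.
                // smoothing = 0 passes the raw value through; closer to 1 smooths (and lags) more.
                struct CCSmoother {
                    float state = 0.0f;
                    float smoothing = 0.8f; // hypothetical default

                    int process(int ccValue)
                    {
                        state = smoothing * state + (1.0f - smoothing) * ccValue;
                        return static_cast<int>(state + 0.5f); // round back to 0-127
                    }
                };

            if you then only pass the value on when the rounded output actually changes, it should also cut down the number of messages quite a bit.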

              thetechnobear

              yeah, i already tried without any processing. just sending the messages to heavy without even connecting anything in heavy/PD leads to dropouts, albeit only above a certain CPU usage of the patch. smoothing of the messages is a must, i already do that 🙂

                lokki error: use of undeclared identifier 'gGlobal'

                sorry you need an

                extern unsigned int gGlobal;

                at the top of the render.cpp file

                ok. it works! but, i get very low values... the maximum i can generate is about 6. strange...

                if you can look into the "heaviness" of the hv_send thing, that would be appreciated!

                Sure. What happens when you comment that out? At what number does it start dropping out?

                  hv_sendMessageToReceiverV does not look particularly 'heavy' to me.
                  basically it just creates a timestamped message and places it on a queue …
                  I assume it will then process the pipeline during hv_process(), so once per block, as this would be in line with what pd does for control rate msgs.

                  I think giuliomoro's previous point about messages 'stacking up' is the issue, a controller does not produce messages at anywhere near one every 180uS... so I'd think the parser is backing up due to not enough cpu time.
                  … but if that's the case, then it's just that there is not enough spare time to process them, but why?

                  have you monitored the cpu? are you seeing a cpu spike when you send the messages?
                  (when your app code is not processing them - as that needs to be removed from the equation)

                  would thinning help? and where?
                  I guess ideally it would be done in the midi parser... so render() doesn't even get it.
                  but it might still work in the heavy render(), since although the sendMessage may not be expensive, the audio thread is still going to have to process the control pipeline later... and that may be heavier.

                  I think what I would do is try to implement rudimentary thinning in render(), since I assume you know what the troublesome messages are.
                  if it's mpe, they are pitchbend, cc74 and ch. pressure,
                  so you'd just need to store these 'per channel' along with a timestamp, and then only send if the new time vs last time is greater than your set threshold.
                  it'd be an interesting test, as it would point to whether the cpu issue is below the heavy layer or not.

                  (I think it's a bit more difficult to do without overhead in the generic case, since you'd need to store a timestamp for every note/cc and midi msg type... which is a fair amount of data, albeit not that big... also grabbing a timestamp is expensive… unless we can use the one that accompanies the midi message, is that of sufficient resolution?)
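
                  something like this is the shape I have in mind for the mpe case (a rough sketch only: it assumes you can get a millisecond timestamp from somewhere, and names like thinContinuous, nowMs and kThinMs are made up):

                    #include <cstdint>

                    // one timestamp per midi channel for each of the three continuous mpe messages
                    enum { kPitchBend = 0, kCc74 = 1, kChanPressure = 2, kNumCont = 3 };
                    static uint32_t lastSentMs[16][kNumCont]; // [midi channel][message kind]
                    const uint32_t kThinMs = 20; // forward at most one of each kind per 20ms

                    // returns true if this continuous message should be forwarded to heavy
                    bool thinContinuous(int kind, int channel, uint32_t nowMs)
                    {
                        if(nowMs - lastSentMs[channel][kind] < kThinMs)
                            return false; // too soon since the last one: drop it
                        lastSentMs[channel][kind] = nowMs;
                        return true;
                    }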

                    lokki i still went along and commented the hv_sendMessage part for cc messages out. this made underruns far less frequent, but they still occur

                    hmm that's interesting. I am looking at all parts of the code in that while() loop and I cannot really find the bottleneck. I will really have to plug this into a scope and try and troubleshoot it that way.

                    thetechnobear I assume because it's a control rate message, so it will only ever be processed once per block (if it's going to behave like PD does), so it will process the pipeline during hv_process() (?)

                    Good point. This means debugging will be harder, as I will have to poke at the heavy internals at runtime.

                    thetechnobear have you monitored the cpu? are you seeing a cpu spike when you send the messages?

                    CPU usage is an average (not sure over which time period), so you are unlikely to see the spikes that lead to dropouts (as they'd be concentrated in a few us).

                    thetechnobear so you'd just need to store these 'per channel' along with a timestamp, and then only send if the new time vs last time is greater than your set threshold.

                    I think the first thing you want to do is to make sure that you send at most ONE message per CC number per call to render(), which may be slightly simpler than that. So, if you receive 6 messages that are all for CC 1, discard them all except the most recent one, and only send that one to heavy. This is easier said than done, however.
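
                    A sketch of what I mean, assuming the wrapper uses Bela's getNextChannelMessage() (the hv_sendMessageToReceiverV() arguments are left out, as above): drain the parser, keep only the latest value per (channel, CC), forward everything else straight away, and only then send the collapsed CCs.

                     // static so these are not re-allocated on the audio thread's stack every block
                     static int latestCc[16][128];   // most recent value per (channel, CC)
                     static bool ccSeen[16][128];    // whether that (channel, CC) arrived this block
                     while((num = midi[port]->getParser()->numAvailableMessages()) > 0)
                     {
                         MidiChannelMessage message = midi[port]->getParser()->getNextChannelMessage();
                         if(message.getType() == kmmControlChange)
                         {
                             int channel = message.getChannel();
                             int cc = message.getDataByte(0);
                             latestCc[channel][cc] = message.getDataByte(1);
                             ccSeen[channel][cc] = true; // overwrite: only the last value survives
                         } else {
                             // non-CC messages: forward immediately as before
                             // hv_sendMessageToReceiverV(...);
                         }
                     }
                     // after draining the parser, send at most one message per (channel, CC)
                     for(int channel = 0; channel < 16; ++channel)
                         for(int cc = 0; cc < 128; ++cc)
                             if(ccSeen[channel][cc])
                             {
                                 // hv_sendMessageToReceiverV(...) with latestCc[channel][cc]
                                 ccSeen[channel][cc] = false; // reset for the next block
                             }

                    (The full 16x128 scan at the end is a bit wasteful; keeping a short list of the (channel, CC) pairs actually seen in the block would avoid it, but it shows the idea.)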

                    In the meantime, if you are in a hurry and all you care about is just to be done with it, the easiest implementation is non-discriminating throttling, as suggested by

                    thetechnobear what might be worth a test is to only process a maximum number of messages for each render.

                    That would be done as:

                     int msgs = 0;
                     int maxMsgs = 5;
                     while(msgs++ < maxMsgs && (num = midi[port]->getParser()->numAvailableMessages()) > 0)
                     {
                         ...
                     }

                    Keep in mind thetechnobear's earlier point about the increased latency of this approach.

                      @lokki how do you have the qunexus set up?
                      is it only sending continuous CC per note, or are you doing other messages like channel or poly pressure, and pitchbend? you could try commenting those out in the render() as well.

                      also how many keys are you holding down?

                      another interesting possibility is to run one of the C++ examples that process midi.
                      do they exhibit the issue? this could help eliminate heavy from the 'enquiry' 🙂

                        thetechnobear it is a quneo. the offending pads send 3 cc messages at once: pressure, x and y... i will have to investigate further on monday. will look into some of the suggested points then. thanks!

                        as i only get 5 to 6 messages when the underruns happen, and sometimes i get underruns even with 2 to 3 messages, i wonder if i am still doing something wrong in my patch... the patch sits at 55%

                        i "preprocess" all cc messages like this to route them per channel and cc number easily, maybe heavy/pd does not like the message $ or something...

                        [image: screenshot of the pd patch used to preprocess the cc messages]