i still went ahead and commented out the hv_sendMessage part for cc messages. this made underruns far less frequent, but they still occur. so you seem to be on the right track..

    what might be worth a test is to only process a maximum number of messages for each render.
    this would help if there are 'peaks' in the flow, as it would even them out a bit.

    it won't help much if the flow is continuously very high, and it would introduce latency.
    (in that case, I'd think thinning really is the only option)

    what I'd also check is whether it's the patch's processing of the midi messages that is too computationally expensive, e.g. if you remove the processing, do you still get underruns?
    if not, then you'd likely be able to do the thinning in the patch.


    one thing I've seen with some expressive controllers is that continuous expressions can 'bounce' around quite a bit.
    i.e. when you think your finger is holding a steady pressure, it might be bouncing between cc74 = 65, 66, 65 at the full data rate (which could be 250-500 msg/sec). this really needs to be smoothed; I usually use a cheap low pass filter (or similar...)
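
    a minimal sketch of the kind of cheap low pass I mean, in C++ (the coefficient is just an example, you'd tune it by ear):

     // per-controller one-pole smoother for an incoming cc value (0-127)
     struct OnePole {
         float y = 0.0f;
         float alpha = 0.1f;      // smaller = smoother, but more sluggish to respond
         float process(float x) { // x: raw controller value
             y += alpha * (x - y);
             return y;
         }
     };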

    imo, it also often ends up sounding better - e.g. without this my Soundplane can sound quite 'jittery'.

    note: this depends on the controller and its software; often the software/firmware will have this filtering as an option, as it's a really effective way to reduce data.

      thetechnobear

      yeah, i already tried without any processing. just sending the messages to heavy without even connecting anything in heavy/PD leads to dropouts, albeit only above a certain CPU usage of the patch. smoothing of the messages is a must, i already do that 🙂

        lokki error: use of undeclared identifier 'gGlobal'

        sorry, you need an

        extern unsigned int gGlobal;

        at the top of the render.cpp file

        ok, it works! but i get very low values... the maximum i can generate is about 6. strange...

        if you can look into the "heaviness" of the hv_send thing, that would be appreciated!

        Sure. What happens when you comment that out? At what number does it start dropping out?

          hv_sendMessageToReceiverV does not look particularly 'heavy' to me.
          basically it just creates a timestamped message and places it on a queue …
          I assume it will then process the pipeline during hv_process(), so once per block, as this would be in line with what pd does for control rate msgs.

          I think giuliomoro's previous point about messages 'stacking up' is the issue; a controller does not produce messages at anywhere near one every 180uS... so I'd think the parser is backing up due to not enough cpu time.
          … but if that's the case, then it's just that there is not enough spare time to process them, but why?

          have you monitored the cpu? are you seeing a cpu spike when you send the messages?
          (when your app code is not processing them - as that needs to be removed from the equation)

          would thinning help? and where?
          I guess ideally it would be done in the midi parser... so render() doesn't even see them.
          but it might still help to do it in render() before handing off to heavy, since although the sendMessage may not be expensive, the audio thread is still going to have to process the control pipeline later... and that may be heavier.

          I think what I would do is try to implement rudimentary thinning in render(), since I assume you know what the troublesome messages are.
          if it's MPE, they are pitchbend, cc74 and channel pressure,
          so you'd just need to store these 'per channel' along with a timestamp, and then only send if the new time vs the last time is greater than your set threshold.
          it'd be an interesting test, as it would show whether the cpu issue is below the heavy layer or not.

          (I think it's a bit more difficult to do without overhead in the generic case, since you'd need to store a timestamp for every note/cc/midi msg type.... which is a fair amount of data, albeit not that big... also grabbing a timestamp is expensive … unless we can use the one that accompanies the midi message - is that of sufficient resolution?)
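
          something like this rough, untested sketch inside render() (it assumes the Bela MidiParser API used in the snippet further down, uses context->audioFramesElapsed as a cheap timestamp, and the threshold value is arbitrary):

           const uint64_t kMinFramesBetweenSends = 441; // ~10 ms at 44.1 kHz, pick your own threshold
           static uint64_t lastSent[16][3];             // per channel: pitchbend, cc74, channel pressure

           while(midi[port]->getParser()->numAvailableMessages() > 0)
           {
               MidiChannelMessage message = midi[port]->getParser()->getNextChannelMessage();
               int ch = message.getChannel();
               int idx = -1; // which of the three 'troublesome' types this is, if any
               if(message.getType() == kmmPitchBend)
                   idx = 0;
               else if(message.getType() == kmmControlChange && message.getDataByte(0) == 74)
                   idx = 1;
               else if(message.getType() == kmmChannelPressure)
                   idx = 2;

               uint64_t now = context->audioFramesElapsed;
               if(idx >= 0 && now - lastSent[ch][idx] < kMinFramesBetweenSends)
                   continue; // too soon after the last one of this type on this channel: drop it
               if(idx >= 0)
                   lastSent[ch][idx] = now;

               // ... forward the message to heavy as the existing code does ...
           }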

            lokki i still went ahead and commented out the hv_sendMessage part for cc messages. this made underruns far less frequent, but they still occur

            hmm that's interesting. I am looking at all parts of the code in that while() loop and I cannot really find the bottleneck. I will really have to plug this into a scope and try and troubleshoot it that way.

            thetechnobear I assume because it's a control rate message, so it will only ever be processed once per block (if it's going to behave like PD does), so it will process the pipeline during hv_process() (?)

            Good point. This means debugging will be harder, as I will have to poke at the heavy internals at runtime.

            thetechnobear have you monitored the cpu? are you seeing a cpu spike when you send the messages

            CPU usage is an average (not sure over which time period), so you are unlikely to see the spikes that lead to dropouts (as they'd be concentrated in a few us).

            thetechnobear so you'd just need to store these 'per channel' along with a timestamp, and then only send if the new time vs last time is greater than your set threshold.

            I think the first thing you want to do is make sure that you send at most ONE message per CC number per call to render(), which may be slightly simpler than that. So, if you receive 6 messages that are all for CC 1, discard them all except the most recent one, and only send that one to heavy. This is easier said than done, however.
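
            A rough, untested sketch of what that coalescing could look like (it assumes the Bela MidiParser API, needs #include <vector>, and the touched vector should be reserve()d in setup() to avoid allocating on the audio thread):

             static int lastValue[16][128];
             static unsigned int lastBlock[16][128];
             static unsigned int blockCount = 0;
             static std::vector<std::pair<int, int>> touched; // (channel, cc) pairs seen this block

             ++blockCount;
             touched.clear();
             while(midi[port]->getParser()->numAvailableMessages() > 0)
             {
                 MidiChannelMessage message = midi[port]->getParser()->getNextChannelMessage();
                 if(message.getType() == kmmControlChange)
                 {
                     int ch = message.getChannel();
                     int cc = message.getDataByte(0);
                     if(lastBlock[ch][cc] != blockCount)
                     {
                         lastBlock[ch][cc] = blockCount;
                         touched.push_back({ch, cc}); // first time we see this CC this block
                     }
                     lastValue[ch][cc] = message.getDataByte(1); // keep only the newest value
                 }
                 else
                 {
                     // ... forward non-CC messages to heavy as before ...
                 }
             }
             for(auto& p : touched)
             {
                 // send one message for CC p.second on channel p.first, with value
                 // lastValue[p.first][p.second], via the same hv_sendMessageToReceiverV()
                 // call the existing code uses
             }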

            In the meantime, if you are in a hurry and all you care about is just to be done with it, the easiest implementation is non-discriminating throttling, as suggested by

            thetechnobear what might be worth a test is to only process a maximum number of messages for each render.

            That would be done as:

             int msgs = 0;
             int maxMsgs = 5; // process at most 5 MIDI messages per render() call
             while(msgs++ < maxMsgs && (num = midi[port]->getParser()->numAvailableMessages()) > 0)
             {
                 ...
             }

            Take note of the earlier points about the increased latency of this approach.

              @lokki how do you have the qunexus set up?
              is it only sending continuous CC per note, or are you doing other messages like channel or poly pressure, and pitchbend? you could try commenting those out in render() as well.

              also how many keys are you holding down?

              another interesting possibility is to run one of the C++ examples that process midi
              do they exhibit the issue? this could help eliminate heavy from the 'enquiry' 🙂

                thetechnobear it is a quneo. the offending pads send 3 cc messages at once: pressure, x and y... i will have to investigate further on monday. will look into some of the suggested points then. thanks!

                as i only get 5 to 6 messages when the underruns happen, and sometimes i get underruns even with 2 to 3 messages, i wonder if i am still doing something wrong in my patch... the patch sits at 55% cpu

                i "preprocess" all cc messages like this to route them per channel and cc number easily, maybe heavy/pd does not like the message $ or something...

                [screenshot of the cc preprocessing patch]

                  lokki i "preprocess" all cc messages like this to route them per channel and cc number easily, maybe heavy/pd does not like the message $ or something...

                  When you say this:

                  lokki just sending the messages to heavy without even connecting anything in heavy/PD leads to dropouts

                  what does the patch look like? Just [ctlin] without anything connected to it? Can you try deleting [ctlin] altogether?

                    giuliomoro what does the patch look like? Just [ctlin] without anything connected to it? Can you try deleting [ctlin] altogether?

                    no, it was with [ctlin] and my preprocessing, but without anything connected to the route objects. will try to delete the [ctlin] in my big patch and see what it does.

                    in the meantime i created a much smaller test patch which is much lighter on cpu and the results are as follows:

                    block size 16, no underruns. cpu at 22%
                    block size 8, no underruns. cpu at 30%
                    block size 4, underruns. cpu at 44-48%

                    so it seems that any time i get close to 50% usage, the underruns with midi input start to happen. (i know that with block size 4 there is even less time for other processes)

                    if i delete the [ctlin] from my test patch, i get no more underruns, even at block size 4.
                    if i leave [ctlin] in there but disconnect it from everything else, still no underruns.
                    if i connect it to the [pack] object and disconnect everything after it, no underruns.
                    ...only when i connect the midi controls back to the corresponding endpoints do the underruns happen again.
                    which indicates that it is indeed a "problem" in heavy/pd (or my patch) and not before. my test patch is very simple though:

                    [screenshot of the test patch]

                    will now test with my actual patch...

                    ok, the actual patch with no [ctlin] object has almost no underruns, but still some. the CPU indicator shows 55%.

                    the patch with a [ctlin] in it, but not connected to anything, has many more underruns. so this is different from the test patch before, but CPU usage is also higher.

                    i still wonder if i do something in my patch (not midi related) that spikes the CPU and i don't see it, and as soon as some midi messages come in it pushes it over the edge.

                    yeah, I believe something is 'spiking' which is holding up all processing, hence why the midi messages arrive in a single burst.

                    (I really would not expect to ever see more than 1 message in that queue, given how regularly it's called)

                    but it's hard to know what the spike is - could be heavy, could be the patch, could even be something related to IO over USB.

                    now I don't think it's USB IO, as I do quite heavy usb IO with the soundplane and it's ok. however, I don't use midi (and so alsa), and I don't use heavy. (*)

                    as for the tests:
                    I think if you have [ctlin] contained in your patch, but not connected, then heavy is doing all its work (in terms of distribution)... so if it's not underrunning then, i'd say there's a spike in your patch, perhaps as a result of midi messages coming in. at least that would be my initial working assumption....

                    you could try to 'replicate' this by removing the midi io entirely and creating an lfo or sequence (or similar) which 'emulates' what you'd expect from your controller, then:
                    - if it underruns, you know it's something in the patch (unrelated to midi)
                    - if it doesn't underrun (and it's a fair emulation 😉 ) then it points back to midi.


                    note: I must admit I've also been really careful about how I put together my apps, in particular I do as much processing as possible OFF the audio thread.
                    if I take your example above, all the processing of your CC messages would be done in an aux thread (e.g. the route, / 127, motf) - unfortunately that's not possible with PD (esp. heavy). see the sketch below for the kind of thing I mean in plain C++.
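
                    (a rough, untested sketch of the aux-thread idea in a plain C++ Bela project, using Bela's auxiliary task API; processPendingCc() and the priority value are just for illustration)

                     #include <Bela.h>

                     AuxiliaryTask gCcTask;

                     // runs on a lower-priority thread, so heavier CC processing
                     // (scaling, routing, smoothing state) doesn't eat audio time;
                     // it would read from a queue that render() fills
                     void processPendingCc(void*)
                     {
                         // ... per-CC work goes here ...
                     }

                     bool setup(BelaContext *context, void *userData)
                     {
                         gCcTask = Bela_createAuxiliaryTask(processPendingCc, 90, "cc-processing");
                         return true;
                     }

                     void render(BelaContext *context, void *userData)
                     {
                         // ... drain the MIDI parser into a queue, then:
                         Bela_scheduleAuxiliaryTask(gCcTask);
                         // ... audio processing ...
                     }

                     void cleanup(BelaContext *context, void *userData) {}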

                      thetechnobear so if it's not underrunning then, i'd say there's a spike in your patch, perhaps as a result of midi messages coming in. at least that would be my initial working assumption....

                      i have a feeling that the [line~] object does not translate well in heavy. i have quite a few of them in my patch (basically all midi input is smoothed that way) and i will try to replace them with a [sig~]-->[lop~ 30] combo.

                      yeah, let the spike hunt begin!

                      a month later

                      thetechnobear - if it underruns, you know it's something in the patch (unrelated to midi)
                      - if it doesn't underrun (and it's a fair emulation 😉 ) then it points back to midi.

                      ok, a bit late but i just ran some tests...

                      if i substitute the midi-in part with a [metro 5] connected to a counter (0-127) and feed that to the point where the CC was connected, i get no underruns at all, so it really is the midi that is causing the trouble. i even left all the midi-in objects in place, and as soon as i start to press the pads on the quneo the underruns start to appear again.

                      i think a [metro 5] is quite a fair emulation, no?

                      this is driving me a bit nuts, since i cannot use the rest of the CPU (i am at about 55% with block size 16 now) without getting very frequent underruns when using midi input.

                      BTW, replacing the [line~] objects did not help either; it all points back to midi.

                        giuliomoro That would be done as:

                        int msgs = 0;
                        int maxMsgs = 5;
                        while(msgs++ < maxMsgs && (num = midi[port]->getParser()->numAvailableMessages()) > 0)
                        {
                        ...
                        }

                        Take note of the earlier points about the increased latency of this approach.

                        @giuliomoro
                        i just tried this as well. values as low as 2 for maxMsgs still produce underruns; only a value of 1 makes them disappear completely.

                        ok, i have now pushed the patch a little further (by integrating some more fx stuff) and now it even underruns with maxMsgs at 1.

                        if i don't input any midi it's still fine though. (midi objects are in place, i just don't touch the pads)

                        is there a way to see if my patch has some spikes in cpu usage even without midi? (and that one midi message causes the underrun)

                        can i try another lower-level fix to throttle midi data?

                          lokki is there a way to see if my patch has some spikes in cpu usage even without midi? (and that one midi message causes the underrun)

                          do you have a good scope?

                          EDIT: a digital scope that can do some basic stats (average, min, max of the duration of a pulse).
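
                          If you don't have one, a rough software-side alternative (just a sketch; the timing calls themselves add a little overhead) is to time each render() call with a monotonic clock and keep track of the worst case:

                           #include <Bela.h>
                           #include <time.h>

                           static long gWorstNs = 0;
                           static unsigned int gBlocks = 0;

                           void render(BelaContext *context, void *userData)
                           {
                               timespec start, end;
                               clock_gettime(CLOCK_MONOTONIC, &start);

                               // ... existing MIDI handling and audio processing ...

                               clock_gettime(CLOCK_MONOTONIC, &end);
                               long ns = (end.tv_sec - start.tv_sec) * 1000000000L + (end.tv_nsec - start.tv_nsec);
                               if(ns > gWorstNs)
                                   gWorstNs = ns;
                               if(++gBlocks % 1000 == 0) // print occasionally, not every block
                                   rt_printf("worst render() so far: %ld us\n", gWorstNs / 1000);
                           }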