Sensors increasingly off scale the more Bela stays on?

Hmmm, this is weird, unless there is something in your patch that changes its behaviour the longer the patch runs, which somehow has a cascade effect on the rest of the stuff? Namely, as Pd's floats are single precision, if you keep incrementing a counter without ever wrapping it, it may start misbehaving after the patch has been running for a long time.

Just to be sure, can you confirm that when your patch runs for a long time with all the processing enabled, it is the actual readings from the sensors that have an unexpected offset? For instance, I want to make sure we are not in a situation such as "oh, the pitch of the oscillator is off by the equivalent of 50 cm, therefore there's something wrong with the sensor", when the problem is actually with the oscillator or something else between the sensor reading and the oscillator.

    Well, firstly thanks for taking the issue seriously. I do mean readings from the sensors, i.e. the actual distance (the one obtained from the stuff in the screenshot, printed out right after the offset correction value). I mentioned at the beginning that I noticed this effect sonically, from the effect of an increased minimum distance on synth parameters, but then I made sure to print distance values as close to the source as possible, to minimize other possible effects and check the actual readings. So yea, it really changes the minimum detected distance in blocks, 50 cm at a time (though not as constant in time as I thought at first), regardless of analog and digital IO. It seems to have stopped occurring when I loaded just the stuff in the screenshot, dropping CPU consumption to like 20-25%. I've only run it for 2 hours or so, so I can't be sure it's completely clear after 6 hours or more, but by that point in all the other cases I'd already get drift up to 100 cm. If you think it's unlikely that a CPU overload would cause this, I'll stop focusing so much on it, but as of now the two things do seem correlated.

    giuliomoro Namely, as Pd's floats are single precision, if you keep incrementing a counter without ever wrapping it, it may start misbehaving after the patch has been running for a long time.

    I'm not sure exactly what you mean in reference to a counter, could you be a bit more specific? I am using quite a lot of counters, so I wanna make sure that I'm not causing this problem by a dirty usage of [f] and [+1] and so on.

    What I was referring to is that one could equivalently write a counter in these two ways:
    [image: two equivalent Pd counter patches; the left one wraps its value back to 0, the right one does not]

    however, only the left-hand one will work forever. The right-hand one won't, because it never wraps the value stored in the [f ] back to 0, which means that if the patch runs for long enough, the number will eventually become big enough that it won't be represented accurately as a single-precision float (which is what Pd uses internally) and it will start misbehaving. There are other cases where long-running patches start misbehaving, often triggered by something like this. However, this doesn't seem to be the case for your patch, as the code in the screenshot has none of these problems, and as you are checking the values as soon as they are output by each sensor reader, I don't see how a problem elsewhere in the patch could cause that.
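
    To see this concretely outside Pd, here is a minimal C++ sketch of the same single-precision behaviour (the counting rate in the last comment is hypothetical, just to give a sense of the timescale):

    #include <cstdio>

    int main()
    {
        // A single-precision float has a 24-bit significand, so from
        // 2^24 = 16777216 upwards consecutive integers are no longer all
        // representable, and adding 1 can silently do nothing.
        float counter = 16777216.f;
        float next = counter + 1.f;
        std::printf("%.1f + 1 = %.1f\n", counter, next); // prints: 16777216.0 + 1 = 16777216.0
        // Hypothetical rate: a counter banged once per millisecond (e.g. by
        // [metro 1]) reaches 2^24 after 16777216 ms, roughly 4.7 hours.
        return 0;
    }

    This is exactly the kind of slow-onset misbehaviour that only shows up after the patch has been running for hours.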

    Unfortunately, this means I have to look back at one of the ugliest parts of the Bela core code, where this issue could be coming from ... Could you try your code with 1024 and 4096 block sizes? Does the issue persist, and does it occur at the same point in time (e.g. after 30 minutes)?

    In the meantime, have you tried restarting the patch without restarting the board? This is done by tapping the button on the cape. That should give you about 2 seconds of downtime instead of the 10-15 you'd get otherwise.

      I see what you mean. I've corrected them wherever I found stuff like the right-hand one in my patch; I'll keep this in mind from now on.

      giuliomoro Unfortunately, this means I have to look back at one of the ugliest parts of the Bela core code, where this issue could be coming from

      I'm sorry you have to face your darkest demons. We can't run away forever :<
      The issue persists at 1024 and 4096 block sizes, seemingly after a similar amount of time (it's hard to point at a specific moment, but in both cases I have noticed an increase in all distance values comparable to the issue we're talking about).

      giuliomoro This is done by tapping the button on the cape

      You're talking about the white button that says "OFF"?

        Oh no, actually my mistake. It seems not to appear at 4096 block size. For some reason I had avoided this setting until now, since it seemed impossible to find a proper correction offset in this case (I tried some time ago and abandoned it to stick to 2048). The more I tried to calibrate it to start from as close to 0 as possible, the less it seemed to work (and it kept losing the calibration). Now this motivates me to try to sort that part out, as it might be a neat solution to the drifting problem.
        I'm going to run some more tests tomorrow to make sure I'm not just seeing things, but if that were the case - and the issue doesn't appear at 4096 - what would be the explanation for it?

          robinm You're talking about the white button that says "OFF"?

          Yes! Perhaps a misleading label ...

          robinm I'm going to run some more tests tomorrow to make sure I'm not just seeing things, but if that were the case - and the issue doesn't appear at 4096 - what would be the explanation for it?

          I have no idea ... as this seems to be somehow related to CPU usage, maybe this affects it ... but it is a very weird behaviour nevertheless ...

          robinm For some reason I had avoided this setting until now, since it seemed impossible to find a proper correction offset in this case (I tried some time ago and abandoned it to stick to 2048). The more I tried to calibrate it to start from as close to 0 as possible, the less it seemed to work (and it kept losing the calibration)

          This is also hard for me to explain ... at some point you may want to share your full patch so I can do some tests on my side ...

            giuliomoro at some point you may want to share your full patch so I can do some tests on my side ...

            Sure thing. Can I attach it here somehow? (I'm sorry about the dumb question, I couldn't find a way to do it)

            Anyway, after a few more tests today the problem still appears at 4096, and it still seems to relate to recurrent CPU peaks (probably caused by expensive processes overlapping when the radar sensor turns to 1; I'm working on seeing if I can at least distribute them better).

            giuliomoro Yes! Perhaps a misleading label ...

            I mean, it does what it says ... it stops the patch from running. Is there a way to reboot it in a similar way too?
            I should specify that so far my efforts have gone into keeping this entire project "off the box", meaning no computer in sight, so any rebooting/restarting would preferably be done without the need for the IDE.

              robinm it stops the patch from running. Is there a way to reboot it in a similar way too?

              When the patch is set to run on boot, it will restart automatically when stopped. So you stop it by tapping that button and it will restart in a couple of seconds.

              robinm Sure thing. Can I attach it here somehow? (I'm sorry about the dumb question, I couldn't find a way to do it)

              The best option is to put it on GitHub or a file-sharing service (e.g.: OneDrive, Google Drive, Dropbox) and share the link here.

              robinm (probably caused by expensive processes overlapping when the radar sensor turns to 1; I'm working on seeing if I can at least distribute them better).

              Now this makes more sense. I think I have an intuition as to why a CPU spike would cause the large-blocksize audio thread to start drifting away from the other thread performing I/O at 128 samples per block.
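
              Roughly, the mechanism I have in mind is the following (a minimal sketch, not the actual Bela core code): a FIFO of 128-frame blocks sits between the hardware I/O thread and the larger-blocksize render thread, and an underrun silently shifts their alignment:

              #include <algorithm>
              #include <deque>
              #include <vector>

              constexpr unsigned int kHwBlock = 128; // hardware I/O blocksize

              // 128-frame output blocks, produced several at a time by the
              // large-blocksize render thread.
              std::deque<std::vector<float>> outputFifo;

              // Called once per 128-frame hardware period.
              void hardwareTick(float* out)
              {
                  if(outputFifo.empty())
                  {
                      // Underrun: render() overran its deadline and the FIFO is dry.
                      // Playing silence here means everything rendered afterwards
                      // arrives one hardware block later than before: the
                      // input-to-output latency has just shifted by 128 frames.
                      std::fill(out, out + kHwBlock, 0.f);
                      return;
                  }
                  std::copy(outputFifo.front().begin(), outputFifo.front().end(), out);
                  outputFifo.pop_front();
              }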

                Great, I've put the entire Bela project here:
                https://listahaskoliislands-my.sharepoint.com/:u:/g/personal/robin19_lhi_is/ETvYB-Nt0j9DjyR10uCodsMBOP9uCoKY1DhvAiRgrxkhKw?e=dhoQpH

                Let me know if you're able to access the link (it should be public). I wanna clean it up before I put it on GitHub ...
                Nonetheless, it should be commented well enough for you to get oriented, if you need to take a look inside.
                Sensor settings are in the top-left area of the patch.

                giuliomoro I think I have an intuition as to why a CPU spike would cause the large-blocksize audio thread to start drifting away from the other thread performing I/O at 128 samples per block.

                I tried using delays of 100 ms on each instance of the radar sensor input, so that every time it turned to 1 it would switch processes on one by one (the ones I thought were overlapping, wherever synchronicity wasn't needed). It didn't change the CPU consumption at all, and the drift occurred anyway. It's possible I'm missing something stupid and obvious along the way, so if you happen to run tests in that direction I'd be happy to hear what you get.

                Wow, your patch looks really complex; I am not going to be able to parse that ... However, I set up a loopback test (without Pd) where I can reproduce your issue: when running with a large blocksize (so that there is an internal FIFO), exceeding the allocated time in the render() function causes the input-to-output latency to change without triggering a block-dropped error. I'll work on this.
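
                For reference, a minimal sketch of the kind of test I set up (the spike period and spin count below are arbitrary; the input-to-output latency itself is measured externally, by looping an output back into an input):

                #include <Bela.h>

                bool setup(BelaContext *context, void *userData)
                {
                    return true;
                }

                void render(BelaContext *context, void *userData)
                {
                    // Plain passthrough: copy every input frame to the output.
                    for(unsigned int n = 0; n < context->audioFrames; ++n)
                        for(unsigned int ch = 0; ch < context->audioInChannels; ++ch)
                            audioWrite(context, n, ch, audioRead(context, n, ch));
                    // Every few hundred blocks, burn CPU so that this block
                    // exceeds its allocated time, simulating a CPU spike.
                    static unsigned int count = 0;
                    if(0 == (++count % 400))
                    {
                        volatile float x = 0;
                        for(unsigned int i = 0; i < 20000000; ++i)
                            x = x + 1.f;
                    }
                }

                void cleanup(BelaContext *context, void *userData)
                {
                }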

                  giuliomoro Wow, your patch looks really complex; I am not going to be able to parse that

                  Sorry about that ... I'm glad to see you still found a way around it.
                  Thanks Giulio, you're really giving me a huge help. It's nice to know it's not just something wrong I was doing, and it's relieving as hell to know that something can be done. I'll be waiting for your updates - I hope I can fix this before the 17th, when I'm supposed to show it to people, ahahah!
                  If I don't manage, at least I'll know for future applications; it'll still be much appreciated 🙂

                  5 days later

                  I think I fixed this now (sorry, a bit late for your application). If you update your board to the latest dev branch of the Bela repo, it should work for you. The issue was that the roundtrip latency was not fixed at the nominal value (blocksize * 2 + 128): it started at a smaller value, depending on the CPU load, and then increased towards the nominal value (in steps of 128 or multiples thereof) every time you had a CPU spike. Now it stays at the nominal value throughout, and it does not change when an underrun occurs.
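
                  To put numbers on it (assuming the standard 44.1 kHz sample rate): at your 4096 blocksize the nominal roundtrip latency is 4096 * 2 + 128 = 8320 samples, i.e. about 189 ms, and before the fix each CPU spike could shift the actual latency towards that figure in steps of 128 samples (about 2.9 ms each).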

                  15 days later

                  Giulio!! So sorry for the late reply. Thank you so much for putting time into this and figuring it out!!
                  I just fetched the sculpture with Bela after a few days at the Intelligent Instrument Lab; I'm gonna try this right away and let you know.
                  (FYI, the day of the exhibition I just had to reboot it every now and then when there were no people coming in, and it was quite fine 🙂 )
                  If you're curious how it looked/sounded, here is a video (it doesn't really manage to portray the most interesting parts, but it gives an idea of what I was working with):

                  Thanks again for your help, this has been so much fun!! I'm really looking forward to making more weird stuff with Bela. It's just so flexible and powerful, and it's a great incentive to fill in the coding gaps that I have through practical approaches. You're awesome 😃

                  Tested, and not a single flaw! It's been running for 5+ hours now at 4096 block size and no drift has occurred. Thanks again Giulio!