Hi all,
I'm gathering options for a vision-sound project, and the wonderful Bela looks like a great candidate.
It's not clear to me whether it's possible to interface the Bela cape to something like the BeagleY-AI (https://docs.beagleboard.org/boards/beagley/ai/index.html). Admittedly my HW fundamentals are a bit rusty (I usually do machine learning).
I imagine the BeagleY, being a BBB derivative, would provide the OS and compute backbone, with the Bela cape extending it somehow. Is this intuition correct?
The data flow would be: a neural model runs on the BeagleY, producing predictions as a data stream (a few tens of XY coordinates per video frame), which would then be sent as control signals to the Bela for sound generation.
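For concreteness, here's a rough sketch of what I picture the sender side looking like, assuming the two sides can simply talk UDP over a network link (the IP, port, and packet layout below are all made up by me, not anything Bela-specific):

```python
import socket
import struct

# Hypothetical endpoint for the Bela side; both values are assumptions.
BELA_IP = "192.168.7.2"
BELA_PORT = 9998

def build_payload(points):
    """Pack one frame's predictions as: uint32 count, then count (x, y) float pairs."""
    flat = [c for xy in points for c in xy]
    return struct.pack(f"<I{len(flat)}f", len(points), *flat)

def send_frame(sock, points):
    """One UDP datagram per video frame keeps framing trivial and latency low."""
    sock.sendto(build_payload(points), (BELA_IP, BELA_PORT))

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # e.g. a few tens of XY coordinates produced by the model for one frame
    send_frame(sock, [(0.1, 0.2), (0.5, 0.9)])
```

Something like this (or OSC, or a serial/UART link) is the kind of plumbing I have in mind, but I don't know which of these the Bela side can realistically receive at control rate.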
What hardware connections / drivers / wires would one need to pull this off?
Thank you in advance for any and all pointers.