I'm building an instrument that uses a lot of different samples; one sample might be used by several tracks/parts of the instrument.
I read the manuals of several embedded sampling instruments, and most seem to use the following strategy:
- samples are loaded from the disk into a sample pool (which usually has a fixed size)
- the sample playback object chooses a sample from the sample pool and reads samples from it
I thought of implementing it like this:
#include <string>
#include <vector>

// a struct so that filename and buffer are publicly accessible
struct AudioResource
{
    std::string filename;
    // vector of channels, each containing a vector of samples
    std::vector<std::vector<float>> buffer;
};
// shared pool
std::vector<AudioResource> samplePool;
// load a new sample into slot x
// (this call runs inside an auxiliary task)
samplePool[x].buffer = AudioFileUtilities::load(gFilename);
// access audio samples from the sample in slot x
samplePool[x].buffer[channel][frame];
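For completeness, here is a rough sketch of how I imagine triggering the load from an auxiliary task on Bela. It's untested, and the task name, priority, kMaxSamples and the gSlotToLoad/gLoadTask globals are just placeholders of mine:

#include <Bela.h>
#include <libraries/AudioFile/AudioFile.h>

AuxiliaryTask gLoadTask;
int gSlotToLoad = 0;     // slot to (re)load, set before scheduling the task
std::string gFilename;   // file to load into that slot

// runs at non-real-time priority, so the disk read and the allocations
// inside AudioFileUtilities::load() stay off the audio thread
void loadSample(void*)
{
    samplePool[gSlotToLoad].filename = gFilename;
    samplePool[gSlotToLoad].buffer = AudioFileUtilities::load(gFilename);
}

bool setup(BelaContext* context, void* userData)
{
    samplePool.resize(kMaxSamples); // kMaxSamples: placeholder upper bound
    gLoadTask = Bela_createAuxiliaryTask(loadSample, 50, "load-sample");
    return true;
}

// later, e.g. from a control event:
//   gSlotToLoad = x;
//   gFilename = "somefile.wav";
//   Bela_scheduleAuxiliaryTask(gLoadTask);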
Now I wonder about (re)allocation. The samplePool might grow to somewhere in the 200-400 MB range, and the samples will be accessed in a random(ish) way (e.g. granular synthesis, loop slicing), so loading only part of a sample and streaming in the next part just before it is needed won't suffice.
If I add a sample to or remove one from the shared pool, does the complete samplePool vector reallocate to stay contiguous in memory? If so, it seems to me that loading new samples would take longer with each addition, right?
If I replace a sample by loading a new sample into samplePool[x].buffer (and thus resizing buffer), does the entirety of samplePool reallocate if buffer grows (or shrinks) in size?
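To make that concrete, my current understanding (which may well be wrong) is that something like this would avoid the outer vector ever moving; kMaxSamples is again just a placeholder for whatever upper bound I settle on:

// reserve the slots once at startup, so adding resources with push_back
// never forces the outer vector to reallocate (as long as the count stays
// below kMaxSamples)
samplePool.reserve(kMaxSamples);

// replacing the audio data of one slot: as far as I understand, only the
// heap allocation owned by that slot's buffer changes; the AudioResource
// objects themselves keep their size and stay where they are, because a
// std::vector only holds a pointer to its separately allocated data
samplePool[x].buffer = AudioFileUtilities::load(gFilename);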
Should I even use std::vector for the shared pool? Is there another container better suited for this?
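One alternative I've been considering is a pool with a fixed number of slots, something like this (again untested, kMaxSamples is a placeholder):

#include <array>
#include <cstddef>

constexpr std::size_t kMaxSamples = 64; // placeholder upper bound

// a fixed number of slots created once: the pool itself never reallocates,
// only the per-slot buffer vectors change size when a sample is (re)loaded
std::array<AudioResource, kMaxSamples> samplePool;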
I watched this video, which mentions using a custom allocator backed by preallocated memory. Could that be relevant here? It seems to solve the "allocation takes too much time inside the audio callback" issue, but that's not really an issue for me if I load samples from auxiliary tasks.
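Just to check my understanding of what the video means by preallocated memory: I imagine something along the lines of std::pmr (C++17), where the buffers carve their storage out of one block that is set aside up front. This is untested, the 400 MB figure is just my rough upper bound from above, and PmrAudioResource/gBigBlock/gPool are names I made up:

#include <cstddef>
#include <memory_resource>
#include <string>
#include <vector>

// one block set aside up front; vectors that use this resource take their
// storage from it instead of calling new/delete at load time
std::vector<std::byte> gBigBlock(400 * 1024 * 1024);
std::pmr::monotonic_buffer_resource gPool(gBigBlock.data(), gBigBlock.size());

struct PmrAudioResource
{
    std::string filename;
    // elements constructed inside buffer should pick up the same resource
    std::pmr::vector<std::pmr::vector<float>> buffer{&gPool};
};

One thing I'm unsure about: a monotonic_buffer_resource never reuses freed memory, so repeatedly replacing samples would eat through the block; maybe that's where a proper pool allocator would come in.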