Hello forum!
Been playing with MuBu and really liking the sounds I’m getting.
I've run into a couple of issues with how to actually integrate MuBu into my general patches/workflow.
What I would like to do is to record audio into a regular Max buffer~ (using karma~) and then use mubu.knn/mubu.process/mubu.granular~, like in the MuBu mosaicing example, to create granular mosaicing of real-time incoming audio.
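Roughly the signal chain I'm imagining (as far as I understand the mosaicing example; "livebuf" and "mymubu" are just placeholder names):

[adc~] --> [karma~ livebuf]                        <- live looping into [buffer~ livebuf]
[buffer~ livebuf] --> ??? --> [mubu mymubu]        <- this is the part I can't figure out
[mubu.process mymubu ...] (pipo descriptors)       <- analyse the recorded audio
live input descriptors --> [mubu.knn mymubu]       <- match against the container
[mubu.knn] --> [mubu.granular~ mymubu] --> [dac~]  <- resynthesize the nearest grains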
My questions are as follows:
- How do I get mubu/mubu.knn to refer to an existing buffer~ (or a specific section of that buffer)? I've tried all sorts of approaches, e.g. creating a named mubu container (same name as the buffer~ I'm recording into) and then trying to point mubu/mubu.knn at it, but nothing seems to work. I did find an example (http://forumnet.ircam.fr/user-groups/mubu-for-max/forum/topic/copy-associated-buffer-example/) about copying a buffer into a mubu, but that seems like a pretty roundabout way to do it (using peek~ and uzi to copy the whole buffer into a mubu, sample by sample; see the rough sketch after this list).

- How do you get the mubu.knn that drives mubu.granular~ to work in a concatenative way, where nothing plays if the incoming audio is quiet (or no nearest match is found)? At the moment, as long as "enable selecting" is engaged, mubu.granular~ stays on the same grain, blasting away forever, even if there is no incoming audio at all.
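For reference, the copy approach from that linked example looks roughly like this, as I understand it (paraphrasing from memory, so the exact messages may be off):

[bang( --> [uzi N]                 <- N = number of samples in the buffer~
[uzi] index --> [peek~ livebuf]    <- read the buffer~ one sample at a time
sample values --> into the mubu audio track (via whatever append/set messages that example patch uses)

...which works, but iterating a whole buffer sample-by-sample at message rate feels like the wrong tool for something I'd want to redo every time karma~ finishes a new loop.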
For now I've cobbled together a hack where I bang individual grains out of mubu.granular~ and use onebang/delay plus a randomized position to emulate the positionvar/durationvar parameters. It works, but it's not elegant.
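In case it helps to see it, the hack is shaped roughly like this (timings/ranges are arbitrary placeholders):

[bang(
  |
[onebang]                    <- let only one trigger through per grain
  |
[random 2000] --> [position $1( --> [mubu.granular~]   <- randomized position (emulating positionvar)
  |
[bang( --> [mubu.granular~]  <- fire a single grain
  |
[delay 50] (randomized)      <- wait, then loop back to the onebang (emulating durationvar)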
Sorry if some of these questions have glaringly obvious answers; I've been going through all the help files and testing tons of different approaches, but haven't found a way to accomplish either of the above.
Thanks!