Running some catart-mubu patches for digging that way for a concatenative synthesis project, I have a couple of questions here.
The main idea is:
- I select one or multiple samples (the input)
- I run the analysis on this input and it gives me a map (in imubu, I mean)
- I play with trajectories of the cursor on the map (manually, generatively, etc.)
If I have one sample as the input, that’s quite simple.
BUT what about the polyphony of mubu-concat~?
If I have a corpus of samples as the input, it is more complex and I have more questions.
1/ what would be the best practice for having multiple cursors trigger segment playback?
2/ is it possible to have multiple imubu objects controlling the same mubu-concat~?
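To make question 1/ concrete, here is a minimal sketch (in Python, purely illustrative, not actual MuBu API) of the kind of voice allocation I have in mind: each trigger coming from any cursor is routed round-robin to the next player in a fixed pool, the way one might dispatch segment indices to several mubu-concat~ instances or poly~ voices in Max. All names here are hypothetical.

```python
# Hypothetical round-robin voice allocation for multiple cursors
# sharing a pool of N players (e.g. N mubu-concat~ instances).
class VoicePool:
    def __init__(self, n_voices):
        self.n_voices = n_voices
        self.next = 0  # round-robin pointer

    def allocate(self, cursor_id, segment_index):
        """Return the player index that should play this segment."""
        voice = self.next
        self.next = (self.next + 1) % self.n_voices
        # In Max, this would mean sending the segment index
        # to the chosen player / poly~ voice.
        return voice

pool = VoicePool(4)
events = [("cursorA", 12), ("cursorB", 3), ("cursorA", 7),
          ("cursorB", 20), ("cursorA", 1)]
routing = [(c, s, pool.allocate(c, s)) for c, s in events]
# Voices cycle 0, 1, 2, 3, 0 regardless of which cursor fired.
```

This is just to show the dispatch logic I'm after; whether it is better done with several mubu-concat~ objects reading the same container, or with a poly~ wrapper, is exactly my question.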
Of course, I know I could put the whole imubu + mubu-concat~ set, including the analysis part, into an abstraction and multiply that: if I have 8 samples, I'd use 8 sets and have full control over all of my playing, each set = one sample.
But in that case, I'd lose the overlaid visual feedback of the whole segment classification across all samples.
I hope I'm not adding more confusion here.
The ideal situation would be:
- have my whole corpus of samples in the same imubu
- play them in many ways (multiple cursors moving; sometimes triggering all nodes in a 2D area regardless of the buffer involved; sometimes a cursor triggering only one specific buffer and not the others, even if another buffer's node falls within that cursor's radius)
- have some kind of polyphony for mubu-concat~
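The second point above can be sketched as follows (again in Python, purely illustrative, not MuBu API): each node has a 2D descriptor-space position and a buffer index, and each cursor has a position, a radius, and an optional buffer filter. A node triggers for a given cursor only if it lies within the radius AND the cursor's filter accepts that node's buffer (None meaning "accept any buffer"). All field and function names are hypothetical.

```python
import math

# Hypothetical per-cursor buffer filtering: return the ids of the
# nodes a cursor should trigger, given its position, radius, and an
# optional restriction to a single buffer.
def nodes_in_radius(nodes, cursor_pos, radius, buffer_filter=None):
    cx, cy = cursor_pos
    hits = []
    for node in nodes:
        # Skip nodes from other buffers when a filter is set.
        if buffer_filter is not None and node["buffer"] != buffer_filter:
            continue
        # Euclidean distance test in the 2D descriptor space.
        if math.hypot(node["x"] - cx, node["y"] - cy) <= radius:
            hits.append(node["id"])
    return hits

nodes = [
    {"id": 0, "x": 0.1, "y": 0.1, "buffer": 0},
    {"id": 1, "x": 0.2, "y": 0.1, "buffer": 1},
    {"id": 2, "x": 0.9, "y": 0.9, "buffer": 0},
]
# Unfiltered cursor: triggers every node within its radius.
all_hits = nodes_in_radius(nodes, (0.15, 0.1), 0.2)          # → [0, 1]
# Filtered cursor: same position and radius, but only buffer 1.
one_buffer = nodes_in_radius(nodes, (0.15, 0.1), 0.2,
                             buffer_filter=1)                # → [1]
```

That filtering step is what I don't know how to express cleanly when everything lives in a single imubu driven by one mubu-concat~.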
Possibly, even if I lose the overlaid visual feedback as I wrote, the only way would be to have as many complete imubu + mubu-concat~ sets (including the analysis part) as there are samples.
Any input, insight, or simple advice here would be much appreciated.