
Some questions about playing controls, polyphony and more

Hi there,

I've been running some catart-mubu patches to explore this approach for a concatenative synthesis project, and I have a couple of questions.

The main idea is:

  • I select one or multiple samples (the input)
  • I run the analysis on this input, which gives me a map (in imubu, I mean)
  • I play with trajectories of the cursor on the map (manually, generatively, etc.)

If I have one sample as the input, that’s quite simple.
BUT what about the polyphony of mubu.concat~?

If I have a corpus of samples as the input, things are more complex and I have more questions.

1/ What would be the best practice for having multiple cursors trigger the playback of segments?
2/ Is it possible to have multiple imubu controlling the same mubu.concat~?

Of course, I know I could put a whole set (imubu + mubu.concat~, including the analysis part) into an abstraction and multiply that: if I have 8 samples, I'd use 8 sets and have full control over all my playback, with each set = one sample.

But in that case, I'd lose the overlaid visual feedback of the whole segment classification across all the samples.

I hope I'm not adding more confusion here.

The ideal situation would be:

  • have my whole corpus of samples in the same imubu
  • play them in many ways (multiple cursors moving; sometimes triggering all nodes in a 2D area regardless of the buffer involved; sometimes some cursors triggering only one specific buffer and not the others, even when a node of another buffer falls within that cursor's radius)
  • have some kind of polyphony for mubu.concat~

Possibly, even if I lose the overlaid visual feedback as I wrote, the only way would be to have as many complete sets (imubu + mubu.concat~, including the analysis part) as there are samples.

Any input, insight or simple advice here would be much appreciated.

Hi Julien, great questions here!

  • mubu.concat is fully polyphonic (can play as many grains at once as your CPU can handle): just send current parameters, then trigger (by bang, or markerindex when @autotrigger is on)
    • However, for periodic modes, of course only one period can be managed at a time, and
    • fence mode only works for one input stream (unless you manage the repetition filtering yourself)
  • Thus, in practice, it is simpler to have one select and concat per “voice”, i.e. per cursor/process/etc. Look at the catart-mubu-poly example, the camu.select help, or camu.voice (there is a rough conceptual sketch just after this list).
  • buffer choice is handled by knn’s include/exclude. There’s an example coming for that.
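
To make the per-voice idea concrete, here is a minimal conceptual sketch in plain Python (not a Max patch, and not the MuBu/CataRT API): each cursor owns its own selector over the shared corpus descriptors, plus an optional include set restricting which buffers it may pick from. All descriptor values and buffer indices below are invented for illustration.

```python
# Conceptual sketch only: one independent selector ("voice") per cursor,
# all sharing the same corpus data. Not the MuBu/CataRT API.

corpus = [
    # (buffer_index, marker_index, x, y)  -- x/y stand for any two descriptors
    (0, 0, 0.10, 0.80),
    (0, 1, 0.40, 0.30),
    (1, 0, 0.55, 0.75),
    (1, 1, 0.90, 0.20),
]

class Voice:
    """One cursor + one selector; in Max this would be one select/concat pair."""

    def __init__(self, include_buffers=None):
        self.include = include_buffers  # None means: all buffers allowed

    def select(self, x, y):
        """Return the segment nearest to this voice's cursor, honouring the buffer filter."""
        candidates = [s for s in corpus
                      if self.include is None or s[0] in self.include]
        return min(candidates,
                   key=lambda s: (s[2] - x) ** 2 + (s[3] - y) ** 2)

# Two independent cursors over the same corpus:
free_voice = Voice()                     # may pick from any buffer
solo_voice = Voice(include_buffers={1})  # restricted to buffer 1 (like an include filter)

print(free_voice.select(0.5, 0.7))   # -> (1, 0, 0.55, 0.75)
print(solo_voice.select(0.1, 0.8))   # -> a segment from buffer 1 only
```

Each voice then sends its selected segment's parameters to its own concat and triggers it (bang, or markerindex with @autotrigger), which keeps the cursors fully independent.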

Best!


Hi Diemo, and thanks for your answer.
I think I understand the polyphony and the idea of one select/concat context per “voice”.

If I want to trigger grains of the same buffer with 2 or 3 different paths on 2 or 3 different 2D maps (I mean: on one map I sort fragments by x/y = frequency mean/loudness, and on another map by spread/duration… same segmentation process, two different views)…

then I think the better way is, maybe, to have the WHOLE context (including the process, the 2D map & select, and the concat) duplicated 2 or 3 times… even if I use the same segmentation process.
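
Just to illustrate what I mean by “two different views of the same segmentation”, a rough sketch in plain Python (not Max, not the MuBu API): one shared table of segment descriptors, projected onto two different descriptor pairs, each map with its own cursor. The descriptor names and values are made up.

```python
# Rough illustration only: one segmentation, two descriptor-space "maps",
# each with its own cursor picking the nearest segment (no normalization, just a sketch).

segments = [
    # (marker, freq_mean, loudness, spread, duration)  -- values are invented
    (0, 440.0, -12.0,  800.0, 0.25),
    (1, 660.0,  -9.0, 1500.0, 0.10),
    (2, 220.0, -20.0,  400.0, 0.60),
]

FREQ, LOUD, SPREAD, DUR = 1, 2, 3, 4   # column indices of the descriptors

def nearest(cursor, x_col, y_col):
    """Nearest segment to the cursor in the chosen descriptor plane."""
    return min(segments,
               key=lambda s: (s[x_col] - cursor[0]) ** 2 + (s[y_col] - cursor[1]) ** 2)

# Map A: x = frequency mean, y = loudness   (one path/cursor moves here)
print(nearest((450.0, -10.0), FREQ, LOUD))   # -> marker 0

# Map B: x = spread, y = duration           (another path/cursor moves here)
print(nearest((1400.0, 0.15), SPREAD, DUR))  # -> marker 1
```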

More questions coming… 🙂

Actually, I've now finished some tests.
One imubu + n camu.voice is not so easy for deep control of segment selection.

I mean: since one imubu can “only” handle one position (the cursor on the 2D map), controlling multiple trajectories (one for each voice) is tricky.

Unless I've missed something, having multiple imubu with the SAME data (which is not optimal, since each imubu would mean another mubu), just to drive a specific trajectory/selection on the same sound corpus, might be the only way.

ERRATUM:

Yes, we can have multiple imubu with different scripting names attached to one mubu!
So 1 corpus, 1 process & multiple voices (imubu + select + concat) is easily possible!
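
In other words (a rough Python analogy, not Max): the analysis data lives in one place, and each voice just holds a reference to it plus its own cursor; nothing gets copied.

```python
# Python analogy of the erratum: one shared corpus, several independent voices
# referencing it. Only the cursor/selection is per-voice; the data is not duplicated.

corpus = {"markers": [0, 1, 2],
          "descriptors": [[0.1, 0.8], [0.4, 0.3], [0.9, 0.2]]}   # filled once by the analysis

class VoiceView:
    def __init__(self, shared_corpus):
        self.corpus = shared_corpus    # a reference, not a copy
        self.cursor = (0.0, 0.0)       # each view keeps its own cursor/trajectory

view_a = VoiceView(corpus)
view_b = VoiceView(corpus)

view_a.cursor = (0.2, 0.7)
view_b.cursor = (0.8, 0.3)

assert view_a.corpus is view_b.corpus   # same underlying data, independent cursors
print(view_a.cursor, view_b.cursor)
```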
