
How to locate the cursor according to parameters other than position?

Hi,

When using mubu.granular~ and mubu.concat~, the only way I know to decide which part of the audio file to play and granulate is by position (time). But that is limiting: the machine cannot choose the content by itself, and the human needs to know exactly which part of the audio is desired, so the flexibility of the system is restricted.

Is there any way to choose which part of the audio file to play according to parameters such as selected ranges of frequency, volume, or even timbre, instead of just position? Like how Catart sorts the grains? I know we can segment the audio by pitch and onset velocity and make markers out of it, but how can I choose certain markers to play? And should I look at mubu.knn?

And speaking of Catart, it sorts the grains comprehensively according to the desired content of an audio file, but to my understanding it can only be controlled manually, not automatically by predefined messages?

If you have any suggestions, please let me know. Thank you!

Best regards,

Liddy

Hi Liddy,

“how can I choose certain markers to play”

This is done via the markerindex and bufferindex messages to mubu.concat~, which is also used by catart, and of course you can control concatenation purely by messages, sequences, generated patterns, whatever. Max is a programming language, after all…
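For instance, a generated pattern of marker choices could even be sent into the patch from outside Max. The sketch below uses Python and OSC purely as an illustration; the port, the OSC address names and the [udpreceive] routing are assumptions about your patch, not anything built into mubu.concat~ — inside Max the same thing is simply a stream of markerindex messages.

    # Sketch: drive mubu.concat~ from a generated pattern via OSC.
    # Assumes a [udpreceive 7400] in the patch, with the incoming values routed
    # and prepended as bufferindex / markerindex messages to mubu.concat~.
    import random
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7400)

    client.send_message("/bufferindex", 0)           # choose the sound file (buffer)
    for _ in range(32):
        marker = random.randint(0, 63)               # hypothetical number of markers
        client.send_message("/markerindex", marker)  # choose which segment plays next
        time.sleep(0.25)

The same stream of choices could of course be produced inside Max with a metro and a counter, an urn, a coll of precomputed indices, and so on — the point is only that the marker to play is a number you can compute.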

The catart-mubu package on the Max package manager has comprehensive tutorials on how it can be controlled (mubu.knn is one possibility). Regarding the link to descriptors, mubu tutorial 4 gives an in-depth introduction to analysis and segmentation.

Now, the point about pre-selecting ranges of descriptors is interesting. I’d love to hear more details about this use case and how you would like to integrate it into a compositional or interactive workflow. It could be possible via an additional data column, active, that is set by some filtering expression. I can make a quick example of this shortly.
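Roughly, the idea is that each segment row gets an extra 0/1 value computed from its descriptors, and playback only ever chooses among the rows where it is 1. As a first sketch of such a filtering expression (written in Python for readability; the descriptor names, values and thresholds are placeholders, not the actual MuBu column layout):

    # Placeholder per-segment descriptors; in MuBu these would live in a data track.
    segments = [
        {"marker": 0, "pitch_hz": 220.0,  "loudness_db": -30.0},
        {"marker": 1, "pitch_hz": 450.0,  "loudness_db": -5.0},
        {"marker": 2, "pitch_hz": 1400.0, "loudness_db": -2.0},
    ]

    # The filtering expression: 1 if the segment lies inside the chosen ranges, else 0.
    for seg in segments:
        seg["active"] = int(300.0 <= seg["pitch_hz"] <= 2000.0
                            and seg["loudness_db"] > -20.0)

    active_markers = [seg["marker"] for seg in segments if seg["active"]]
    print(active_markers)  # -> [1, 2]: only these markers are offered for playback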

Best, Diemo

Hello Diemo,

Thanks for your response! About pre-selecting ranges of descriptors: for my compositional needs, I am trying to rearrange grains to create new textures of sound. I would like to pick out the grains I want from several audio files, so I don’t need to listen, chop and arrange them thousands of times in a DAW (though I’m doing it the traditional way for now, because mubu is so complicated for me that I can’t master it yet, ha…).

So the result I desire is a system that mixes grains from several different audio files according to their similarities (or differences) in frequency, shape (figuration, pitch rising or falling, vibrato), timbre, length, etc.

The mechanism I imagine would be as simple as sending messages to mubu.concat~, just like when we adjust the position or period of a grain. I would only need to enter a range of frequency (e.g. 800-1600 Hz), grain length (e.g. 200-1000 ms), velocity (e.g. > -6 dB, so the numerous grains which are basically silent would not appear), or even timbre of the sound (e.g. more metallic, more woody, noisier, with a purer pitch, etc.), and send it to mubu.
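To make that concrete, here is the kind of selection I have in mind, written out as a small Python sketch (the descriptor names and values are invented for illustration — I don’t know what they would be called in mubu): keep only the grains inside those ranges, and then the system can choose freely among them.

    import random

    # Hypothetical per-grain descriptors (names and values invented for illustration).
    grains = [
        {"marker": 0, "freq_hz": 430.0,  "length_ms": 90.0,  "level_db": -1.0},
        {"marker": 1, "freq_hz": 880.0,  "length_ms": 350.0, "level_db": -3.0},
        {"marker": 2, "freq_hz": 1500.0, "length_ms": 600.0, "level_db": -40.0},
        {"marker": 3, "freq_hz": 1200.0, "length_ms": 950.0, "level_db": -4.5},
    ]

    def wanted(g):
        return (800.0 <= g["freq_hz"] <= 1600.0        # frequency range
                and 200.0 <= g["length_ms"] <= 1000.0  # grain length
                and g["level_db"] > -6.0)              # drop the nearly silent grains

    candidates = [g["marker"] for g in grains if wanted(g)]
    print(candidates)                 # -> [1, 3]: the markers I would want to trigger
    print(random.choice(candidates))  # the system could then pick freely among them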

Right now I’m working on a piece for a dancing violinist with electronics. The majority of the audio sources to be turned into grains are all kinds of violin techniques and sounds, along with some other sources that may match the sound of the violin.

With the ability to control the parameters above, I hope I can get all kinds of different textures or gestures of sound, which can serve as accompaniment, background, or even counterpoint to the violinist.

I’m curious: what can an “active” column do? Could you make a quick example?

Thank you very much!

Liddy