
Enquiry about the use of SKataRT

Hello,
I would like to use SKataRT for the following two scenarios and I am interested in whether this is possible:

  1. Recorded rhythmic passages of about 1 minute in length should be analysed for similarity so that they can be arranged appropriately on the compositional timeline. In practical terms, it would be useful if similar passages could be played and recorded directly from the plot. Similarity could be based on loudness, pitch, timbre or the like, though I would be glad to have more dimensions available in order to discover new things.
  2. In a live performance, the resulting tracks could then be compared with each other macroscopically in a similar process, categorised and prepared for playback.

Is it possible to use SKataRT (or Max4Live) to realise these two scenarios?
Thank you in advance for your answers!

Hi Knut,
If I understand you right, you want to compare different segments of ~1 min to each other in terms of audio similarity?
SKataRT’s descriptors would be too coarse for this (you only get mean values), but with catart-mubu you could add the variance of each descriptor, or develop rhythm descriptors (onset density, regularity, etc.). You’d also want something more flexible than whole-segment playback…
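
To make the “mean plus variance” idea concrete, here is a minimal Python sketch (outside Max, purely to illustrate the logic): each segment is summarised by the mean and standard deviation of its per-frame descriptor values, and segments are then ranked by distance between those summary vectors. The frame values below are random placeholders, not MuBu output.

    import numpy as np

    def segment_features(frames):
        # frames: (n_frames, n_descriptors) array of per-frame descriptor
        # values (e.g. loudness, pitch, spectral centroid) for one segment.
        # Returns per-descriptor mean and standard deviation, concatenated.
        return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

    def rank_by_similarity(query, others):
        # Rank the other segments by Euclidean distance to the query segment.
        distances = [np.linalg.norm(query - o) for o in others]
        return np.argsort(distances)

    # Hypothetical data: four ~1 min segments, ~600 analysis frames each,
    # three descriptors per frame.
    rng = np.random.default_rng(0)
    segments = [rng.normal(size=(600, 3)) for _ in range(4)]
    features = [segment_features(s) for s in segments]
    print(rank_by_similarity(features[0], features[1:]))
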
Best!

Hello,
Yes, exactly: I want to look for similarities, and the dimension you suggested, rhythm, would definitely be one of them. But timbre would also be important.
You said that I would be better off with catart-mubu. Could you give me a starting point or point me to an example patch that I can build on?
I also have the “CataRT-MuBu Tutorial 5a: Segmentation Analysis” open at the moment and am testing the segmentation variants. Following your suggestion, I would try desc-onseg, but there are a lot of arguments to choose from. Which ones should I focus on for my needs? Please excuse these beginner’s questions; I’m just feeling my way into this new area and would be grateful for even the slightest help.
BTW: Would it make sense to cut the 1-minute segments into individual 4-bar segments by hand beforehand?

Thank you very much,
Knut

Hi Knut, onseg is most sensitive to the (relative) threshold and the filter size.
Shorter segments would make sense, too.
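
To illustrate what those two parameters do, here is a toy peak picker on a synthetic onset detection function. This is a sketch of the general idea only, not the actual MuBu/PiPo onseg algorithm, and all numbers are made up.

    import numpy as np

    def pick_onsets(odf, threshold, filter_size):
        # Toy peak picking on an onset detection function (odf).
        # - filter_size: width of the smoothing filter; larger values bridge
        #   short dips, so closely spaced attacks merge into one segment.
        # - threshold: level the smoothed odf must exceed; higher values keep
        #   only the strongest attacks.
        kernel = np.ones(filter_size) / filter_size
        smoothed = np.convolve(odf, kernel, mode="same")
        above = smoothed > threshold
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges

    # Synthetic odf: a strong attack, a weak one, and two close together
    odf = np.zeros(120)
    odf[20:28] = 1.0
    odf[50:58] = 0.3
    odf[80:88] = 1.0
    odf[92:100] = 1.0

    print(pick_onsets(odf, threshold=0.5, filter_size=1))   # three onsets, weak attack skipped
    print(pick_onsets(odf, threshold=0.2, filter_size=1))   # weak attack now included
    print(pick_onsets(odf, threshold=0.5, filter_size=11))  # the close pair fuses into one
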

Hi Diemo,
I would like to use catart-mubu to develop rhythm descriptors. You suggested that determining onset density, regularity, etc. would be useful for this. Could you explain, taking onset density as an example, how this could be realised?
Thank you,
Knut

Hi Knut, this would mean going into Max development: Based on onset detection,

  • for density: count the number of onsets within a given time span, and write that to a Mubu track’s column.
  • for regularity: do statistics on the inter-onset-intervals (IOIs), e.g. mean and stddev or loudness-weighted histograms

There should be ample literature about rhythm analysis/description in the MIR field.
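
As one possible concretisation of those two bullet points, here is a minimal Python sketch, assuming you already have the onset times (and, for the weighted histogram, a loudness value per onset) from the segmentation stage. The function names and numbers are illustrative only; in Max the resulting values would then be written into a column of a mubu track, one row per segment.

    import numpy as np

    def onset_density(onset_times, span):
        # Onsets per second over a segment of `span` seconds.
        return len(onset_times) / span

    def ioi_statistics(onset_times):
        # Mean and standard deviation of the inter-onset intervals (IOIs);
        # a small stddev relative to the mean indicates a regular pulse.
        iois = np.diff(np.sort(np.asarray(onset_times)))
        return iois.mean(), iois.std()

    def loudness_weighted_ioi_histogram(onset_times, loudness, bins=8, ioi_range=(0.0, 1.0)):
        # One possible reading of "loudness-weighted histograms": a histogram
        # of IOIs, each interval weighted by the loudness of the onset that
        # ends it.
        iois = np.diff(np.sort(np.asarray(onset_times)))
        hist, _ = np.histogram(iois, bins=bins, range=ioi_range, weights=np.asarray(loudness)[1:])
        return hist

    # Hypothetical onsets (seconds) and loudness values for one short segment
    onsets = [0.0, 0.5, 1.0, 1.52, 2.0, 2.48, 3.0, 3.5]
    louds = [0.8, 0.6, 0.9, 0.5, 0.8, 0.6, 0.9, 0.5]
    print(onset_density(onsets, span=4.0))
    print(ioi_statistics(onsets))
    print(loudness_weighted_ioi_histogram(onsets, louds))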