
Quantifying polyphonic rhythmic structures

Hello!

I am looking for ways to perform polyphonic rhythm quantification on nested lists of durations (in milliseconds).

What I would like is a quantification process that takes into account how much the rhythmic layers overlap. During quantification, the layers would be slightly adjusted so that they produce no shared attacks, or at least as few coinciding attacks as possible. Of course, with many layers and depending on the rhythmic structure, a perfect result may not always be achievable, but the goal is to minimize simultaneities where possible while keeping the rhythm accurate.
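To make the goal concrete, here is a rough Python sketch of the objective I have in mind; the function and the 1 ms tolerance are just my own illustration, not anything existing:

```python
def coincidences(onsets_a, onsets_b, tol_ms=1.0):
    """Count attacks of layer B falling within tol_ms of an attack of layer A."""
    return sum(any(abs(a - b) <= tol_ms for a in onsets_a) for b in onsets_b)

# A quantifier could search over small per-layer grid choices or offsets and
# keep the variant that minimizes this count while staying close to the input.
print(coincidences([0, 250, 500], [0, 260, 490]))  # 1 -> only the first attack coincides
```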

An easy way would be to give just one of the rhythmic layers a different unit, such as 1/20 against 1/16, but this does not seem ideal to me (see the sketch below). Any suggestions are welcome.
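For example, a quick check shows that 1/20 and 1/16 grids still share every quarter position (just a sketch, assuming both grids span whole beats):

```python
from fractions import Fraction

def grid(unit, beats=1):
    """All grid points of the given unit within `beats` beats."""
    return {unit * k for k in range(int(beats / unit) + 1)}

shared = sorted(grid(Fraction(1, 20)) & grid(Fraction(1, 16)))
print([str(p) for p in shared])  # ['0', '1/4', '1/2', '3/4', '1']
```

So different units alone do not remove the possible coincidences.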

Thank you!

Hi Teemu,

This depends of course on the context of the layers. Can you give an example so we can better understand the problem?

Best
K

I will try to explain with an example:

Suppose I want to quantify two lists of durations in ms with various values into a polyphonic entity. Using omquantify, the result contains several shared attacks, which I have marked with red lines.

This is not the worst case, but I would like a way to flexibly adjust the quantification so that successive layers align to a grid that overlaps less with the upper layer.

Ideally, only the first attack would be shared out of necessity, and all subsequent attacks would be offset to minimize coincidences. I am aware that I can force subdivisions, for example, 8 for the upper layer and 7 for the lower, but I believe there could be a more elegant way to calculate an optimal non-coinciding grid. Again, with even more layers this gets difficult, but I would like to get as close to optimal as possible. Does this make sense?
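To check the 8-against-7 intuition in closed form, here is a small sketch (the helper is my own, assuming both subdivisions span a single beat):

```python
from math import gcd
from fractions import Fraction

def shared_points(a, b):
    """Grid points shared by an a- and a b-subdivision of one beat.
    k/a == m/b exactly when both equal a multiple of 1/gcd(a, b)."""
    g = gcd(a, b)
    return [Fraction(j, g) for j in range(g + 1)]

print([str(p) for p in shared_points(8, 7)])    # ['0', '1'] -> only beat boundaries
print([str(p) for p in shared_points(20, 16)])  # ['0', '1/4', '1/2', '3/4', '1']
```

Coprime subdivisions guarantee that only the beat boundaries can coincide, which is presumably why 8 against 7 works; a quantifier could pick, per beat, the pair of allowed subdivisions with the smallest shared set.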

This makes sense, but here we lack:

  1. the list of durations in ms for each layer
  2. More importantly, the quantification parameters for each layer: i.e., what did you give each layer as quantification parameters?
    And when you talk about the most accurate result, what do you mean by that? Is it resolution?

That is what we are missing here in your problem.

Best
K

Here is an example patch. As you can see, some attacks are shared more or less, depending on the generation. I would like the quantification to take this into account so that a minimum of common attacks is generated.
example.omp (50.1 KB)

Dear Teemu,

Thank you for the example. Here we can start talking.

First things first. Starting from your durations, we can calculate the onsets of each layer and the differences between them.

If you notice, some onsets are really very close (2 milliseconds apart), which makes it very hard to avoid putting them in the same beat.
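For reference, the same onset arithmetic in plain Python (the duration lists here are stand-ins, not the exact ones from your patch):

```python
from itertools import accumulate

def onsets(durations_ms):
    """Onset times implied by a list of durations, starting at 0."""
    return [0] + list(accumulate(durations_ms))[:-1]

# Hypothetical stand-ins for the two layers:
layer1 = onsets([230, 415, 187, 512])
layer2 = onsets([228, 419, 190, 507])

for t in layer2:
    print(t, min(abs(t - u) for u in layer1))  # distance to the nearest layer-1 onset
```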

One thing we may do to be more “precise” is to increase the tempo and the divisions, which yields a more “detailed” rhythm.

However, due to the very small durations, some notes are skipped.
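To see why, here is a toy version of grid snapping; the step sizes are only illustrative, and this is not what omquantify does internally:

```python
def snap(onsets_ms, step_ms):
    """Snap every onset to the nearest grid point."""
    return [round(t / step_ms) * step_ms for t in onsets_ms]

events = [0, 2, 500, 502, 1000]  # two pairs of onsets only 2 ms apart
print(snap(events, 125))         # [0, 0, 500, 500, 1000] -> the pairs merge
print(snap(events, 2))           # [0, 2, 500, 502, 1000] -> a 2 ms grid keeps them apart
```

A 2 ms step would mean hundreds of divisions per beat, which is not notatable, so such notes get dropped.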
We have a new quantifier using grace notes (coming in version 7.7, which will be released very soon; I am working on it):


![Screenshot_2025-11-21_14-54-47|690x204](upload://c3TUK4fXjiJdEgMgSK2thWRs5qO.png)

If this last quantification suits you, I ask you to please wait a little, so you can use it in the next release.

Here is the patch (without the new function):
example 3.omp (108.1 KB)

Hope this helps you out with your problem.

Best
K

More accuracy is of course great, so if my structures don’t have coincidences, then the quantification should not have them either.

Not necessarily. If your onset lists contain small differences, such as onsets 2-5 ms apart, there is a big chance that they will coincide after quantification, as in your earlier example.
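You can estimate how likely a collision is with a quick simulation (a sketch; the 62.5 ms step, a 32nd-note grid at quarter = 500 ms, is just an example):

```python
import random

def collide(t, d, step):
    """Do onsets t and t + d snap to the same grid point?"""
    return round(t / step) == round((t + d) / step)

step = 62.5  # 32nd-note grid at quarter = 500 ms
for d in (2, 5, 20, 60):
    trials = [collide(random.uniform(0, 1000), d, step) for _ in range(10_000)]
    print(d, sum(trials) / len(trials))  # roughly 1 - d/step for d <= step
```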

Best
K

Yes, I need to think of a different approach. A question: as far as I know, it is only possible to quantify to a regular grid based on the beat. Are there any plans to implement a more unconventional grid, such as the following, based on a dotted quarter note plus an eighth note?
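To illustrate what I mean, here is a toy sketch of such an uneven grid (not an existing OM feature, just the idea):

```python
from itertools import cycle

def uneven_grid(pattern_ms, total_ms):
    """Grid points from a repeating, uneven duration pattern."""
    points = [0]
    for d in cycle(pattern_ms):
        if points[-1] + d > total_ms:
            break
        points.append(points[-1] + d)
    return points

def quantify(onsets_ms, grid):
    """Snap each onset to the nearest point of an arbitrary grid."""
    return [min(grid, key=lambda p: abs(p - t)) for t in onsets_ms]

# Dotted quarter + eighth at quarter = 500 ms: a repeating 750 ms + 250 ms cell.
grid = uneven_grid([750, 250], 4000)
print(grid)                                  # [0, 750, 1000, 1750, 2000, ...]
print(quantify([0, 740, 1020, 1800], grid))  # [0, 750, 1000, 1750]
```

Each cell would of course still need to be subdivided for the actual rhythm; this only shows the top-level beat pattern.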

Unfortunately this is not yet implemented. Maybe in the future?

Best
K