
Any way to make a list of revised note times?

Hey all!

I’m using Antescofo as the basis for a predictive automatic accompaniment system. I’m using various machine learning methods to learn from rehearsals with musicians to predict expressive timing changes (similar to Chris Raphael’s Music Plus One), for a network music performance project with virtual cues. I’ve found Antescofo to be the best score follower out of the things I’ve tried, including my own attempts to make one using Dynamic Time Warping (which only worked moderately well).

My main question is: is it possible to retrieve from Antescofo a list of revised locations for previous events? Antescofo reports note events as they occur; however, if it has made a mistake, I assume that the HMM machinery inside it (the Viterbi algorithm?) can revise its hypotheses for previous notes. Having those revisions available would be awesome for what I’m doing, and I’m sure for other things as well. Because my system is predictive, I don’t care so much about a note being detected the instant it happens; I’d rather the detection be fairly precise, even if it takes a little longer. So does anyone know if there is a way to get these?
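To make concrete what such a revision would look like: in an HMM, re-running Viterbi decoding after each new observation can retroactively change the labels assigned to earlier frames. Below is a toy sketch (made-up transition and likelihood numbers, in no way Antescofo's actual model) where the third observation revises the alignment of the second frame:

```python
# Toy Viterbi decoder: states are score positions, observations are
# per-state acoustic likelihoods. All numbers are made up, purely
# illustrative -- not Antescofo's internals.

def viterbi(trans, init, liks):
    """Return the most likely state path for the given likelihood frames."""
    n = len(init)
    delta = [init[s] * liks[0][s] for s in range(n)]
    backptr = []
    for frame in liks[1:]:
        step, new_delta = [], []
        for s in range(n):
            best_prev = max(range(n), key=lambda p: delta[p] * trans[p][s])
            step.append(best_prev)
            new_delta.append(delta[best_prev] * trans[best_prev][s] * frame[s])
        backptr.append(step)
        delta = new_delta
    # Backtrack from the best final state.
    path = [max(range(n), key=lambda s: delta[s])]
    for step in reversed(backptr):
        path.append(step[path[-1]])
    return path[::-1]

# Left-to-right score model: note 0 -> note 1 -> note 2 (skips allowed).
T = [[0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7],
     [0.0, 0.0, 1.0]]
init = [1.0, 0.0, 0.0]

f0 = [0.90, 0.05, 0.05]   # clearly note 0
f1 = [0.05, 0.20, 0.75]   # ambiguous, slightly favors note 2
f2 = [0.02, 0.90, 0.08]   # clearly note 1

print(viterbi(T, init, [f0, f1]))       # [0, 2]: frame 1 read as note 2
print(viterbi(T, init, [f0, f1, f2]))   # [0, 1, 1]: frame 1 revised to note 1
```

After two frames the decoder believes note 1 was skipped; the third frame makes it revise that decision. A list of "revised locations" would be exactly the difference between successive decodes.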

Many Thanks!
Bogdan

You asked:

is it possible from Antescofo to retrieve a list of revised locations for previous events?

You’d usually want that if your application is non-realtime (such as offline audio-to-score alignment). That said, @cuvillier in our team is working on a similar approach in Antescofo. Until it becomes available, I would like to know WHY you want this: in my humble experience, every time I wanted something like this, I managed to get what I wanted by other means. If your application is “online accompaniment”, I strongly suggest you follow the procedures and techniques in this post, which have been tested extensively in the past few weeks:
http://forumnet.ircam.fr/user-groups/antescofo/forum/topic/antescofo-and-supervp/#post-8676

So let us know what you have in mind and we’ll propose a solution.

Hey Arshia, thanks a lot for responding!

The reason I want this is that I want to build a system that can predict a future note from more than just the score and a general tempo-based anticipation model. In a lot of romantic music, for example, tempo changes can be extreme and sudden, and for automatic accompaniment you need to anticipate stretches in timing that are not obvious from the score.

Strangely enough, what I actually need is to predict the input musician’s timing as accurately as possible. I’m testing a new approach to network music performance, with timing cues that adapt to the musicians by learning their preferred playing style from rehearsals. Because it runs over the internet, you can’t wait for a note to actually be detected by the score follower, otherwise the cue arrives late; the idea is instead to predict the sounded notes and send cues for the predictions. So I have a very big Bayesian network (quite similar to Chris Raphael’s) that describes probability distributions over time stretching at each point in the score, given the musician’s input events and the other musicians’ distributed input (latency-compensated). I don’t expect it to work without some degree of error (obviously a player can hold a note as long as they like, out of free will), but I still want it to behave like a musician who can at least anticipate based on previous rehearsals and lead the musicians with plausible cues.
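The timing arithmetic behind the cue idea is simple, even if the prediction model behind it isn't. A minimal sketch (function and parameter names are mine, purely illustrative, not part of Antescofo or any real API): predict the next onset from the last detected onset, the score IOI, and the predicted tempo, then send the network cue one one-way latency early so it arrives on time:

```python
# Hypothetical helpers for latency-compensated cueing. All names are
# illustrative -- this is only the scheduling arithmetic, not the
# Bayesian prediction model itself.

def predict_next_onset(last_onset_sec, ioi_beats, predicted_tempo_bpm):
    """Predicted clock time of the next note, given a tempo prediction."""
    return last_onset_sec + ioi_beats * 60.0 / predicted_tempo_bpm

def cue_send_time(predicted_onset_sec, one_way_latency_sec):
    """Send the cue early so it lands exactly on the predicted onset."""
    return predicted_onset_sec - one_way_latency_sec

onset = predict_next_onset(10.0, 1.0, 120.0)   # one beat at 120 BPM -> 10.5 s
send_at = cue_send_time(onset, 0.08)           # 80 ms one-way latency -> 10.42 s
print(onset, send_at)
```

The whole difficulty, of course, is making `predicted_tempo_bpm` good enough that sending early is safe.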

Either way, I’ve been hoping not to have to write my own score follower, since it has been done very well so many times before, and Antescofo seemed like the best choice. It does work pretty well, and I’ve had good results feeding my application with Antescofo’s output. However, once in a while Antescofo misses a note or reports a slightly-off timing, and once that event has passed it is never ‘fixed’ by looking back through the alignment after the initial onset. I realize that Antescofo was not designed to do what I’m describing, but I was wondering if there was maybe a hidden outlet somewhere :).

Metabog,

Your problem is quite interesting and we might already have some pointers for you:

In general, Antescofo uses predictions (or, more precisely, anticipatory mechanisms) to detect events. You can access the anticipated time-position of the next event in the score by declaring an additional parameter in the object instantiation: @outlets ANTEIOI. This creates an additional outlet that continuously sends out the anticipated duration until the next onset. Moreover, the output tempo of Antescofo is itself predictive. See this link for more details.

The problem with relying on machine listening alone for automatic accompaniment is that you predict at most one event ahead, which is musically not enough! This is where our reactive language and synchronization strategies come into play:
In the accompaniment procedure and example described in this post (link), we use a real-time time-stretch (via SuperVP). The @target [4#] attribute used on the Curve in that example has the following control consequences:

  • The Curve will have an anticipatory mechanism based on 4 events in the future (the '4#' parameter)
  • The Curve will have its own tempo, calculated from its anticipated target AND the musician's real-time tempo. You can actually watch this tempo and compare it to the musician's detected tempo by accessing the internal variable $local_tempo inside the Curve! Note that this local tempo is recalculated much more often than the musician's tempo, since it is continuous. This allows, for example, the time-stretch to freeze during a long wait (the local tempo converges to zero).
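One rough way to picture that second point (my own simplification, not Antescofo's actual formula): the Curve's local tempo behaves like the rate needed to reach the target position at its anticipated time, so when the musician holds a note and the anticipated target time recedes, the local tempo falls toward zero:

```python
# Simplified model of a target-driven local tempo (beats per second).
# Not Antescofo's actual computation -- just the intuition behind @target.

def local_tempo(current_pos_beats, target_pos_beats, now_sec, target_time_sec):
    """Rate needed to arrive at the target position at its anticipated time."""
    remaining_time = target_time_sec - now_sec
    if remaining_time <= 0.0:
        return 0.0  # anticipated time already passed; hold position
    return (target_pos_beats - current_pos_beats) / remaining_time

print(local_tempo(0.0, 4.0, 0.0, 2.0))    # 4 beats in 2 s -> 2.0 beats/s
print(local_tempo(0.0, 4.0, 0.0, 40.0))   # musician waits -> 0.1 beats/s
```

As the anticipated target time is pushed further into the future, the computed rate shrinks, which is the "freeze during a long wait" behavior described above.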

Note that in real-time, it is difficult to “go back” and correct things. That’s why we haven’t explicitly implemented backtracking.

However, if there is any special outlet that you think you need, just let us know and we’ll do our best to implement it for you.

Thanks for your input! At the very least I can use the built-in event anticipation to test my own predictions, I think. Ideally I’ll see if I can plug my rehearsal-informed predictor into the system to work with Curves!

Incidentally, how do you think Antescofo’s anticipatory model would deal with sharp tempo fluctuations, such as in this rendition of a Chopin Mazurka:

There are oscillatory beat-duration fluctuations on the order of 200 ms from beat to beat, as well as ‘elastic’ points where the musician slows almost to a halt and then sharply returns to tempo. It seems intuitive to me that an IOI- and score-based method will not be able to anticipate such sharp tempo changes with great precision, since the tempo parameter itself is intentionally kept very loose by the musician. However, if I were trying to synchronize with someone playing a piece like this, I wouldn’t simply be listening to their tempo; I’d also watch them and use a variety of visual gestures that come directly from the source: the leading musician. This is why, for romantic ensemble music, I thought it might be a good idea to augment the model with rehearsals to discover a particular ensemble’s preferred expression. I haven’t been able to test this piece with Antescofo, mostly because I haven’t gotten it to track this rather flowery piano performance well enough (these are hand annotations), but are these the sort of tempo changes Antescofo should be able to anticipate, as opposed to ‘continuous’ ones?

Metabog… I somehow missed your post, but better a late reply than none:

The example in your post from the Chopin Mazurka is interesting (in fact, @cuvillier is working on the same example for a new version of Antescofo to be released this year!).

Your posted figure shows temporal fluctuations between beats (IOIs). The real questions are (1) how, and whether, they should be incorporated into the musician’s tempo calculation, and (2) how they should be propagated to the accompaniment’s tempo and predictions. Note that the musician’s tempo and the accompaniment tempo, though related, do not necessarily have a linear or direct relationship to these local temporal fluctuations.

Now regarding Antescofo:

  • IOI fluctuations such as the ones you show will contribute to the prediction errors of Antescofo's tempo agent. The tempo agent has a musical inertia, meaning that it does not react immediately to errors but takes 5-6 events to converge. This is musically useful! For more on this, have a look at the results section of my TPAMI paper, where I run synthetic tests.
  • Once the musician's tempo is calculated, the accompaniment tempo (for example in a Curve) may end up at a completely different value depending on the anticipatory strategy (for example the @target [4#] attribute). It is a function of the calculated musician's tempo BUT ALSO of the anticipated time-distance to the target goal. This local tempo is recalculated more often than the musician's tempo (with a regularity equal to the grain of the Curve) and can vary drastically.
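The inertia in the first point can be illustrated with a deliberately crude stand-in (Antescofo's real tempo agent is based on coupled-oscillator attentional dynamics, not this): an exponential smoother whose gain is chosen so that a tempo step takes roughly 5-6 events to be absorbed:

```python
# Crude stand-in for an inertial tempo agent: exponential smoothing.
# Antescofo's actual agent uses coupled-oscillator dynamics; this only
# illustrates why one noisy IOI barely moves the tempo estimate.

class InertialTempo:
    def __init__(self, tempo_bpm, gain=0.4):
        self.tempo = tempo_bpm
        self.gain = gain  # 0.4 -> ~95% of a step change absorbed in 6 events

    def update(self, observed_tempo_bpm):
        self.tempo += self.gain * (observed_tempo_bpm - self.tempo)
        return self.tempo

agent = InertialTempo(120.0)
agent.update(60.0)          # a single 60 BPM outlier only pulls it to 96
for _ in range(5):
    agent.update(60.0)      # a sustained change converges near 60 BPM
print(round(agent.tempo, 1))
```

A single outlying IOI (one 60 BPM observation against a 120 BPM estimate) moves the estimate less than halfway, while a sustained change is fully adopted within half a dozen events, which is the inertial behavior described above.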

@echeveste is preparing a publication that actually visualizes the two aspects above in Matlab.

We can send you an alpha version of the new Antescofo that actually works on the Chopin Mazurka database if you want. Let me know personally.