
Conducting Antescofo

Hi!

I’m currently researching live-electronics techniques that could follow the gestures of a conductor. I have developed techniques in Max to recognize beats and follow beat patterns in a relatively efficient and reliable way.

I’d like to use Antescofo as the cue system that follows the conductor but I’m having trouble finding the correct event type to write the score.

My first attempt was using Events, sending a “nextevent” message on every beat. It works, but any mistake in the conducting or in the beat recognition is pretty disastrous, since it can only be corrected manually or by changing the conducting (which is terrible for the musicians, of course).

Since I can tell beats apart, I tried to write a score with a note associated with every beat (e.g. beat 1 is always 61, beat 2 is always 62…) so that the score follows the measures directly. This would make Antescofo able to “catch up” when there’s a mistake. The notes would be triggered as MIDI notes, using @inlets MIDI on the antescofo object. I don’t really understand why, but after a “start” message Antescofo just waits for the first event and then goes on by itself. I can play the notes slowly, fast, or not play them at all; the score plays as if it had received a “play” message.

Does anyone know why the MIDI option is not working? Could anyone suggest other ways of writing a score to conduct Antescofo?

Thank you very much!!

Hi there!

Perhaps @giavitto or @arshiacont could clarify the behaviour of Antescofo, but to my understanding there are two issues you might need to consider.

  1. The use of HMM/semi-HMM chain - if what you are doing is like this:
    NOTE 61 1 measure1
    NOTE 62 1
    NOTE 63 1
    NOTE 64 1
    NOTE 61 1 measure2
    NOTE 62 1
    NOTE 63 1
    NOTE 64 1

And then you keep repeating this chain for say 100 measures, there aren’t any “distinct” features telling Antescofo which bar you are on. So what ends up happening (in my understanding) is that the listening machine becomes too “smart” and sometimes either lingers on bar 1 or teleports to bar 3.
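One possible workaround (just a sketch; the pitches here are arbitrary) is to give each bar its own pitch set, so that consecutive measures are never identical:

    NOTE 61 1 measure1  ; bar 1 uses 61 62 63 64
    NOTE 62 1
    NOTE 63 1
    NOTE 64 1
    NOTE 65 1 measure2  ; bar 2 shifts up to 65 66 67 68
    NOTE 66 1
    NOTE 67 1
    NOTE 68 1

With a distinct “fingerprint” per bar, the alignment has less room to linger on one measure or jump to another.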

  2. MIDI handling in Antescofo >0.92 - The listening machine gives consideration to neighbouring notes and harmonics of the note (e.g. given a C4, a C5 can be taken for the C4, since if you are using a microphone, it sometimes picks up C5 stronger than C4 on an instrument).

What you might consider is using a setvar that triggers next_event. Mr. Giavitto kindly provided an algorithmic follower example here - MIDI handling in v1.0-410 - #3 by cow. It works really well - though I sometimes have issues when I exit the midi_follower function (it gets stuck in limbo, listening for the setvar, when I am back to HMM following).
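For reference, the basic shape of that setvar-driven approach looks roughly like this (the variable and receiver names here are hypothetical, not the ones in the linked midi_follower.asco.txt):

    ; in Max, send "setvar $beat_in 1" for each detected beat
    whenever ($beat_in)
    {
        ; route this to a [receive to_asco] in Max and turn it
        ; into a "nextevent" message back into the antescofo~ object
        to_asco bang
    }

The recognition stays outside Antescofo; the whenever only reacts each time the variable is assigned.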

Other strategies include this tabla following: Tabla and input representation
This example is pretty much what you are looking for, except that it only sends nextevent messages, so if you accidentally skip beat 2 (or Antescofo doesn’t recognize it), you will then need to do beats 2 and 3 in quick succession.
And a really interesting example by José Miguel Fernandez: https://www.youtube.com/watch?v=I2VmJonTbKM

If anyone on this forum knows how he did this I would love to know more!


Hey @cow, thanks for your reply and suggestions.

About your first point on the score: my score does look like your example. I’m very curious to know whether that’s what’s happening, if Mr. Cont or Mr. Giavitto could confirm. It would explain a lot, but I would assume there’s some way around it, since the José Miguel Fernandez score that you linked only uses “NOTE 60” as events.

I have no idea what’s going on there (and would also love to know more about it), but it seems as if Antescofo only watches for the first beat of each measure and from that controls the tempo of all the synthesis. He seems to be using MuBu objects to track his gestures using accelerometer data from his gloves (??). Really interesting indeed!!

The tabla idea doesn’t really work for me because I also want to conduct electronics with real musicians, so it would be incredibly confusing to quickly “correct” beats just because the computer got lost.

Thanks for the algorithmic MIDI follower!! I’ll try to adapt it to what I’m doing. The most important thing for me is handling errors, and it appears to do that. I think it might work best for me.

@RaphaelVilani If your “recognition” (of gestures, in this case treated as Events) is happening outside, I would personally do as you did, with the following details:

  • have fake events like “Note c4 1.0” each with a beat duration.
  • send “suivi off” to Antescofo so it wouldn’t “recognize” anything. Then simply “start” and when you have your beats, just send a “nextevent” message: this way you are using Antescofo as a smart sequencer driven by a recognition agent outside.
  • I assume that you want Antescofo to infer the tempo. If not, send a “tempo off” message or put that in your Asco.
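As a minimal sketch of that setup (the labels and the action are placeholders), the score could be a list of one-beat fake events carrying the electronic actions:

    NOTE C4 1.0 beat1
        print "bar 1, beat 1"  ; replace with your actions
    NOTE C4 1.0 beat2
    NOTE C4 1.0 beat3
    NOTE C4 1.0 beat4

In Max you would then send “suivi off” and “start” once, and one “nextevent” message per detected beat, as described in the points above.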

Then comes the real thing: writing actions and compound actions that react to your events. That’s the fun part with Antescofo.

Imagine that you want to playback a simple note sequence as a musical Phrase (which is a simple Group in Antescofo language).

How do you want that phrase to react to your Conductor?
  • You want it to fire on the right event and then make a “best effort” with the recognized tempo? This is the default behavior.
  • You want it actually to end at a specific point in the future? (Check Trapani’s experiments with Canons.)
  • You have anchors in that phrase that should stay in sync in the middle of the phrase? That’s Marco Stroppa’s Pivots, and one of the best examples of this is Manoury’s Partita 2 (cc @lemouton ).
  • You want them to sync as if they always converge, say, 4 beats in the future?

These are all possible via “Synchronization Strategies”, which are attributes of compound actions (Group, Loop, Curve, etc.).
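As an illustration (attribute spellings and defaults should be checked against the reference manual for your Antescofo version; synth_note is a hypothetical Max receiver), a group can carry a synchronization attribute:

    NOTE 60 1.0 cue1
        group phrase @tight
        {
            0.0 synth_note 64
            0.5 synth_note 67  ; half a beat later, resynced on events
            0.5 synth_note 71
        }

Roughly speaking, @tight resynchronizes the actions on the nearest events, while a loose group only follows the inferred tempo (the best-effort behavior mentioned above); target-style attributes cover the convergence cases.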

These days I am no longer the best Antescofo programmer! :innocent: There are different ways to achieve the same thing.

My 2 cents: you already have a recognition machine outside. Don’t make it more complicated by adding a new one. Enjoy the Timed Synchronous language of Antescofo instead which is designed to be deterministic and suitable for performance and rehearsal.

Let us know what the end goal is and we might be able to be of more help.

Arshia Cont


Thank you for your reply, Mr. Cont! I believe I have solved the problem after your answer and after looking through @cow 's topic on MIDI handling.

The solution was actually a lot simpler than I imagined, because of a silly mistake: I was sending MIDI notes to Antescofo without muting them with a velocity of 0 afterwards. I was just sending “61 127” on beat 1, “62 127” on beat 2, and so on. After the first bar, Antescofo just went on by itself, which I believe happens because the note it is expecting is still “there” (I’m basing this on the existence of the “pitch_tab”). Just by using a [makenote] object, it seems to work as Antescofo normally does, still keeping the suivi on.

for a bit more context:

I want to have a traditional ensemble + electronics, reacting both to traditional conducting gestures and to specific gestures recognized to control some processes (e.g. raising the left hand makes the synthesis noisier; making a “two” or “three” with the fingers triggers specific processes, and so on, which I intend to do with setvars). In the end, I want to explore all the kinds of synchronization I can in this setup.

For that to happen, Antescofo should follow the conductor as it does with instruments, both for tempo inference and for dealing with mistakes by the conductor or by the beat-recognition system. So the most useful scenario is that each beat is a specific note: if the system does not recognize beat 3 in a 4/4 measure, then when it sees the movement for beat 4, it knows it needs to catch up. In this scenario, the “nextevent” message doesn’t work well because it’s hard to correct without causing confusion among the musicians. But I believe this discussion helped me figure out the way.

Thank you very much!!

Hello @RaphaelVilani.

The “combinatorial MIDI follower” (see the file midi_follower.asco.txt attached in the thread) does not require a MIDI note-off event. You only need to signal the onset of the expected MIDI note at the appropriate time, for example by setting setvar $pitch_in.
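Concretely (a sketch only, assuming $pitch_in is indeed the variable the attached follower listens to), one Max message per detected onset is enough, and the score side could observe what arrives like this:

    ; from Max, per detected beat:  setvar $pitch_in 61
    whenever ($pitch_in)
    {
        print "onset received: " $pitch_in
    }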

If you use the MIDI mode of the listening machine, then yes, a note-off event is required, for exactly the reason you mention.

If you are interested in using another listening machine, you may want to look at a paper that will appear at ICMC 2026: Automatic Hybrid Following in Real-Time Mixed Music: A Case Study with Antescofo and ipt˜ for Flute Playing Techniques (1.0 MB). The paper explains how several listening machines can be used together and how one can switch between them when needed.

José Miguel Fernandez has developed two approaches based on gesture following. In Sources rayonnantes, he uses the Gesture Follower developed by the ISMM team to track the gestures of the conductor, who is equipped with R-IoT sensors. The Gesture Follower performs a gesture-to-gesture alignment with a prerecorded reference gesture. Anchor points are defined on this reference gesture; when the live gesture reaches one of these points, a notification is sent to Antescofo (using setvar). This notification triggers (via whenever) a next_event, allowing the system to advance in the score.

The score itself remains a standard Antescofo score, where electronic actions are anchored to musical events. In this configuration, however, the musical events are not detected by the built-in listening machine but are instead notified by the Gesture Follower through the setvar / whenever / next_event mechanism.

This type of setup—essentially following the conductor—has also been used in other works, such as Idea by Sampo Haapamäki, see also IDEA project: Conducting Gestures Dataset | Ircam Forum. If you are interested in pursuing this direction, you should contact Serge Lemouton, who has implemented several such setups.

In the longer-term project Gekipe, developed by José Miguel Fernandez together with HEM Genève, a Kinect sensor tracks the position of the performer’s hands relative to the torso. The hands are also equipped with R-IoT sensors, and the accelerometer data streams are sent to Antescofo together with the hand position data. Some preliminary processing of these data streams is used to categorize gesture types (for example kicks, smooth directional motions, etc.). The detected gestures and their positions are then interpreted contextually within the score. This approach is therefore performer-centered, and if you want to explore it further, you should contact José Miguel Fernandez.

Finally, a remark regarding your last statement:

“For that to happen, Antescofo should follow the conductor as it does with instruments, both for tempo inference and for dealing with mistakes by the conductor or by the beat-recognition system…”

It is important to note that Antescofo has no concept of musical measures. Musical events are specified without bar information: the score is represented as a flat sequence of elementary musical events (notes, chords, trills, etc.), rather than as measures containing beats.

Moreover, when using the Gesture Follower to track a conductor, the system does not really consider the notion of a “mistake.” Instead, it continuously aligns the incoming accelerometer data with the prerecorded gesture, stretching or compressing the time axis when necessary. In this framework, gestures are not “missed” or “skipped”; rather, the system simply assigns different probabilities to the gestures stored in its gesture dictionary during the alignment process.

Hope this helps.


Dear @giavitto,

Thank you very much for your answer and for all the resources you linked in your post. The projects are super interesting and I had not heard about them. I took a quick look at all of them and will make sure to have a deeper read later.

Just to clarify my quote about the mistakes: as of right now, the gesture following happens completely outside of Antescofo and I’m translating beats in the measure to MIDI notes, so if one beat is missed and the following beat (note) is identified, Antescofo catches up, even if it doesn’t know it is a musical measure. I’m in an early phase of testing everything, but I have tried this a few times and it does seem to work consistently.

My project does look like the IDEA project and Gekipe! As things develop and I find new challenges, I’ll make sure to contact Mr. Lemouton and Mr. Fernandez to learn more about their experience.

Thank you for all the ideas!!