
Integrating Antescofo in Human-robot collaboration

I am working on a project that focuses on synchronization between a human performer and a musical robot. Our idea is to use Antescofo to make the musical robot synchronize with the human. For example, when the human player changes the tempo a little to enrich the musical expression, the robot should follow the tempo change.
The robot listens to MIDI; however, there is a fixed latency between the robot receiving the MIDI signal and producing the sound. Because of this, we somehow need to predict the timestamp of the next note based on the current tempo. We tried to use Antescofo's RT_TEMPO to calculate the next timestamp and then offset the latency, but it does not work very well. Is there an elegant way to deal with this latency issue? I guess I may need to extract some more information from Antescofo and put it into the score text file in a sensible way.
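To put it concretely, what we are trying to compute is roughly

trigger_time = time_of_current_note + (60000 / RT_TEMPO) * beats_to_next_robot_note - robot_latency_ms

so that the trigger is sent early enough for the robot's sound to land on the intended beat.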

@liang: The very latest version of Antescofo has a new feature for latency compensation. This is exactly what you need! @echeveste will reply with details on how to employ it for your robotic use case.

Also, you might want to use the @target synchronisation strategy for the accompaniment in your case. See this post (link) for examples.

You will need both the new @latency feature and the accompaniment synchronisation strategies to make your scenario work.

Thanks a lot, Arshia! I would like to try the new latency-compensation feature in the latest version. Actually, I can tell the difference between @target, @tight, and @loose in my scenario. For now, @tight seems to work better than the other two, although theoretically @target should be the best of the three. Maybe the way I compute and offset the latency has some problems. I will keep working on it, and I look forward to hearing from @echeveste.

Thank you for your feedback.

The @tight strategy never anticipates the detection of an event: the accompaniment always waits for the upcoming detection before going on.
This can be an issue for systems with output latency, since the latency requires triggering the accompaniment “before” the detection.

Note that you can combine @tight with the @progressive attribute: in that case, the accompaniment will not wait for the upcoming detection if it happens too late.

Note also that @target[x] with a very small time x (e.g., x = 0.05s) behaves similarly to @tight @progressive, while larger values of x give a smoother accompaniment.
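Schematically, the two variants look something like this (robotNote just stands for whatever receiver drives your robot in Max; check the reference manual of your Antescofo version for the exact attribute syntax):

NOTE C4 4.0 measure1
    group robot_tight @tight @progressive
    {
        0.0  robotNote 60 100   ; placeholder messages to the robot
        1.0  robotNote 64 100
        1.0  robotNote 67 100
    }

NOTE D4 4.0 measure2
    group robot_smooth @target[0.05s]   ; a very small target behaves like @tight @progressive
    {
        0.0  robotNote 62 100
        1.0  robotNote 65 100   ; a larger target (e.g., @target[2]) smooths out tempo changes
    }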

Thank you for your reply.

In our system, the latency between the trigger signal and the actual sound output from the robot is 500 ms, which is very long in terms of anticipation and synchronization. So I tried to use the current RT_TEMPO and offset the latency as (60000/RT_TEMPO * n - 500) ms (e.g., n = 4 when the piece's time signature is 4/4: I want the first note of the current measure, which I am playing, to trigger the first note of the next measure, which the robot will play). In other words, the nth measure of the human part triggers the (n+1)th measure of the robot part. This approach does not work very well. Observing the tempo estimation value in Max, I can see that it is quite sensitive even when I try to play at a constant tempo. As a result, the tempo of the robot's playing is not stable.
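For example, if the tempo estimate is around 120 BPM and n = 4, the offset is 60000/120 * 4 - 500 = 1500 ms, so the robot trigger goes out 1.5 s after my downbeat. But if RT_TEMPO momentarily jumps to 126 BPM, the same formula gives about 1405 ms, i.e. a shift of roughly 95 ms from one measure to the next, which is clearly audible.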

I also tried to imitate the approach used in the 'sync' example patch: grouping the notes and chords of the robot part, putting the group after the first note of the human performer's part, and offsetting the latency using RT_TEMPO. This works better than the method I mentioned above. It is smoother, but it still goes out of sync often. I guess I need to either find a way to smooth RT_TEMPO and use @target appropriately, or explore another approach to synchronization.

Is there a way to get a less sensitive RT_TEMPO value? Or any other decent way to offset the latency?
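For instance, would something like a running average of $RT_TEMPO inside the score be a reasonable idea? This is just a sketch: I am assuming the whenever re-evaluates each time $RT_TEMPO is updated, and the 0.9/0.1 weights are arbitrary.

NOTE C4 1.0 start
    $smooth_tempo := $RT_TEMPO          ; initialise from the first estimate
    whenever ($RT_TEMPO > 0)
    {
        ; exponential moving average of the raw tempo estimate
        $smooth_tempo := 0.9 * $smooth_tempo + 0.1 * $RT_TEMPO
    }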

Thank you very much!