Antescofo and SuperVP for Automatic Accompaniment


Hello and happy new year!

I would like to reproduce the effect of this demo video:
Is it using the SuperVP for Max plugin found at ?

I assume Antescofo tracks the tempo and drives SuperVP which does the realtime stretching of the wav file.
Arshia can you please shed some light on how to achieve this?


Hi Gorlist,

Automatic accompaniment has long been thought to be an easy affair. Our extended studies over the past year have shown that it is tricky, especially when the accompaniment is audio. This is also one of the main reasons we restructured the Antescofo language in the past months. Describing the entire process requires more space and we will do that in a dedicated article. Meanwhile, here are several insights on how you can get along.

Keep in mind that good accompaniment is neither synchronization with tempo alone nor with position alone, but both! Think of this as synchronizing to tempo, while also wanting the accompaniment to wait for the solo musician or jump to the beginnings of phrases. In our approach, this is left to the artist to program using attributes of the Antescofo language.

Moreover, our experience has shown that a scientifically correct synchronization will lead to a musical disaster! In live situations, the accompaniment serves as feedback to the musicians, making both drag slower and slower. I won’t go into the details explicitly, but will shed some light in the solutions below.

In what follows, I am going to draw on our experiments with voice/karaoke accompaniment, piano concerto accompaniment, and jazz accompaniment. You will need:


  • A working version of Antescofo for Max or Puredata
  • The score of the solo part (the piano score of a Piano Concerto, the voice melody of a song, the vibraphone score of Round Midnight).
  • SuperVP for Max, or a good quality real-time time-stretching algorithm. SuperVP M4L devices will work as well.
  • A Microphone! (can work with a cheap one or internal mic depending on the piece and instrument)

Preparing your Solo Score

For most existing pieces, you should be able to find a MIDI or MusicXML version of the score on the internet. Once you have one, just drag and drop it into AscoGraph (which will come up by double-clicking the Antescofo object in Max, or by launching the application in the Antescofo package for Pd users). It will be automatically converted to the Antescofo format. It is always good practice to double-check the consistency of the score, since some MIDI and MusicXML files found online are malformed. An easy way to check is to press the [play] button in either Max/Pd or AscoGraph and listen to the MIDI simulation of the score.
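For reference, the converted result is simply a plain-text Antescofo score listing the solo events. A minimal hand-written sketch (the pitches, durations, and labels below are invented for illustration) looks something like this:

```
BPM 120
NOTE C4 1.0 start            ; pitch, duration in beats, optional label
NOTE D4 0.5
NOTE E4 0.5
CHORD (C4 E4 G4) 2.0 phrase2 ; simultaneous notes are written as a CHORD
```

Pressing [play] in AscoGraph renders exactly this event list as a MIDI simulation, which is why it is a quick consistency check.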

Choosing your Time Stretch Strategy

With SuperVP for Max, you have two choices represented by two Max objects, both stretching an audio file preloaded in a Buffer object (see each object's help patch):
(1) The SuperVP.Play~ object: by default, used for stretching between two time tags in the buffer (begin, end) within a given time (all in milliseconds).
(2) The SuperVP.Scrub~ object: allows stretching continuously using the buffer position (think of a cursor moving over the buffer at varying speed).

Now, simplistically speaking, strategy (1) is a good one if you have macro-segmentations of your accompaniment audio (e.g. phrase positions to synchronize), whereas the second strategy is good if you have micro-segmentations of the audio (e.g. beat positions). They can of course be combined.

Preparing your Accompaniment Score

In general, you put the accompaniment commands inside the solo score prepared previously. Based on the chosen strategy, there are several (easy) ways to do this. Here I am going to walk you through the basic approaches, each leading to a different musical behavior, and leave the details to a more comprehensive article. I start with the historical (and easiest) one and move towards slightly harder (but more convenient) solutions:

(1) Mark down musical positions in the accompaniment audio that musically correspond to the solo part. Then, at each starting position in the solo score, trigger SuperVP.Play by asking it to play from start position to end position (marker i to marker i+1) over a duration corresponding to the beat duration of that excerpt, translated by a function to milliseconds. Additionally, update the SuperVP.Play speed parameter using the real-time tempo detected by Antescofo. This is the approach in the video posted 2 years ago for Mozart's Piano Concerto No. 24. If I had to redo this example, I would use approach #3 below!
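As a rough sketch of this first strategy (the receive name "svp", the marker times, and the play/speed message formats are all assumptions for illustration, not the actual patch), the idea in Antescofo score syntax is: attach an action to the event opening each phrase, converting its beat duration to milliseconds with the detected tempo variable $RT_TEMPO:

```
; on the phrase's first note, stretch the segment between markers
; 0 ms and 4000 ms over 4 beats at the currently detected tempo
NOTE C4 4.0 phrase1
    svp play 0. 4000. (4.0 * 60000. / $RT_TEMPO)  ; beats -> milliseconds
    svp speed ($RT_TEMPO / 120.)                  ; 120 = nominal score tempo (invented)
```

The second message continuously adjusts the stretch factor so the segment keeps up if the musician drifts between phrase starts.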

(2) Load the accompaniment audio into AudioSculpt and run IrcamBeat. It'll put markers on every beat! Export the markers as TEXT from the AudioSculpt menu. Each marker address (in milliseconds) corresponds to a beat position. Rearrange them into pairs, where the first element is an increasing integer (the beat position) and the second the buffer address. Put them in a Curve at the beginning of your score, with an Action triggering SuperVP.Scrub.
This is a smart solution: The Curve will automatically create linear curves that will scrub the buffer and hence synchronize with tempo (of the score and of the musician). Your Accompaniment Audio is now time-free! Additionally, add a @tight attribute to the curve to force it to synchronize with positions on top of tempo.
This is basically how the RoundMidnight Accompaniment was done.
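Concretely, with a handful of invented marker values (the real demo is discussed later in this thread), such a scrub Curve could look like the following; ScrubPos is assumed to be a message/abstraction forwarding $x to SuperVP.Scrub~ as a buffer position:

```
; one breakpoint per beat marker: 1 beat to reach each buffer address (in ms)
curve SVP @grain := 0.05, @tight, @action := ScrubPos $x
{
    $x {
        { 0. }
        1.0 { (0.512*1000.0) }   ; beat 1 at 0.512 s in the buffer (invented)
        1.0 { (1.034*1000.0) }   ; beat 2
        1.0 { (1.551*1000.0) }   ; beat 3
    }
}
```

The @tight attribute is what forces the curve to re-anchor on detected event positions on top of following the tempo.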

(3) Enhance strategy #2: we do not always want to synchronize with both tempo and position. There are places (called pivots by composer Marco Stroppa) whose position values matter more for synchronization (e.g. the beginning of a phrase). This means that we do not want to be tight on all beat positions but only on some! This can be done simply by giving position labels to the @tight attribute. Moreover, you might want your accompaniment to have a different tempo than that of the musician; you might want it (for example) to converge at the END of a phrase. This can be done with the new @target attribute.
This is basically how the Hey Jude accompaniment demo was done.
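A hedged sketch of the @target variant (breakpoint values invented; the @target [4#] syntax is the one used in the Round Midnight demo later in this thread):

```
; converge anticipatively: at each instant, aim for musician and
; curve to coincide 4 events ahead, rather than snapping to every beat
curve SVP @grain := 0.05, @target [4#], @action := ScrubPos $x
{
    $x {
        { 0. }
        4.0 { 8000. }    ; pivot: start of next phrase (buffer position in ms, invented)
        4.0 { 16000. }   ; following pivot
    }
}
```

Compared to @tight, the accompaniment is free to deviate locally and only commits to meeting the musician at the chosen horizon.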

Once your score is ready, then you just plug it in, press start and play along just as in the videos!

The last approach is somewhat new and is being intensively tested as we speak! We hope to publish an article with accompanying examples for all three (and other) approaches mentioned here. Meanwhile, hope this helps.


Arshia thanks for the great reply!

Automatic accompaniment has long been thought to be an easy affair. Our extended studies over the past year have shown that it is tricky, especially when the accompaniment is audio.
Is this because the input from the microphone is not the solo instrument only? From what I've read, Antescofo can only follow one instrument at a time. It might be a silly idea, but have you tried real-time audio subtraction? Since the accompaniment part is known beforehand and it gets played on top of the solo part (with the few milliseconds of delay that Antescofo introduces), it might be possible to isolate the solo instrument's input by processing the audio stream in real time.

Something like .


You said:

Is this [tricky automatic accompaniment with audio] because of the input from the microphone not being the solo instrument only? From what I’ve read Antescofo can only follow one instrument at a time. Might be a silly idea but have you tried realtime audio subtraction?

Not really. Antescofo (in principle) is intelligent enough not to confuse the solo with the accompaniment section. If the speakers (diffusing the accompaniment) and the microphone (for the solo performer) are fairly spread apart, and you go through the Calibration Process (as indicated in the HELP and Tutorial), you should not have any major feedback problems.

Just as a side note: Antescofo is usually used for synchronizing live electronics with performers (think of the accompaniment audio replaced by several or many audio effects!). We have done concerts in major halls where the solo flute was surrounded by a huge (and loud) orchestra. This is for example the case in Pierre Boulez's …Explosante-Fixe… (see the video on this link… that performance actually uses Antescofo; note the microphone on the flute towards 1:10).

HOWEVER: I have experienced feedback problems on iPads when using the internal mic, which is very close to the speakers (genius design :). There, we have problems. I tried some real-time subtraction algorithms (already in the iOS SDK). The downside of all that is introducing DELAY into recognition, which is bad. But even in the iPad case, if you use external speakers or a cheap Bluetooth microphone, the problem is solved!

In short: The tricky aspect of automatic accompaniment is not actually technical but musical! :slight_smile: We will try to document all the approaches I mentioned in my previous post with examples so you and others can test. We are also in the process of finalizing some tests on this matter.

For Info: There will be a dedicated session on Automatic Accompaniment in the next Acoustical Society of America (ASA) meeting in Providence (May 2014). We will be there!


@Arshia, can you please do a tutorial on the integration of SuperVP and Antescofo?
I’m particularly interested in strategy 1 and 3.
While I understand what you describe above in theory, I don’t know how to implement it.


Attached to this post is a compressed file containing a small Max patch for creating automatic accompaniment using Antescofo and SuperVP. The ZIP file also contains a step-by-step demo on Thelonious Monk's "Round Midnight" (30 seconds), including a score, an accompaniment audio file, and a simulation file. To download the file, you need to view the post on the web and be logged in.

To run the demo, you need the Antescofo and Supervp.scrub objects either in your Max path or a copy of them next to the patch.

Just follow the steps in the patch to see the demo! You can use the “Simulation_piano.aif” file to simulate a musician playing. Alternatively, you can use a microphone (don’t forget to calibrate).

Once you load the score “RoundMidnight_Score.txt”, double-click on Antescofo object (open AscoGraph) to see the score.

This is how I produced this demo:

Preparing the solo score

It is easy to find a MIDI score of the piece online! Once found, just drag and drop it into AscoGraph, then save the file somewhere safe as an Antescofo text score. It'll contain only the instrumental part. Next, we'll prepare the audio accompaniment.

Preparing the Accompaniment

Here, we attempt to do audio accompaniment on "RoundMidnight_Accompaniment.aiff", which contains everything except the solo. To do this, we use the SuperVP.scrub object, which basically does (very high quality) real-time phase vocoding controlled by the position on the audio buffer. We need to establish the correspondence between the instrumental score and the accompaniment audio. To do this, I extract beat positions from the accompaniment audio using the IrcamBeat technology already in AudioSculpt, convert them to a Curve, and insert it into the score. Here is the procedure to follow:
  1. Open AudioSculpt and load your accompaniment file (here it is "RoundMidnight_Accompaniment.aiff")
  2. Press the IrcamBeat button and follow the procedure. It'll simply put markers on the beat positions! Listen to the result for fun!
  3. Go to the File menu, choose Export Analysis as Text, then Export Markers As, and save the text file somewhere safe. (See the sample result in "Markers/RoundMidnight_Markers.txt")
  4. Open the text file in an editor that supports regular expressions for Find/Replace (I used TextWrangler). For FIND, use the pattern ^[ \t]*([0-9.]+)\s([0-9.]+)\s and for REPLACE, use the pattern \t 1.0 \{ \(\1*1000.0\) \} \;. This converts your markers into breakpoints for Curves in Antescofo! (See the sample in "Markers/RoundMidnight_Markers_afterGrep".)
  5. Wrap this with proper Curve syntax and insert it into your Antescofo score! See the "RoundMidnight_score.txt".
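For step 5, the wrapped result looks roughly like this (the marker times here are shortened and invented; the real Curve is in "RoundMidnight_score.txt"):

```
curve SVP @grain := 0.05, @target [4#], @action := ScrubPos $x
{
    $x {
        { 0. }                   ; start of the accompaniment buffer
        1.0 { (0.487*1000.0) }   ; first IrcamBeat marker (seconds -> ms)
        1.0 { (0.992*1000.0) }   ; second marker
        ; ... one breakpoint per exported marker
    }
}
```

Each breakpoint takes 1.0 beat, so the curve advances through the buffer at exactly one IrcamBeat marker per score beat.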

The best way to test the coherence of your result is to open the patch, load your score, and simulate it using the play command! If the correspondence between the MIDI simulation of the instrument and the audio accompaniment is correct, then you're fine.

NOTE: The Curve in the score above interpolates between break-points with a time-grain of 0.05 beats and sends them to SuperVP as buffer positions to synthesize.
Additionally, the @target [4#] attribute is our synchronization strategy here! It basically says: at each moment, try to synchronize so that the musician and the curve coincide 4 events later (4#) (think of anticipation).

There is definitely more to synchronization strategies than what is described here. We are in the process of publishing and preparing documentation on this new feature. (6.49 MB)


Arshia, thank you for your quick reply.
The problem with this tutorial is that it requires AudioSculpt, so those of us with SuperVP only can’t really use it.
Could you also prepare a tutorial based on the other strategies that don’t depend on AudioSculpt?


Hi Gorlist.

AudioSculpt is used here just to produce the beat markers. You can use Pro Tools or Logic instead.

Audacity is free, open-source software you can also use to produce the markers; see for instance this help page. You can place the labels by hand and then export them. You then have to edit the exported file to match the syntax used in the example given by Arshia.

Alternatively, you can install the Queen Mary Vamp plugins, which include a beat tracker (I have never used it, so I cannot vouch for the quality of the produced markers).



As Jean-Louis mentioned in his email, the only thing you need is to be able to put MARKERS on your accompaniment audio and translate them to CURVE points in Antescofo. The MARKERS should correspond more or less to your synchronization points. One easy way to do this is to just “tap” in the beats by hand while the audio file plays, recording them as markers; some software allows that. Once the markers are created, you need to export them somehow as lists in a text file. A good hack for this is the sfmarker~ object in Max! As mentioned in my previous message, this is rather straightforward with AudioSculpt. We are thinking of integrating this workflow into AscoGraph through drag-and-drop of audio in a future version.


Bringing this topic back into focus. A lot of great information in this thread, thank you Arshia. Is everything here current or have there been more advancements in automatic accompaniment techniques?

I’ve started using strategy #2 described above. Instead of using AudioSculpt, I just extracted the important sync points from the score manually. I then have a curve driving supervp.scrub~ that looks like this:

curve SVP  @Grain := 0.05 , @Action := ScrubPos $x  
	 { 0. } ; start	  
	 32 { 8000. }  ;m2	A  
	 20 { 13000. } ; m3  
	 52 { 26000. } ; m6 B  
	 20 { 31000. } ; m7  
	 100 { 56000. } ; m12 C  

This is of course similar to the Round Midnight example you posted. Now, wouldn’t it be nice if instead of writing out / computing all of these curve breakpoints, we could generate them dynamically?
Something like:

$t := tab [0, 32., 52., 104.]  
$n := @dim($t )  
forall $i in [$x | $x in 1 .. $n ]  
	curve SVP  @Grain := 0.05 , @Action := ScrubPos $y  
		$y {  
			{ ($t[$i-1] * 250.) } ; from  
			($t[$i-1] - $t[$i])  { ($t[$i] * 250.) }  ; to  

But this doesn’t work, and I’m not sure why. Are the curves even being created?
In any case, this would create many little curves, which isn’t exactly what we want. Something like this would be even better:

curve SVP  @Grain := 0.05 , @Action := ScrubPos $x  
		{ 0. } ;start  
		forall $i in [$y | $y in 1 .. $n ]  
			($t[$i-1] - $t[$i])  { ($t[$i] * 250.) }  ; next  

But we get a syntax error, looks like forall is not allowed within a Curve definition.
So I guess this is my question: would there be value in allowing dynamic constructs such as “forall” inside Curve definitions?
Or is there an easier way to do this that I’m missing?



Hello Grig.

There are some ways to achieve the construct you mention. A Curve is a statement and its structure is fixed. Within that structure, the number of breakpoints is variable but statically known at the curve's definition. You can use an expression to define the elements of a breakpoint, but you cannot use an expression to define the entire set of breakpoints. However, there are solutions :slight_smile:

I can suggest two approaches to achieve your requirements.

The first approach is to consider a Curve with only two breakpoints and to embed this Curve in a Loop. Something like :

$duration := [...]   
$y := [...]  
$i := -1  
Loop L ($duration[$i+1])  
   abort C  
   $i := $i + 1  
   Curve C   
   @grain ...  
   @action ...  
      $x {  { ($y[$i]) } ($duration[$i]) { ($y[$i+1]) } }
} [(sizeof($t) - 1)#]

Notice two things: the index of the current breakpoint must stay the same during the whole Curve, so we update it first, at the entry of the loop body. The second thing to note is the abort command: it may happen that iteration i+1 starts at the same instant iteration i stops, which leads to performing two curve actions at the same date. To avoid this kind of overlap, we take care to cancel the previous curve (if there is one).

The second approach is perhaps closer to what you have in mind. There is an alternative version of the curve construct where, instead of a predefined set of breakpoints, you use a NIM. The curve then acts like a NIM player. A NIM is a data structure that behaves like a (breakpoint) function: you can apply it to an argument, for instance. You can also add breakpoints to a NIM dynamically. But BEWARE: if you change the value of a NIM while it is being played by a Curve, do not expect the Curve to reflect the change: the Curve considers only the value of the NIM at the moment the Curve is fired.

Here is an example of a dynamic NIM construction. The example has no special meaning but illustrates the idea that a NIM can record a set of changes in time and can be played later.

$nim := ...  

Curve C  
@grain ...  
@action ...  
   $x : $nim  

See the reference manual, page 84, for defining and managing NIMs, and page 53 for using a Curve to “play” a NIM.

One of the interesting properties of a NIM is that you can extend it dynamically. For example, in the following code:

$last := $RNOW  
$y := 1  

// This NIM has only one breakpoint. We will add new ones.  
$nim := NIM{ 0 0, (1+@rand(0.5)) $y }  

// This loop adds point to $nim  
// the duration of the new breakpoint is randomly chosen between 1 and 1.5  
Loop (1 + @rand(0.5)) {  
  // The interpolation type of the new breakpoint is also randomly chosen  
  $type := (@rand(1.) > 0.5 ? "SINE_IN_OUT" : "LINEAR")  

  // we add a new breakpoint with function @push_back  
  // this updates the NIM "in place" (no new NIM is created).  
  // The height of the new point is $y (here it increases by one  
  // each time we add a new breakpoint).  
  $dummy := @push_back($nim, $RNOW - $last, $y, $type)  

  // preparation for the new state  
  $y := $y + 1  
  $last := $RNOW  
} during[5#]  // we do this five times  

Thanks Jean-Louis, looks like the dynamically constructed NIM is what I was looking for!


I forgot to mention that the synchronization strategies apply equally well to a curve playing a NIM. You will find a detailed presentation of the existing strategies in the forthcoming PhD thesis of José Echeveste. If the curve is supposed to stay continuous despite tempo fluctuations and errors, an anticipative strategy has to be considered.


Hi guys, back to this old topic :slight_smile:

One question: is there a way to pause and resume Antescofo's follower (and, implicitly, the Curve running SuperVP) in between events?

Like say, have an external message trigger a setvar which causes the engine to freeze until another message is received?

I know we can abort a curve via external triggers, but what about pausing it?


By any chance, is there a zip folder for the Mozart Piano Concerto No. 24 and Hey Jude accompaniments?
Something like