Hi Gorlist,
Automatic accompaniment has long been thought to be an easy affair. Our extended studies over the past year have shown that it is tricky, especially when the accompaniment is audio. This is also one of the main reasons we restructured the Antescofo language in recent months. Describing the entire process requires more space, and we will do that in a dedicated article. Meanwhile, here are several insights on how you can get going.
Keep in mind that a good accompaniment synchronizes with neither tempo nor position alone, but with both! Think of it as synchronizing to tempo while also wanting the accompaniment to wait for the solo musician, or to jump to the beginning of phrases. In our approach, this is left for the artist to program using attributes of the Antescofo language.
Moreover, our experience has shown that a scientifically correct synchronization leads to a musical disaster! In live situations the accompaniment serves as feedback to the musicians, making both slow down further and further… . I won't go into the details explicitly here, but will shed some light on them in the solutions below.
In what follows, I am going to draw on our experiments with voice/karaoke accompaniment, piano concerto accompaniment, and jazz accompaniment.
Ingredients
- A working version of Antescofo for Max or Puredata
- The score of the solo part (the piano score of a Piano Concerto, the voice melody of a song, the vibraphone score of Round Midnight).
- SuperVP for Max, or a good-quality real-time time-stretching algorithm. SuperVP M4L devices will work as well.
- A Microphone! (can work with a cheap one or internal mic depending on the piece and instrument)
Preparing your Solo Score
For most existing pieces, you should be able to find a MIDI or MusicXML version of the score on the internet. Once you have one, just drag-and-drop it into AscoGraph (which will come up by double-clicking the Antescofo object in Max, or by launching the application in the Antescofo package for Pd users). It will be automatically converted to the Antescofo format. It is always good practice to double-check the consistency of the score, since some MIDI and MusicXML files found in the wild are corrupted. An easy way to check is to press the [play] button in either Max/Pd or AscoGraph and listen to a MIDI simulation of the score.
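Whether converted or written by hand, the result is a plain-text list of events. A minimal hand-written sketch of what a solo score looks like (the pitches, durations and label here are purely illustrative, and the comment style follows the Antescofo convention):

```
; minimal Antescofo solo score sketch (illustrative values)
BPM 120
NOTE C4 1.0 start_phrase1   ; pitch, duration in beats, optional label
NOTE D4 0.5
CHORD (E4 G4) 2.0           ; simultaneous notes
NOTE 0 1.0                  ; a rest is written as a NOTE with pitch 0
```

Durations are in beats relative to the declared BPM, which is what lets Antescofo follow the musician's tempo rather than absolute time.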
Choosing your Time Stretch Strategy
With SuperVP for Max, you have two choices represented by two Max objects, both stretching an audio file preloaded in a Buffer object (see each object help patch):
(1) The SuperVP.Play~ object: by default, it stretches the audio between two time-tags in the buffer (begin, end) so that it plays back within a given duration (all in milliseconds).
(2) The SuperVP.Scrub~ object: allows continuous stretching driven by buffer position (think of a cursor moving over the buffer at varying speed).
Now, simplistically speaking, strategy (1) is a good fit if you have macro-segmentations of your accompaniment audio (e.g. phrase positions to synchronize), whereas strategy (2) is good if you have micro-segmentations (e.g. beat positions). They can of course be combined.
Preparing your Accompaniment Score
In general, you put the accompaniment commands inside the solo score prepared previously. Depending on the chosen strategy, there are several (easy) ways to do this. Here I will pass along the basic approaches, each leading to a different musical behavior, and leave the details to a more comprehensive article. I start with the historical (and easiest) one and move towards slightly harder (but more convenient) solutions:
(1) Mark down musical positions in the accompaniment audio that musically correspond to the solo part. Then, at each such position in the solo score, trigger SuperVP.Play~ by asking it to play from start-position to end-position (marker i to marker i+1) over a duration corresponding to the beat-duration of that excerpt, converted to milliseconds. Additionally, update the SuperVP.Play~ speed parameter using Antescofo's real-time detected tempo. This is the approach in the video posted two years ago for Mozart's Piano Concerto No. 24. If I had to redo that example, I would use approach #3 below!
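A sketch of what one such segment trigger can look like in the score. The receiver names `svp-play` and `svp-speed` are assumptions for whatever [route]/[prepend] logic drives your SuperVP.Play~ patch (adapt the message format to your setup); `$RT_TEMPO` is Antescofo's detected-tempo variable. The duration is the segment's beat length converted to milliseconds at the written tempo, e.g. 8 beats at 90 BPM = 8 × 60000/90 ≈ 5333 ms:

```
BPM 90
; segment between marker 1 (0 ms) and marker 2 (4000 ms),
; spanning 8 beats of the solo score
NOTE C4 1.0 phrase1
    svp-play 0 4000 5333                        ; begin, end, duration (ms)
    ; rescale playback speed whenever the detected tempo changes
    whenever ($RT_TEMPO) { svp-speed ($RT_TEMPO / 90.0) }
```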
(2) Load the accompaniment audio into AudioSculpt and run IrcamBeat. It will put a marker on every beat! Export the markers as TEXT from the AudioSculpt menu. Each marker address (in milliseconds) corresponds to a beat position. Rearrange them into pairs, where the first element is an increasing integer (the beat position) and the second is the buffer address. Put them in a Curve at the beginning of your score, with an action triggering SuperVP.Scrub~.
This is a smart solution: the Curve will automatically create linear ramps that scrub the buffer and hence synchronize with the tempo (of the score and of the musician). Your accompaniment audio is now time-free! Additionally, add the @tight attribute to the Curve to force it to synchronize with positions on top of tempo.
This is basically how the RoundMidnight Accompaniment was done.
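A sketch of such a Curve, with the beat markers as breakpoints. The receiver name `svp-scrub` and the marker values are assumptions; check the breakpoint syntax (durations in beats between breakpoints) against the Antescofo reference for your version:

```
; beat -> buffer-address map, ramped linearly and sent every 50 ms
curve accomp @grain := 0.05s, @action := { svp-scrub $pos }
{
    $pos
    {
        { 0 }          ; beat 0 -> 0 ms in the buffer
        1 { 480 }      ; beat 1 -> marker at 480 ms
        1 { 965 }      ; beat 2 -> marker at 965 ms
        1 { 1440 }     ; beat 3 -> marker at 1440 ms
    }
}
```

Because the breakpoint abscissas are in beats, the ramp between two markers automatically speeds up or slows down with the followed tempo, which is exactly what makes the audio "time-free".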
(3) Enhance strategy #2: we do not always want to synchronize with both tempo and position. There are places (called pivots by composer Marco Stroppa) whose positions matter most for synchronization (e.g. the beginning of a phrase). This means we do not want to be tight on all beat positions, but only on some! This can be done simply by giving position labels to the @tight attribute. Moreover, you might want your accompaniment to have a different tempo than that of the musician… . You might want it (for example) to converge at the END of a phrase. This can be done with the new @target attribute.
This is basically how the Hey Jude accompaniment demo was done.
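Schematically, the only difference from strategy #2 is in the synchronization attributes on the Curve. A rough sketch of the idea (the label names, the receiver `svp-scrub`, and the exact spelling of the attribute arguments are assumptions — consult the Antescofo reference for @tight and @target in your version):

```
; as in strategy #2, but tight only at pivot labels of the solo score,
; and converging to the musician's position by the phrase end
curve accomp @tight[pivot1, pivot2], @target[end_phrase],
      @grain := 0.05s, @action := { svp-scrub $pos }
{
    $pos
    {
        { 0 }
        4 { 1920 }     ; buffer address at pivot1
        4 { 3850 }     ; buffer address at pivot2
    }
}
```

Between pivots the accompaniment keeps its own pace; at each pivot (and at the target) it realigns with the musician.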
Once your score is ready, just plug it in, press start, and play along, exactly as in the videos!
The last approach is somewhat new and is being intensively tested as we speak! We hope to publish an article with accompanying examples for all three (and other) approaches mentioned here. Meanwhile, I hope this helps… .