There are two main modules coupled in Antescofo: the listening machine, which implements the audio-to-score alignment, and the reactive, timed real-time programming language used to specify and implement the electronic score.
The listening machine is based on techniques (a variant of Markov chains) that belong to AI but do not involve ‘deep learning’. Here are a few references on the techniques involved:
- Arshia Cont. 2010. A Coupled Duration-Focused Architecture for Real-Time Music-to-Score Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 6 (June 2010), 974–987.
Note that the alignment must be performed in real time. This constraint is severe and rules out many methods.
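To make that constraint concrete, below is a minimal, hypothetical sketch (in Python) of online alignment with a left-to-right Markov chain over score events: each incoming audio frame (reduced here to a pitch estimate) updates the belief over score positions incrementally, so the follower never has to look ahead or reprocess past audio. The score, the transition probabilities and the toy emission model are assumptions for illustration only; this is not Antescofo's actual coupled duration-focused model.

```python
# Illustrative sketch of online audio-to-score alignment with a left-to-right
# Markov chain over score events. NOT Antescofo's actual model (which couples
# a duration model with the observations); it only shows the general idea of
# incremental, real-time decoding.
import math

SCORE = [60, 62, 64, 65, 67]     # expected MIDI pitches, one per score event (assumed)
SELF, ADVANCE = 0.7, 0.3         # left-to-right transition probabilities (assumed)

def emission(event_pitch, observed_pitch):
    """Toy likelihood of an observed pitch given the expected score pitch."""
    return math.exp(-0.5 * (observed_pitch - event_pitch) ** 2)

def make_follower(score):
    # belief over score positions, initialised on the first event
    belief = [1.0] + [0.0] * (len(score) - 1)

    def step(observed_pitch):
        """Consume one audio frame (here: a pitch estimate) and return the most
        likely current score position. Each call is O(len(score)), so the
        follower keeps up with the audio stream."""
        new = [0.0] * len(score)
        for i, p in enumerate(belief):
            if p == 0.0:
                continue
            new[i] += p * SELF                      # stay on the same event
            if i + 1 < len(score):
                new[i + 1] += p * ADVANCE           # move to the next event
        new = [p * emission(score[i], observed_pitch) for i, p in enumerate(new)]
        total = sum(new) or 1.0
        belief[:] = [p / total for p in new]        # normalise in place
        return max(range(len(score)), key=belief.__getitem__)

    return step

follow = make_follower(SCORE)
for frame in [60, 60, 62, 63, 64, 65, 67]:          # simulated pitch estimates
    print("frame", frame, "-> score event", follow(frame))
```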
For the language part, the techniques used are based on the interpretation and compilation of programming languages, with a particular focus on efficiency and real-time constraints (see the sketch after these references):
- Arshia Cont, José Echeveste, Jean-Louis Giavitto, and Florent Jacquemard. 2012. Correct Automatic Accompaniment Despite Machine Listening or Human Errors in Antescofo. In Proceedings of the International Computer Music Conference (ICMC). IRZU - the Institute for Sonic Arts Research, Ljubljana, Slovenia. http://hal.inria.fr/hal-00718854
- José Echeveste, Florent Jacquemard, Arshia Cont, and Jean-Louis Giavitto. 2013. Operational Semantics of a Domain Specific Language for Real Time Musician-Computer Interaction. Discrete Event Dynamic Systems 23 (December 2013), 343–383.
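As an illustration of the kind of runtime such a timed reactive language needs, here is a minimal, hypothetical sketch (in Python) of a scheduler: actions of the electronic score are attached to detected events, delayed in beats, and fired against a tempo-dependent logical clock. The class name, the electronic_score mapping and the callbacks are assumptions for illustration only; this is neither Antescofo's runtime nor its DSL syntax.

```python
# Illustrative sketch of a timed reactive "electronic score": actions are
# attached to detected score events and fired after delays expressed in beats,
# converted to seconds with the current tempo estimate. This is only a sketch
# of the general idea, not Antescofo's actual runtime or language.
import heapq

class TimedScheduler:
    def __init__(self, bpm=60.0):
        self.bpm = bpm                 # tempo estimate (may be updated in real time)
        self.now = 0.0                 # logical time in seconds
        self.queue = []                # (due_time, seq, action) min-heap
        self._seq = 0

    def schedule(self, delay_beats, action):
        due = self.now + delay_beats * 60.0 / self.bpm
        heapq.heappush(self.queue, (due, self._seq, action))
        self._seq += 1

    def advance(self, dt):
        """Advance logical time and fire every action that is now due."""
        self.now += dt
        while self.queue and self.queue[0][0] <= self.now:
            _, _, action = heapq.heappop(self.queue)
            action()

# The "electronic score": for each detected event, a list of (delay in beats, action).
electronic_score = {
    0: [(0.0, lambda: print("event 0: start reverb")),
        (1.0, lambda: print("event 0 + 1 beat: trigger sample"))],
    2: [(0.5, lambda: print("event 2 + 0.5 beat: change filter"))],
}

sched = TimedScheduler(bpm=120.0)

def on_event_detected(event_index):
    """Called when the listening machine recognises a score event."""
    for delay_beats, action in electronic_score.get(event_index, []):
        sched.schedule(delay_beats, action)

# Simulated run: events detected over time, clock advancing in 0.25 s steps.
on_event_detected(0)
for _ in range(4):
    sched.advance(0.25)
on_event_detected(2)
for _ in range(4):
    sched.advance(0.25)
```

In a real system the tempo estimate and the clock advances would be driven by the listening machine rather than simulated as above.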
You can find additional documents on this page.