Hi @philippesalembier,
First of all, thank you for your interest in Dicy2, please feel free to share your projects following the contributions here: https://discussion.forum.ircam.fr/t/thread-projects-by-forum-members-using-dicy2 !
I will first answer your question in brief, then point out theory-oriented tutorials dealing with these questions, and finally give some links to publications.
Briefly:
1) Optimality with regard to the scenario/query enabled by the model:
The algorithm combining machine learning and pattern matching ensures that the result of a query will be the best possible tiling of subsequences from the source material in terms of sequence length and temporal context (depending on parameters chosen).
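To make point 1) concrete, here is a toy Python sketch. This is not Dicy2's actual algorithm, just the general idea of greedily tiling a scenario with the longest contiguous subsequences ("factors") found in the memory; all names here are hypothetical:

```python
# Toy sketch (NOT Dicy2's implementation): greedy tiling of a scenario
# with the longest contiguous label-matching subsequences of the memory.
# `memory` is a list of (label, content) events; `scenario` is a list of labels.

def longest_match(memory, scenario):
    """Length and start index of the longest contiguous run in `memory`
    whose labels match a prefix of `scenario`."""
    best_len, best_start = 0, None
    for start in range(len(memory)):
        k = 0
        while (start + k < len(memory) and k < len(scenario)
               and memory[start + k][0] == scenario[k]):
            k += 1
        if k > best_len:
            best_len, best_start = k, start
    return best_len, best_start

def tile(memory, scenario):
    """Greedily cover the scenario with the longest matching memory factors."""
    output = []
    while scenario:
        n, start = longest_match(memory, scenario)
        if n == 0:                      # no memory event carries this label
            scenario = scenario[1:]     # skip the scenario step
            continue
        output.append(memory[start:start + n])   # one contiguous "tile"
        scenario = scenario[n:]
    return output
```

The real model additionally learns the memory structure and weighs temporal context, but this shows what "best possible tiling in terms of sequence length" means.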
2) Optimality with regard to a sequence of scenarios/queries enabled by the architecture:
The architecture of the model is such that, unless you “reset” the agent, it remembers the outputs associated with the last received scenario; the candidate sequences matching a new query are then sorted so that those that best continue what was previously generated come first.
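A tiny hypothetical sketch of this continuity ranking (again, not Dicy2's code): among candidate start positions in the memory that match the new query, exact continuations of the last output are preferred, then nearby positions:

```python
# Toy sketch (hypothetical): rank candidate memory positions so that
# those continuing the previously generated output come first.

def rank_candidates(candidates, last_played_index):
    """Sort candidate start indices: the exact continuation of the last
    output first, then by distance to that continuation point."""
    if last_played_index is None:        # agent was reset: no history to continue
        return sorted(candidates)
    target = last_played_index + 1       # index that would seamlessly continue
    return sorted(candidates, key=lambda i: (i != target, abs(i - target)))
```

For example, if the last event played came from memory index 3, a candidate at index 4 is a seamless continuation and is ranked first.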
3) Optimality with regard to concurrent and overlapping queries/scenarios enabled by the scheduling model:
If two queries concern the same temporal scope, an execution-trace model takes both queries and the current rendering date of the buffered result from the previous queries into account: it can “rewind the generation time” so that tiling takes place in real time, synthesizing all calls as a single, up-to-date hybrid query.
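As a rough illustration of point 3) (a simplification under my own assumptions, not the actual scheduling model): overlapping queries can be merged into one hybrid query where the newer query wins on the overlap, while dates that have already been rendered are left untouched:

```python
# Toy sketch (hypothetical): merge two overlapping queries into a single
# up-to-date hybrid query. Queries map a generation date to a required label.

def merge_queries(older, newer, rendered_upto):
    """Newer query overrides the older one on overlapping dates;
    dates before `rendered_upto` have already been played and are dropped."""
    merged = {**older, **newer}          # later call wins on the overlap
    return {d: lab for d, lab in sorted(merged.items()) if d >= rendered_upto}
```

The actual engine also has to rewind and re-generate the buffered (not yet played) part of the output, which is where the execution-trace model comes in.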
Tutorials:
These points are covered in the last 3 tabs of tutorial 0_Introduction_to_Dicy2 (in Max/Extras/Dicy2-Overview), respectively: 1) offline_scenarios, 2) scenarios_timeframes, 3) real_time_scenarios.
These theory-oriented tutorials are not essential for using the library, but were designed for people wishing to find out what is hiding underneath the engine. They also correspond to the last 3 chapters of the Dicy2 concepts video tutorial by Matthew Ostrowski (@matty), starting at 17:36 here:
Articles:
- Details about the algorithmic and architectural aspects of points 1) and 2) are explained in the following article:
Jérôme Nika, Marc Chemillier, Gérard Assayag. ImproteK: introducing scenarios into human–computer music improvisation. ACM Computers in Entertainment, 2017. https://hal.science/hal-01380163/document
- Details about the scheduling aspects of point 3) are explained here:
Jérôme Nika, Dimitri Bouche, Jean Bresson, Marc Chemillier, Gérard Assayag. Guided improvisation as dynamic calls to an offline model. Sound and Music Computing (SMC), Jul 2015, Maynooth, Ireland. https://hal.science/hal-01184642/file/smc15-improtek.pdf
- More generally, you’ll find other related publications here: Research – Jérôme Nika
I hope this answers your questions!