
Inconsistent behaviour of @local attribute

Hello,
While working on the Antescofo Max tutorials (v. 1.0-410) on Mac (macOS 12.6), I encountered another big inconsistency.

The @local attribute (tutorial 14 and following) does not behave at all as expected.

1.) The first problem occurs even without @local. Playing the first example in the tutorial with a random second note, Antescofo performs the action associated with the second note, just as if a D4 had been played. This does not match the described and expected behaviour. Is Antescofo behaving incorrectly, or is the description in the tutorial unclear?
The “normal” behaviour (i.e. waiting for the last, correct note to be played and only then performing the actions associated with previous events) is achieved only by playing both first notes incorrectly!
The same happens with the second score example.

2.) With the @local attribute, the score syntax of the example is probably incorrect, or at least inconsistent with what was previously stated (i.e. the delays make actions happen after the onset of subsequent events). This generates an erratic behaviour of @local: it is sometimes computed correctly (@local actions are not performed if their parent events are skipped) and sometimes not (@local actions are performed anyway even if the event is skipped, as if @local were not there).
In this specific case, I observe that @local actions are always interpreted correctly when their delay falls “inside” the duration of their “parent” event. See example:

In the case of a score similar to that originally provided (example 2), the second @local action is performed nevertheless (i.e. in the case of a wrong second note “2 skip_test 3 @local” is performed immediately before “skip_test 4” when the NOTE E4 is played).

My understanding is that this happens because “2 skip_test 3 @local” is positioned after note E4 on the timeline, because its delay exceeds the duration of its “parent” note D4, so Antescofo still performs this action because of how it interprets the score.

This seems logical, but is in complete contradiction with what is written in the tutorial’s text… What is the correct behaviour?

A similar inconsistency is encountered in tutorial 18, where @local is assigned to group actions; here too, @local is correctly interpreted only if both first notes are wrong, otherwise actions happen at their scheduled beat positions. Moreover, in this tutorial both the good and bad examples work exactly the same (i.e. more or less correctly?)!

After these tutorials one is left with many questions and much confusion… It’s not encouraging: if these simple elements already expose such inconsistent behaviour, what will happen with more complex structures?

Hello zenoor-13.

It is possible that bugs have been introduced in the latest Antescofo version. Have you tried with an older, more stable version? Version 0.92 is one of the more tested versions, and is also the last version with which we tested the tutorials (regrettably, our human resources are very scarce).

It is difficult to tell for sure whether the second note was recognized or not, because if you do not play a D4, the listening machine may still decide that a D4 was recognized. The listening machine is probabilistic and uses not only the pitch but also the temporal information to decide whether the next event in the score has occurred or not.

You have a label on each musical event, so you can use a whenever to check whether an event has been dismissed. Just put

whenever ($LAST_EVENT_LABEL)
{
    print $LAST_EVENT_LABEL
}

in front of your score, where print is a Max receiver that directs output to the console.

Only the recognized events will print their label (that is, if note D4 is not recognized and the listening machine detects C4 and then jumps to E4, you will not see “skip me”). If you see “skip me” on the console, then it means that the listening machine has, incorrectly, detected D4. Due to the probabilistic nature of the listening machine, this is not a bug (and admittedly, even if the note played was incorrect, from the point of view of the accompaniment actions, there is nothing to change). If you don’t see the label, then it means there is a bug (and we will try to correct it in the next version).

For the second point, perhaps the documentation is not clear enough, and your summary

The “normal” behaviour (i.e. waiting for the last, correct note to be played and only then
perform actions associated with previous events)

is only partially true, while your analysis is correct. The idea is that an action is launched in one of two modes: ‘normal mode’ or ‘late mode’. ‘Late mode’ is when the action is launched after its due date. If D4 is missed (with the previous proviso), this is the case for ‘skip_test 2’. When E4 is recognized, the real-time engine hurries to catch up with the current position, and the actions that are launched are in ‘late mode’. The ‘@local’ attribute means: in the end, don’t launch local late actions.

In your second example, ‘skip_test 3’ is not late when D4 is missed: its due date is 4 beats after the start of the score, and note E4 (which is recognized) is 3 beats after the start. It may nevertheless appear before ‘skip_test 4’, because their delays are not counted from the same starting point (the missed note D4 in one case, the recognized note E4 in the other).
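To make the delay arithmetic concrete, here is a minimal score sketch in the spirit of the example under discussion (labels, pitches and durations are illustrative reconstructions, not the tutorial’s exact text):

```
NOTE C4 1 first
    skip_test 1
NOTE D4 1 skip_me         // suppose this note is missed
    skip_test 2 @local    // due at D4's onset: late if D4 is skipped, so dropped
    2 skip_test 3 @local  // due 2 beats after D4, i.e. 4 beats into the score
NOTE E4 1 third           // recognized 3 beats into the score
    skip_test 4
```

When D4 is skipped and E4 is detected, ‘skip_test 2’ is already past its due date and is discarded by @local, while ‘skip_test 3’ is still ahead of its due date (beat 4) and therefore fires even though its parent event was missed, which matches the behaviour you observed.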

This behavior is designed to drop processes that no longer make sense (because they can’t be started on time) and to preserve processes that can still meet their deadline.

If you need a strict order between actions that are anchored to distinct events, you have to change the score structure to put them in the same group. This can be painful, and there is another approach using the @tight synchronization attribute. With this attribute, the due date of an action is counted from the occurrence of the event that precedes this action (e.g., a @tight ‘skip_test 3’ launches this action one beat after the note E4, because the previous event has a duration of 1 and the delay of the action is 2).
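A sketch of the @tight variant, reusing the same illustrative score fragment as above (again, an assumption about the exact score text):

```
NOTE D4 1 skip_me
    2 skip_test 3 @tight  // 2-beat delay crosses E4's onset (D4 lasts 1 beat)
NOTE E4 1 third
    skip_test 4
```

With @tight, the remaining delay is re-anchored to the last event preceding the action’s due date: ‘skip_test 3’ is launched 1 beat after E4’s detection (the 2-beat delay minus D4’s 1-beat duration), so its ordering relative to actions anchored on E4 is preserved even when D4 is missed.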

Error handling strategies and musical synchronizations have many (too many?) subtleties. They have been designed with composers to ease the writing of scores. But these subtleties are a challenge for their explanation, and also for their testing, and the possibility of bugs in a version cannot be excluded.

Thank you @giavitto for your precise and detailed answer.
This makes much more sense now! I will indeed try $LAST_EVENT_LABEL on the tutorials and let you know the result. But as you say, it is logical that this comes from the probabilistic nature of the listening machine; if a wrong note is played but the tempo and durations of events are respected, then the actions should probably be performed nevertheless (just like a human accompanist would). In my analysis I was expecting a simple behaviour instead of a complex (probabilistic) one.

It is a pity that these subtleties are explained neither in the tutorials nor in the online references. It would be extremely interesting (and musically useful) to have access to resources providing a deeper and more detailed explanation of the basic structure and functioning of Antescofo (how it is designed and why, what kind of probabilistic structures dictate the listening machine’s behaviour, at what level everything happens, and so forth), just like your reply does concerning the “wrong-note” error handling.

Is there any such resource available? What about texts containing more detailed insights on the structure of Antescofo language?

I am currently working on a Master’s thesis project (HEAR/University of Strasbourg) centred around the temporal synchronization strategies and time handling within Antescofo as a programming language, and the documentation I found is, for obvious reasons, more practically oriented and lacks some of the technical and theoretical insights I was looking for.

Thank you again for your answer!

The Antescofo temporal model is a hybrid model, handling atomic events and a continuous passing of time. From this perspective, Antescofo is a hybrid system (in the vocabulary of control theory), but what makes the Antescofo model of time unique is that we have to handle multiple unrelated times (unrelated because we cannot establish an a priori relation between two time flows). This is explained in the chapter Notion of TimeS in Antescofo. Even if the technical details are not presented, it gives the “why” of the time model as well as practical insights.

Here are some links to more technical articles describing the synchronization. Beware that these articles are partial snapshots of the system and do not give an accurate description of the actual implementation.

Formal semantics give a significant anchor point. But in my experience, this anchor point is meaningless for the musical side (i.e., not really relevant). And it comes with its own drawbacks (e.g., timed automata can handle only “one” time, because they are computationally intractable with more than one time source, e.g. the physical time as well as the musician’s progression).

Hello @giavitto,
A big thank you for your exhaustive and detailed reply!

I will take time to read all the articles next week and see what I can get from them.
This discussion about how Antescofo handles timed actions and different times by synchronizing events is very interesting; I think I will find some clarity in the texts you shared, and I hope they’ll inspire more questions!

About what you were writing in your previous message:

We tried the “print $LAST_EVENT_LABEL” and indeed there is no bug; events are recognized in a probabilistic way and depending on context.

Hi @zenoor-13

Regarding the “probabilistic detection”: Antescofo uses two kinds of information to advance in the music score: audio information (matching of the microphone sound to notes/chords in the score) and timing (to simplify: how long the current event is and how much time you have already spent there, given the detected tempo).
You should also know that Antescofo tends to advance: if you don’t assign a @fermata to an event and stay there too long, Antescofo will advance to the next event!
The final result is a combination of both, depending on the context.

Now that “context” depends both on the audio and the score. You have little control over the audio, but you should make sure tuning is right, sampling rates match, etc. It also depends on the score: in your example, if the listening machine is consistently walking over notes that are not played, there is something simple you can do to fix that behavior. Without having access to your audio, I would start by adding a silence between the notes or increasing their duration (especially if it’s just a test). Adding a silence (even with zero duration) can help in many cases. An example for illustration:

Imagine a score that is all legato with no silence in the first measure, followed by a big silence in the second measure. If during the performance the musician adds a small silence between notes (breathing, etc., which is not written in the score), Antescofo might think that the player is jumping to the 2nd measure! In most cases this does not happen, but it pushes the system to the boundaries of its decision making.

Adding fake silences would just make the system relax in those situations.
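As an illustration, a zero-duration rest can be inserted between two legato notes like this (assuming the usual convention that pitch 0 in a NOTE statement denotes a silence; the pitches are arbitrary):

```
NOTE C4 1
NOTE 0 0    // fake, zero-duration silence: gives the listening
            // machine an explicit segmentation point
NOTE D4 1
```

The fake silence does not change the musical timing of the score, but it relaxes the decision making when the player breathes or detaches the notes.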

Cheers,