
Beat & Time

Hey!
First of all, I want to share my excitement and gratitude for the software. Hats off!
The idea of human-computer interaction, and of making the computer an improvisation partner, has been with me for a long time, and seeing a piece of software built just for that is mind-blowing.

I’m trying to understand how time annotation works in Somax.
When I record a new audio corpus, or create one from existing audio data, I understand that the events are time-annotated.

If the server is running during the recording:

  • Are events annotated with respect to their position in the measure, or only in relation to the beat?
  • Where can the time signature be edited?

If the server is not running:

  • I understand that the timing information will be missing from the event descriptions.

What happens if a corpus is created from an existing audio file?

  • Does the analysis track the time signature, measures, etc., or does it only track the beat?
  • Does it actually track them, or does it rely on what is entered in the builder as a starting point?
  • How is the time grid aligned with the existing audio?

Lastly (I’m not very fluent with Max):

  • How can the click from the Server be routed to an audio output so that I can hear it while recording a corpus? (See my rough attempt after this list.)
  • Can the Server be synced with the global Max transport?
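
For the click question, here is the kind of plain-Max patch I can already get working on my own: a metro with a note-value interval follows the global transport and drives a click~ into the dac~. These are all standard Max objects, nothing below is Somax-specific, because I don't know which outlet or named send (if any) actually carries the Server's click:

  [transport @tempo 120]    <- global Max transport (tempo just an example)
  [metro 4n @active 1]      <- bangs every quarter note, follows the global transport
  |
  [click~]                  <- impulse on each bang
  |
  [dac~ 1 2]                <- to the audio output

If the Server exposes its click as a signal outlet or a named send~ that I could patch into [dac~] instead, just pointing me at it would already be enough.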

Thanks!