
Recording MIDI note-ons, CCs & sound analysis from Live

Hi there,
I’m interested in using MuBu for:

  • recording MIDI streams (note-ons, CCs) and sound analysis results coming from outside of Max,
  • storing these as files that could possibly be used outside of Max (as binary data, maybe?),
  • reading them back in Max.

The main idea is to use MIDI streams + sound analysis to alter visuals generated in a non-real-time way (to get high rendering quality without killing the CPU on stage).

So the idea would be:

  • composing a track (sound, plus automation for visuals stored as MIDI CC)
  • storing in MuBu, at the highest sampling rate possible, all the streams (MIDI + sound analysis) I need to alter the visuals
  • reading these values back in Max while doing my frame-by-frame offline rendering

Ideally, so as not to add too much work exporting MIDI CC curves etc., the process would be real-time: I’d have a patch just for storing all my streams using MuBu, and then an abstraction, for instance, that I would drop into my visuals rendering patch at the end of the process, to alter the visuals during the frame-by-frame offline rendering.

The MIDI part, actually, would need to be recorded in real time.
Sound analysis, in my case, can be offline; that’s easy with pipo desc (I love it, by the way, even if I sometimes have other kinds of analysis to do, but it’s easy to write signals into MuBu tracks from analyzer~, for instance).

I have some questions about how you would do this:

1/ Storing MIDI note-ons
I don’t really know how to store MIDI note time positions.
Maybe I could store discrete values: 1 at the note-on time position, and 0 when there is no note.

My main issue would be the sampling rate for storing/reading: if a MIDI note doesn’t fall exactly on the time grid, it could be missed, of course.
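One workaround I can imagine is to keep notes as timestamped events instead of a sampled 0/1 stream, and at read time collect every event whose onset falls inside the current frame window. A minimal Python sketch of the idea (the event list and its fields are hypothetical, not MuBu’s actual note format):

```python
# Hypothetical timestamped note events: (onset_s, duration_s, pitch, velocity)
events = [(0.013, 0.25, 60, 100), (0.0165, 0.5, 64, 90), (1.002, 0.1, 67, 80)]

def notes_in_frame(events, frame, fps=60.0):
    """Return every note whose onset falls in this frame's time window.

    Sampling a 0/1 stream at fps would miss onsets that land between
    grid points; a half-open window [t0, t1) catches all of them."""
    t0 = frame / fps
    t1 = (frame + 1) / fps
    return [e for e in events if t0 <= e[0] < t1]

print(notes_in_frame(events, 0))   # both notes near t = 0.013..0.017
print(notes_in_frame(events, 60))  # the note at t = 1.002
```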

2/ Storing MIDI CC
Same as 1/, but easier: I can smooth the values too, and store a stream of MIDI CC values.
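Something like this one-pole lowpass, for instance (a quick Python sketch of the smoothing idea, nothing MuBu-specific; the alpha value is arbitrary):

```python
def smooth_cc(values, alpha=0.2):
    """One-pole lowpass over a stream of 0-127 CC values, to smooth
    out the steps before storing a continuous flow."""
    y = float(values[0])
    out = []
    for v in values:
        y += alpha * (v - y)
        out.append(y)
    return out

print(smooth_cc([0, 0, 127, 127, 127, 0]))
```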

For reading these back, how would you proceed?
My main issue, again, is the sampling rate.
My visuals rendering would only need 60 fps, which is LOW compared to the audio sampling rate.
So I’m currently thinking about a kind of step-by-step process that would increment the cursor position (60 times per second):

at each required frame, the process moves the cursor forward by 1/60 s, reads all the values (possibly including MIDI notes), alters the visuals, then renders the frame.
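In Python terms, the loop I have in mind would look roughly like this (the track data and the sample_cc helper are made up for illustration; in MuBu something else would presumably do the interpolation):

```python
import bisect

# Hypothetical sampled CC track: parallel lists of times (s) and values.
cc_times  = [0.0, 0.5, 1.0, 1.5]
cc_values = [0.0, 64.0, 127.0, 32.0]

def sample_cc(t):
    """Linearly interpolate the CC curve at time t (clamped at the ends)."""
    i = bisect.bisect_right(cc_times, t)
    if i == 0:
        return cc_values[0]
    if i == len(cc_times):
        return cc_values[-1]
    t0, t1 = cc_times[i - 1], cc_times[i]
    v0, v1 = cc_values[i - 1], cc_values[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

FPS = 60
for frame in range(3 * FPS):   # 3 seconds of offline rendering
    t = frame / FPS            # move the cursor forward by 1/60 s
    cc = sample_cc(t)
    # ...alter the visuals with cc, then render the frame...
```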

If any of you have experience with this, I’d be very interested to read your feedback 🙂

Thanks a lot,

Julien

Hi Julien, wow, what a great collection of questions, for a very interesting use case!
I’ll try to group and label questions here.

1. data formats

MuBu can export and import:

  • note or bpf tracks as standard MIDI files
  • individual tracks as text with time stamps
  • individual tracks or the whole container as SDIF binary format
  • the whole container as .mubu binary (which is a bespoke SDIF schema designed for seamless recall)

2. MIDI / bpf recording, lookup, and playback

  • real-time recording / playback is done with mubu.record / mubu.play
  • if you need to control recording time yourself, use the append message to mubu.track
  • in your rendering phase, non-real-time lookup with interpolation is done by the sample message to mubu.track for bpfs, and getindex for notes
  • MIDI notes are represented in a track with start time and duration; it seems recording would need to insert them at note-off, when the duration is known (see the sketch after this list)
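Just to illustrate that note-off bookkeeping (a plain Python sketch, not mubu.record’s actual implementation):

```python
class NoteRecorder:
    """Buffer note-ons and emit a complete (start, duration, pitch,
    velocity) row at note-off, once the duration is finally known."""

    def __init__(self):
        self.open_notes = {}  # pitch -> (start_time, velocity)
        self.rows = []        # finished rows, ready to append to a track

    def note_on(self, t, pitch, velocity):
        self.open_notes[pitch] = (t, velocity)

    def note_off(self, t, pitch):
        if pitch in self.open_notes:
            start, velocity = self.open_notes.pop(pitch)
            self.rows.append((start, t - start, pitch, velocity))

rec = NoteRecorder()
rec.note_on(0.5, 60, 100)
rec.note_off(0.75, 60)
print(rec.rows)  # [(0.5, 0.25, 60, 100)]
```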

If I missed something, please complete the questions.
We’d also be curious which analyzer~ descriptors would be good to have in pipo.
Best,
… Diemo

Hi Schwarz, thanks a lot for your answer.

Now I’d just have to figure out how to create a “player” that pops out one value at a time during my offline loop. Getting a value is OK, but getting a value at THIS specific moment is the tricky part, considering the asynchronous timelines (I mean the timeline of the offline rendering and the timeline of the track with sampled values). I think I’d need techniques from the oversampling/interpolation field.

About analysis, I was actually thinking more of the zsa.descriptors sound descriptor extractors than analyzer~ (even if I didn’t mention them). It would be interesting to have the difference of energy between 2 frames, like zsa.flux~ (I guess it can be done with PiPo operations), and frequency peaks.
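Something like this half-wave-rectified magnitude difference, for instance (a rough numpy sketch of one common flux definition; zsa.flux~ may window or normalize differently):

```python
import numpy as np

def spectral_flux(signal, frame_size=1024, hop=512):
    """Per-frame positive difference of FFT magnitudes between
    consecutive frames (one common definition of spectral flux)."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    mags = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop : i * hop + frame_size]))
        for i in range(n_frames)
    ])
    diff = np.diff(mags, axis=0)
    return np.sum(np.maximum(diff, 0.0), axis=1)  # half-wave rectified sum

# e.g. flux = spectral_flux(mono_samples)  # one value per analysis frame
```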

How would the sample message explained in 2.3 above not be sufficient for that? Of course you have to calculate the time point to interpolate at…

Yes, like pipo~ slice:fft:bands:delta and slice:fft:peaks.


It seems that everything is covered, and I didn’t even know it.
I’ve wanted to dive into MuBu for a long time, and now I am.

Thanks.
