Hello to both of you,
Thank you for your interest in Dicy2!
Quick answer:
The plug-in for Live is indeed Audio → Audio. One could imagine:
- A real Audio → MIDI version. To develop this:
  - you have to decide in advance which MIDI descriptors in the “memory” should be matched with which audio descriptors in the “guiding input” (though some, such as pitch or chroma, are fairly straightforward; see the first sketch after this list)
  - perform this conversion at the interface between perception and query, i.e. between the identifier object (which would analyze the incoming audio) and the query to the MIDI memory (the “generate” message sent to the agent)
  - implement a MIDI renderer to play the result
- A much simpler “hack”, if you have two versions of the memory, both audio AND MIDI: use the “classic” audio Dicy2 but do no audio rendering (the audio memory would only be used for identification), and use the sequences of segments to be played, generated by the agent, to drive a MIDI rendering (see the second sketch below).
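
To make the descriptor-matching point concrete, here is a minimal sketch in Python, assuming a hypothetical label-based scheme where both the MIDI memory and the incoming audio are reduced to a shared alphabet (pitch classes here). None of these names belong to the Dicy2 API; they only illustrate putting the two descriptor spaces in correspondence.

```python
# Hypothetical descriptor bridge (illustrative names, not the Dicy2 API):
# MIDI memory segments and incoming audio are both reduced to a shared
# label alphabet -- pitch classes -- so audio can query a MIDI memory.
import math

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def midi_note_to_label(note_number: int) -> str:
    """Descriptor for the MIDI memory: reduce a note to its pitch class."""
    return PITCH_CLASSES[note_number % 12]

def audio_f0_to_label(f0_hz: float) -> str:
    """Descriptor for the guiding audio input: reduce an estimated
    fundamental frequency (from any pitch tracker) to the same alphabet."""
    midi_float = 69 + 12 * math.log2(f0_hz / 440.0)
    return PITCH_CLASSES[round(midi_float) % 12]

# Both descriptors now live in the same label space, so a label stream
# extracted from live audio can serve as a query ("generate" scenario)
# over a memory that was labeled from MIDI data.
assert midi_note_to_label(60) == audio_f0_to_label(261.63) == "C"
```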
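
And a sketch of the second option: the agent navigates the audio memory as usual, but its output, a sequence of segment indices, is used to index a parallel, time-aligned MIDI memory instead of triggering audio playback. The data layout and function names are again assumptions for illustration; in a real patch the indices would come from the Dicy2 agent, not from a hard-coded list.

```python
# Sketch of the dual-memory "hack" (hypothetical data layout). The audio
# memory is only used by the agent for identification/navigation; rendering
# is done from a MIDI memory whose segments are aligned one-to-one with it.
from dataclasses import dataclass

@dataclass
class MidiSegment:
    notes: list[tuple[int, float, float]]  # (note number, onset s, duration s)

# Parallel memories: segment i of the audio memory corresponds to
# midi_memory[i]. Building this alignment is the real work in practice.
midi_memory = [
    MidiSegment(notes=[(60, 0.0, 0.5), (64, 0.5, 0.5)]),
    MidiSegment(notes=[(67, 0.0, 1.0)]),
    MidiSegment(notes=[(72, 0.0, 0.25), (71, 0.25, 0.25)]),
]

def render_midi(segment_indices: list[int]) -> list[tuple[int, float, float]]:
    """Flatten the agent's output (a sequence of segment indices) into one
    note list, offsetting each segment by the accumulated duration."""
    events, offset = [], 0.0
    for i in segment_indices:
        seg = midi_memory[i]
        for note, onset, dur in seg.notes:
            events.append((note, offset + onset, dur))
        offset += max(onset + dur for _, onset, dur in seg.notes)
    return events

# e.g. the agent answered a query with the segment sequence [2, 0]:
print(render_midi([2, 0]))
```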
In Dicy2 for Max:
There are examples of MIDI Input → MIDI Output among the Dicy2 for Max use cases, but there is indeed no Audio Input → MIDI Output use case.
I could indeed add one, or ideally, someone from the community could build it and share it with the rest of the world!
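
In the meantime, one quick way to prototype Audio Input → MIDI Output could be to forward the agent's output from the patch to a small script, for instance with a [udpsend] object. The sketch below assumes an OSC message format, address, and port that are my own choices (nothing here is a documented Dicy2 protocol) and uses the third-party python-osc and mido packages.

```python
# Prototype glue for Audio In -> MIDI Out (all message formats, ports and
# segment/note mappings are arbitrary choices for this sketch). The Max
# patch is assumed to send "/segment <int>" for each segment the agent
# selects, e.g. via [udpsend 127.0.0.1 7400].
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical lookup: MIDI notes attached to each memory segment.
SEGMENT_NOTES = {0: [60, 64, 67], 1: [62, 65, 69], 2: [64, 67, 71]}

outport = mido.open_output()  # default system MIDI output

def on_segment(address: str, index: int) -> None:
    """Play the notes of the segment the agent just selected
    (note-offs and timing are omitted for brevity)."""
    for note in SEGMENT_NOTES.get(index, []):
        outport.send(mido.Message('note_on', note=note, velocity=90))

dispatcher = Dispatcher()
dispatcher.map("/segment", on_segment)
BlockingOSCUDPServer(("127.0.0.1", 7400), dispatcher).serve_forever()
```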
→ Question
In your opinion, what would be the ideal workflow to enable MIDI out?