Lost in translation :-)


#1

Dear colleagues, I’ve dived into SDIF. Converting the audio to SDIF via SPEAR went well, and the MIDI information is also correctly represented in the Chord object. But when I connect the Chord object to a Voice object, I get a “stretched” representation. What have I misunderstood? The same happens when connecting the Chord object to OMQUANTIFY.

Best, Dagfinn

sunbite-and-frostburn.omp (942 KB)


#2

And the SDIF file.


#3

Hello Dagfinn,

On the right part of your patch (connection CHORD-SEQ => VOICE) I don’t think your voice has been stretched: it looks more like it was just quantized with a given minimum unit, which groups close notes into clusters (chords) in the quantified voice, making it sound slower. But the overall duration is roughly equivalent (at least when I evaluate it here).
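To see why the quantized voice can *sound* slower without being stretched, here is a rough, hypothetical sketch (not OM’s actual quantization algorithm): when onsets round to the same grid slot of the minimum unit, they merge into one cluster (a chord), so fewer attacks are heard while the overall span stays the same.

```python
def cluster_by_unit(onsets_ms, unit_ms):
    """Group onsets that round to the same grid slot of a minimum unit.
    Illustrative only -- OMQUANTIFY's real algorithm is more sophisticated."""
    clusters = {}
    for t in onsets_ms:
        slot = round(t / unit_ms)          # nearest grid position
        clusters.setdefault(slot, []).append(t)
    return [clusters[k] for k in sorted(clusters)]

# Hypothetical onsets in milliseconds; with a 250 ms minimum unit,
# the two close pairs each merge into a single "chord" cluster.
onsets = [0, 40, 500, 530, 1000]
print(cluster_by_unit(onsets, 250))  # [[0, 40], [500, 530], [1000]]
```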

The left part is wrong (that’s a bug, not your fault), but it will work better if you use the list of durations (X->DX of the CHORD-SEQ’s lonset output) as an input to OMQUANTIFY rather than just connecting the CHORD-SEQ. Connecting the CHORD-SEQ internally calls TRUE-DURATIONS on it, which for some reason does not work well here.
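For readers less familiar with OM’s X->DX: it returns the successive differences of a list, so applied to the lonset list it yields the inter-onset durations that OMQUANTIFY expects. A minimal Python sketch of the same idea (the function name is illustrative, not OM’s API):

```python
def x_to_dx(xs):
    """Successive differences, like OM's x->dx:
    onsets -> inter-onset durations."""
    return [b - a for a, b in zip(xs, xs[1:])]

# Hypothetical onsets in milliseconds from a CHORD-SEQ's lonset output.
onsets = [0, 500, 750, 1500]
print(x_to_dx(onsets))  # [500, 250, 750]
```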

Jean


#4

Hello Jean, thanks a lot!

I’ve made the adjustments you recommended. And I also “discovered” maximum division in OMQUANTIFY. (Always something to learn, even after 20 years of using PatchWork/OpenMusic…)

And I have to work much more accurately on the audio files when I want to achieve more or less clear monophonic melodic lines.
In the example file, I did the filtering directly in SPEAR through “select partials below threshold”. But I see I have to do some more preparation in my DAW before importing the audio into SPEAR.

Best, Dagfinn


#5

Hi Dagfinn!

And I have to work much more accurately on the audio files when I want to achieve more or less clear monophonic melodic lines.

The ‘Streamsep’ library may well do what you want; check the enclosed patch.

-anders

segregate-streams.omp (323 KB)


#6

Thanks a lot, Anders!