Hey! Thanks, the example is quite close to what I'm looking for, but...
I am looking for a more ‘offline’ and qualitative approach. The point is to loop a bit of audio in real time; then, once the loop is filled, to perform an analysis (partials plus a residual/noise/stochastic component) and finally use the result of that analysis to play with the recorded loop via mubu.additive~. The aim is to access a world of high-quality transformations.
The problem with the proposed example (I suppose you're referring to additive-resynthesis in the “examples” tab of MuBu-overview?) is that the audio is chopped up in real time: there is a number of partials, but they don't smoothly “slide” in frequency from one analysis window to the next. The peaks differ in frequency between windows and are not interpolated. Another issue is that this example has no “residual” or “stochastic” part in the sound analysis. I suppose that with something other than a “rolling” buffer this would be very feasible in MuBu, judging by the list of available pipo descriptors, but right now I'm a little lost as to where to begin.
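For what it's worth, the “sliding” behaviour you describe is what partial tracking provides: matching each frame's spectral peaks to the nearest peak of the previous frame so that each partial becomes a continuous frequency trajectory that can be interpolated during resynthesis. Here is a minimal conceptual sketch in plain Python/NumPy of that frame-to-frame peak matching, nothing MuBu- or pipo-specific; the function names, the greedy nearest-frequency matching, and the `max_jump` continuation threshold are all my own illustrative choices, not any library's API:

```python
import numpy as np

def frame_peaks(frame, sr, n_fft, max_peaks=5):
    """Frequencies (Hz) of the strongest local maxima in one frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
    # local maxima, excluding the DC and Nyquist bins
    idx = [k for k in range(1, len(mag) - 1)
           if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    idx.sort(key=lambda k: mag[k], reverse=True)          # loudest first
    return sorted(k * sr / n_fft for k in idx[:max_peaks])

def track_partials(frames, sr, n_fft, max_peaks=5, max_jump=50.0):
    """Greedy nearest-frequency matching of peaks from one frame to the next.

    Each returned track is a list of per-frame frequencies; a track only
    continues if the nearest new peak is within max_jump Hz of its last value.
    """
    tracks = [[f] for f in frame_peaks(frames[0], sr, n_fft, max_peaks)]
    for frame in frames[1:]:
        peaks = frame_peaks(frame, sr, n_fft, max_peaks)
        for tr in tracks:
            if not peaks:
                break
            nearest = min(peaks, key=lambda f: abs(f - tr[-1]))
            if abs(nearest - tr[-1]) <= max_jump:
                tr.append(nearest)          # extend the partial's trajectory
                peaks.remove(nearest)       # each peak feeds at most one track
    return tracks
```

Once peaks are linked into tracks like this, a resynthesizer can ramp each oscillator's frequency between consecutive frame values instead of jumping, and whatever energy the tracked sinusoids don't explain is exactly the residual/stochastic part you're after (subtract the resynthesized partials from the original to get it).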