hey! thanks, the example is quite close to what i'm looking for, but...
i'm looking for a more "offline" and qualitative approach. The point is to loop a bit of audio in real time, then, once the loop is filled, to perform an analysis (of the partials + residual/noise/stochastic kind) and then use the result of that analysis to play with the recorded loop via mubu.additive~. The aim is to access a world of high-quality transformations.
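To make the offline idea concrete outside of Max, here is a minimal sketch of the first stage (record the loop, then analyse the whole thing at once with per-frame spectral peak picking). This is not MuBu/pipo code, just plain numpy/scipy; the file name "loop.wav", the FFT/hop sizes, the 1% threshold and the 20-peak limit are all illustrative assumptions:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, find_peaks

# hypothetical file: the recorded loop written to disk, assumed mono
sr, loop = wavfile.read("loop.wav")
loop = loop.astype(np.float64)
loop /= max(np.abs(loop).max(), 1e-9)          # normalise away int PCM scaling

# offline STFT of the whole loop: every frame is available at once,
# unlike a rolling real-time buffer
nfft, hop = 2048, 512
freqs, times, spec = stft(loop, fs=sr, nperseg=nfft, noverlap=nfft - hop)

# per-frame peak picking: candidate partials as (frequency_hz, magnitude) pairs
frames = []
for t in range(spec.shape[1]):
    mag = np.abs(spec[:, t])
    idx, _ = find_peaks(mag, height=mag.max() * 0.01)   # crude relative threshold
    strongest = idx[np.argsort(mag[idx])[::-1][:20]]     # keep the 20 loudest peaks
    frames.append([(freqs[i], mag[i]) for i in strongest])
```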
The problem with the proposed example (i suppose you're referring to additive-resynthesis in the "examples" tab of MuBu-overview?) is that the audio is chopped up in real time, and there are a number of partials, but they don't smoothly "slide" in frequency from one analysis window to the next: the peaks differ in frequency from one window to the following one and they don't interpolate. Another thing is that there is no "residual" or "stochastic" part of the sound analysis in this example. I suppose that with something other than a "rolling" buffer this would be very feasible in MuBu, judging from the list of pipo descriptors available, but i'm a little lost right now as to where to begin.
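For the "sliding" issue, the general technique is Serra-style SMS partial tracking: once the whole loop is analysed offline, the per-frame peaks can be linked into partial tracks, and each track's frequency/amplitude breakpoints can then be interpolated smoothly by the additive synth instead of jumping. Here is a hedged, minimal nearest-neighbour matcher continuing from the `frames` list in the sketch above; the greedy matching and the `max_jump_hz` threshold are illustrative, not how MuBu/pipo does it. The residual/stochastic part would then be the original loop minus the resynthesised partials, typically modelled as filtered noise.

```python
def track_partials(frames, max_jump_hz=50.0):
    """Link per-frame peaks into partial tracks by nearest-frequency matching."""
    tracks = []        # each track: list of (frame_index, freq, amp) breakpoints
    active = []        # indices into `tracks` that are still alive
    for t, peaks in enumerate(frames):
        unused = list(peaks)
        next_active = []
        for ti in active:
            if not unused:
                continue
            last_freq = tracks[ti][-1][1]
            # nearest candidate peak in frequency
            j = min(range(len(unused)), key=lambda k: abs(unused[k][0] - last_freq))
            f, a = unused[j]
            if abs(f - last_freq) <= max_jump_hz:
                tracks[ti].append((t, f, a))
                next_active.append(ti)
                unused.pop(j)
        for f, a in unused:            # leftover peaks start new tracks
            tracks.append([(t, f, a)])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks

partial_tracks = track_partials(frames)
# each track now gives breakpoints an additive synth can interpolate between --
# the smooth "slide" that the rolling-buffer example doesn't provide
```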