Spectral fusion between a sound recorded in real time and sounds present in a database

Hi there,

I am currently working on an installation project which will require the use of MuBu.

In this work, a continuous sound in the background is transformed by some short sounds (maximum 60 sec) recorded in real time.

The background sound is characterized by a slow interpolation between different concrete sounds contained in a sound database.

The sounds recorded in real time must act as control sources: the spectral content of each source should be used to search the database and recall a sound with similar spectral content, so that the two sounds (background and foreground) fuse spectrally. In short, it should "find similar sound events".

Background sounds do not need to be segmented; they can keep their original length.

Is it possible to do this with MuBu?

Hi Massimiliano,
this is totally possible, based on the mubu-mosaicing example.
You would use MFCC analysis in mubu.process, drop the first coefficient (which mostly encodes loudness), chop with a segment size of 0 to reduce each sound to its mean (and possibly standard deviation), and use mubu.knn for the lookup.
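For readers outside Max, here is a minimal pure-Python sketch of the matching logic described above: each database sound is collapsed to the mean of its MFCC frames, the first coefficient is dropped so matching ignores loudness, and the closest entry is found by Euclidean distance (the role mubu.knn plays in the patch). All vectors and file names here are made-up placeholders, not real MFCC analyses.

```python
import math

def drop_first(coefs):
    # Discard MFCC coefficient 0 so matching ignores overall loudness.
    return coefs[1:]

def mean_frames(frames):
    # Collapse per-frame MFCC vectors to one mean vector
    # (the effect of chopping with segment size 0 in mubu.process).
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def nearest(query, database):
    # k = 1 nearest neighbour by Euclidean distance,
    # standing in for the mubu.knn lookup.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: dist(query, database[name]))

# Placeholder "database": sound name -> mean MFCC vector (coef 0 still present).
database_raw = {
    "rain.wav":  [12.0, 1.5, -0.2, 0.8],
    "metal.wav": [9.0, -2.1, 1.9, 0.3],
    "voice.wav": [11.0, 1.45, -0.15, 0.9],
}
database = {name: drop_first(v) for name, v in database_raw.items()}

# Live input: a few MFCC "frames", averaged then matched against the database.
live_frames = [[10.0, 1.6, -0.3, 0.7], [10.5, 1.3, 0.0, 1.0]]
query = drop_first(mean_frames(live_frames))
print(nearest(query, database))  # -> voice.wav
```

The recalled sound could then be cross-faded or spectrally morphed with the background layer; in the Max patch that playback side would come from the mubu-mosaicing example itself.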
