
Yves Meyer's theory

Hello,
This is a question for Axel. I just read an article in Le Monde describing the work of Yves Meyer, who recently received the Abel Prize in mathematics. The article states that what he discovered can replace the Fourier Transform for signal analysis… So when will we have an AudioSculpt (or software with a different name) based on that discovery?

Hello Antoine,

the different types of transforms, including Fourier and wavelets, all have strengths and weaknesses. Signal transformation with spectral representations can be significantly simplified if the spectral representation is adapted to the type of operations that are to be performed on the signals. Transformation of harmonic signals, for example, is significantly simplified if the sinusoids are individually resolved in the representation. Now, music and speech are basically composed of harmonic signals, and therefore the use of a standard DFT representation (with a constant, but optimally time-adaptive, frequency resolution) makes the mathematics required to perform signal manipulations much easier than the equivalent operations for wavelet-based representations. On the other hand, there are many other tasks for which other types of frequency scales, closer to wavelets, are more suitable and more efficient.
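To make this concrete, here is a small Python sketch (purely illustrative, with arbitrary parameters, not IRCAM code): with a fixed-resolution DFT, the partials of a harmonic tone fall into well-separated bins, so a manipulation such as attenuating one harmonic reduces to scaling a few bins.

```python
# Illustrative only: fixed-resolution DFT of a harmonic tone; each partial
# lands in its own well-separated bins.
import numpy as np

sr = 16000                       # sample rate in Hz (arbitrary for the demo)
f0 = 220.0                       # fundamental of a synthetic harmonic tone
t = np.arange(sr) / sr           # one second of signal
x = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 6))

n = 4096                         # constant analysis window: every bin has the
win = np.hanning(n)              # same frequency resolution, sr / n ~ 3.9 Hz
mag = np.abs(np.fft.rfft(win * x[:n]))
freqs = np.fft.rfftfreq(n, 1.0 / sr)

# each partial appears as an isolated local maximum near k * f0
peaks = [freqs[i] for i in range(1, len(mag) - 1)
         if mag[i] > mag[i - 1] and mag[i] > mag[i + 1] and mag[i] > 50]
print(np.round(peaks, 1))        # ~220, 440, 660, 880, 1100
```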

So before I elaborate a little more on the question, I'd like to say that, especially for the modification of music and speech, wavelets are very impractical, and therefore I don't think that Meyer's work will end up in AudioSculpt or similar programs at some point in the future.

Now, looking a little further around the direct question, I would like to stress that the AAAS analysis, which you have been able to perform in AudioSculpt for about two years now, creates a signal representation with signal-adaptive resolution. These signal-adaptive representations build on the groundbreaking work on non-stationary Gabor frames, here notably with time-varying frequency resolution, established by Monika Doerfler and colleagues. Compared to the frequency-dependent but nevertheless fixed time and frequency resolution of wavelets, this approach allows the frequency resolution to be adapted over time, which, from my perspective and for music and speech signals, is much more interesting. Based on the work of Monika and colleagues, my former PhD student Marco Liuni has in fact developed an algorithm that adapts the frequency resolution automatically (leading to the AAAS analysis). For the moment the integration of the algorithm into AudioSculpt is still quite rudimentary, but I hope we will find the means to integrate it better in the future.
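To give a rough idea of what time-varying frequency resolution means in practice, here is a toy sketch. It is NOT the non-stationary Gabor frame machinery and NOT Marco's adaptation criterion; the selection rule below is an arbitrary heuristic chosen just to make the idea concrete: short windows around transients (fine time resolution), long windows in quasi-stationary regions (fine frequency resolution).

```python
# Toy sketch: a hand-rolled STFT whose window length changes over time.
# The transient heuristic and its threshold are arbitrary illustrations.
import numpy as np

def adaptive_stft(x, sr, short=512, long=4096, hop=1024):
    """Yield (time_sec, window_length, spectrum) with a per-frame window
    length: short around transients, long in quasi-stationary regions."""
    pos = 0
    while pos + long <= len(x):
        seg = x[pos:pos + long]
        # crude transient detector: relative variation of short-term energy
        e = np.square(seg.reshape(-1, short)).mean(axis=1)
        flux = np.abs(np.diff(e)).max() / (e.mean() + 1e-12)
        n = short if flux > 0.5 else long      # threshold is arbitrary
        win = np.hanning(n)
        yield pos / sr, n, np.fft.rfft(win * x[pos:pos + n])
        pos += hop

sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
x[8000] += 20.0                                # an impulsive transient
print(sorted({n for _, n, _ in adaptive_stft(x, sr)}))  # [512, 4096]
```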

I'd like to mention further that multi-resolution approaches have also been used to establish a significantly enhanced f0 estimation algorithm (the SWIPE algorithm by Arturo Camacho), which I hope we will be able to introduce into AudioSculpt in the not-too-distant future, and that we use frequency-dependent time and frequency resolution (not wavelets, but perceptual frequency scales), for example, in recent work on texture analysis/synthesis.
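For readers curious what this style of f0 estimation looks like, here is a very rough sketch of the underlying idea only: score each candidate f0 by the spectral energy found at its harmonics. This is a drastic simplification, NOT Camacho's SWIPE, which matches a sawtooth-like kernel against a multi-resolution spectrum on a perceptual frequency scale.

```python
# Drastically simplified harmonic-template f0 estimation (not SWIPE itself).
import numpy as np

def naive_f0(x, sr, fmin=60.0, fmax=500.0, n_harm=8):
    n = len(x)
    mag = np.abs(np.fft.rfft(np.hanning(n) * x))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    candidates = np.arange(fmin, fmax, 1.0)
    scores = []
    for cand in candidates:
        bins = np.searchsorted(freqs, cand * np.arange(1, n_harm + 1))
        bins = bins[bins < len(mag)]
        # decaying harmonic weights, loosely in the spirit of SWIPE's kernel
        scores.append(np.sum(mag[bins] / np.arange(1, len(bins) + 1)))
    return candidates[int(np.argmax(scores))]

sr = 16000
t = np.arange(4096) / sr
tone = sum(np.sin(2 * np.pi * k * 220.0 * t) / k for k in range(1, 6))
print(naive_f0(tone, sr))        # close to 220.0
```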

Multi-resolution signal analysis is a rather large and complex field, but over time all of this will come into the mainstream of audio signal analysis and transformation. So while I don't think the work of Yves Meyer is very relevant here, there are many other tools and results that we are working with to improve our audio signal processing software.

I hope this answers your question.

With kind regards
Axel

Hello Axel,
Thank you so much for your long and detailed answer: I did not expect that much. I am always embarrassed to waste the time of somebody so intelligent with my naive questions. I hope Marco Liuni is doing well; he was my professor for one of the training courses I took at Ircam.
Best regards,
Antoine Escudier