
Music and Machine Learning: the Issue of Euphony

Hi,

After Douglas Eck’s presentation of Magenta, Google’s TensorFlow offshoot and (market-oriented) research into generative kitsch, I’d like to pose a question on what I’d call the issue of euphony. I’m broadly and loosely calling euphony what makes Music sound good not only as it pertains to a particular canon or tradition, but also what makes Music sound good independently of canons and traditions. Here’s my question:

Is there any Machine Learning research being done to probe cross-canonical and cross-traditional corpora in order to discover Weltmusik (or Géomusique, as Deleuze would put it), i.e. the (hypothetical) generative topos/space of good-sounding Music independently of canons and traditions?

If, as I suspect, there is indeed (some approximation of) such a generative topos/space waiting to be discovered beneath the buoyant surface of Global Music History, I’d really love to explore it theoretically and creatively — to know and understand it of course, and ideally even to experiment with potentially new and unheard forms of good-sounding Music — with all the computational (Machine Learning) tools available to us.

Any hints or insights, as well as any fleeting/free-floating thoughts, on this issue would be much appreciated.

All the best,
António


Hi,

For anyone interested in diving into the field of generative Music (and assisted Music creation), here are some recent high-grade references that can assist you (as they’re assisting me) on the journey.

Two papers by Jean-Pierre Briot (CNRS) and François Pachet (Spotify):

The deep dive in book form:

  • Deep Learning Techniques for Music Generation (Briot et al., Springer, 2019)
  • Handbook of Artificial Intelligence for Music (Miranda et al., Springer, forthcoming 2021; chapter/paper preprints available online)

All the best,
António


Hi @antonioflorenca,

Thanks for starting this topic. DL is growing fast and can address a wide range of music tasks, from MIR to generative music via style imitation. Thanks for sharing these resources (by the way, do you have a link to the Miranda preprints available online?).
Another recent interesting paper is from Geoffroy Peeters, former Ircam MIR researcher. https://www.springerprofessional.de/the-deep-learning-revolution-in-mir-the-pros-and-cons-the-needs-/18945020

My 2 cents

Hi Greg,

No problem, you invited me to repost my original question here, and I was perfectly happy to do it.

Preprints of several HAI4M chapters are available online:

https://hal.archives-ouvertes.fr/hal-03081561/document

https://hal.archives-ouvertes.fr/hal-03046229/document
https://psyarxiv.com/a5yxf/

Thanks for the Peeters paper; he did great work on timbre and low-level audio descriptors. Even if generative music isn’t his field, it should be an interesting read.

All the best,
António
