
Strategies and experiences while mapping sensors to concatenative synthesis parameters

Hi there! I am currently working on a piece for woodwind quintet and electronics. Each of the instruments has a gyroscope and accelerometer attached to it (MPU6050 via Arduino Mega). The idea is to have the instruments control a concatenative synthesis patch. Ideally, each instrument would have a separate patch with its own independent buffer and concatenative synthesis engine.
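For reference, this is roughly how the sensor data is read on the Arduino side. It is only a minimal sketch, not the actual code of the piece: it assumes the MPU6050 at its default I2C address 0x68, raw register reads with no extra library, and a plain space-separated serial line that gets parsed on the computer side.

```cpp
// Minimal MPU6050 read-and-send sketch (assumptions: default I2C address 0x68,
// raw register reads, values forwarded as one space-separated serial line).
#include <Wire.h>

const int MPU_ADDR = 0x68;  // default MPU6050 address (AD0 pin low)

// Read one big-endian 16-bit value from the I2C stream
int16_t read16() {
  int hi = Wire.read();
  int lo = Wire.read();
  return (int16_t)((hi << 8) | lo);
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  // Wake the MPU6050 (it starts in sleep mode): clear PWR_MGMT_1 (0x6B)
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);
  Wire.write(0);
  Wire.endTransmission(true);
}

void loop() {
  // Accel X/Y/Z, temperature, gyro X/Y/Z: 14 bytes starting at register 0x3B
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 14, true);

  int16_t ax = read16();
  int16_t ay = read16();
  int16_t az = read16();
  read16();                 // skip temperature
  int16_t gx = read16();
  int16_t gy = read16();
  int16_t gz = read16();

  // One frame per line, to be parsed in the patch
  Serial.print(ax); Serial.print(' ');
  Serial.print(ay); Serial.print(' ');
  Serial.print(az); Serial.print(' ');
  Serial.print(gx); Serial.print(' ');
  Serial.print(gy); Serial.print(' ');
  Serial.println(gz);

  delay(10);  // roughly 100 Hz, adjust to taste
}
```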

I wonder whether anyone has experience that could help me design the mapping from the sensor data to the triggering of segments in the concatenative synthesis. Should I go with a movement recognition strategy, such as mubu.gmm or mubu.hhmm, and use the likelihoods to feed mubu.knn? Or should I find a way to scale the sensor data directly and feed the decision tree?

Any advice or comments would be much appreciated!

Agustin.

Hi Agustin,

A few elements that might guide you:

About mapping

  • first try the simplest thing: map the sensor values directly to knn (try the 1-D MuBu overview, or the 2-D case in CataRT)
  • then of course you could try gesture recognition. There are some examples in the “examples” folder of the Gestural-Sound-Toolkit for gmm/hmm. For example, you could use the likelihood from gmm/hmm to mix (or select) the results from the different buffers in mubu.concat (see the sketch after this list for the general idea)
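To give a rough idea of the kind of preprocessing this implies, here is a small illustrative sketch in plain C++ (nothing MuBu-specific, and the full-scale ranges and the idea of a hard argmax selection are just assumptions): scale the raw sensor values to 0..1 before feeding knn, and pick a buffer from the gmm/hmm likelihoods.

```cpp
// Illustrative preprocessing sketch (plain C++, not MuBu/Max code):
// (1) scale raw MPU6050 values to a 0..1 range before feeding them to knn,
// (2) pick the buffer whose gmm/hmm likelihood is highest.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Map a raw 16-bit sensor value to 0..1 (the full-scale range is an assumption;
// in practice you would calibrate per instrument / per player).
float scaleRaw(int16_t raw) {
  return (static_cast<float>(raw) + 32768.0f) / 65535.0f;
}

// Hard selection: choose the buffer index with the highest likelihood.
// You could also use the likelihoods as mixing weights instead.
std::size_t selectBuffer(const std::vector<float>& likelihoods) {
  return static_cast<std::size_t>(std::distance(
      likelihoods.begin(),
      std::max_element(likelihoods.begin(), likelihoods.end())));
}

int main() {
  int16_t rawAccelX = 14200;                            // example raw reading
  std::vector<float> likelihoods = {0.1f, 0.7f, 0.2f};  // e.g. from gmm/hmm

  std::printf("knn input x = %.3f, selected buffer = %zu\n",
              scaleRaw(rawAccelX), selectBuffer(likelihoods));
  return 0;
}
```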

Fred


Hi, @bevilacq! Thank you very much for your kind answer. It was great advice! I’ve been trying all this new stuff for the last few days and it seems to work fantastically! Thank you again!

Agustin