Crosspost: Strategies and experiences while mapping sensors to concatenative synthesis parameters

Allow me to crosspost a very interesting question by agustinissidoro on the mubu forum (please answer there):


Hi there! I am currently working on a piece for woodwind quintet and electronics. Each of the instruments has a gyroscope and accelerometer attached to it (MPU6050 via an Arduino Mega). The idea is to have the instruments control a concatenative synthesis patch. Ideally, each instrument would have a separate patch and an independent buffer/concatenative engine.

I wonder if anyone has experience that could help me design the mapping from the sensor data to the triggering of segments in the concatenative synthesis. Should I go with a movement-recognition strategy, such as mubu.gmm or mubu.hhmm, and use the likelihoods to feed mubu.knn? Or should I find a way to scale the sensor data directly and feed the decision tree?
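To make the "direct scaling" option concrete, here is a minimal plain-Python sketch (not MuBu code; all names, calibration ranges, and corpus values are hypothetical): raw 6-axis sensor frames are min-max scaled, and the nearest corpus segment by Euclidean distance on the scaled descriptors is selected, which is roughly what a kNN lookup like mubu.knn does against unit descriptors.

```python
import math

def scale(frame, lo, hi):
    """Min-max scale each sensor axis to [0, 1], given calibration bounds."""
    return [(v - l) / (h - l) if h > l else 0.0
            for v, l, h in zip(frame, lo, hi)]

def nearest_segment(target, corpus):
    """Return the index of the corpus descriptor closest to the target."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(corpus)), key=lambda i: dist(corpus[i], target))

# Hypothetical calibration range for 6 axes (3 accel + 3 gyro)
lo = [-2.0] * 6
hi = [2.0] * 6

# Hypothetical corpus: one normalized descriptor per segment
corpus = [
    [0.1, 0.2, 0.1, 0.5, 0.5, 0.5],  # segment 0: quiet gesture
    [0.9, 0.8, 0.7, 0.5, 0.5, 0.5],  # segment 1: energetic gesture
]

frame = [1.6, 1.2, 0.8, 0.0, 0.0, 0.0]  # one raw sensor reading
print(nearest_segment(scale(frame, lo, hi), corpus))  # → 1
```

The gesture-recognition route (mubu.gmm / mubu.hhmm) replaces the raw scaled frame with a vector of model likelihoods before the lookup, which tends to be more robust to sensor noise and performer variation, at the cost of a training phase per instrument.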

Any advice or comments would be much appreciated!