Hi all,
We at IRCAM are looking for a Max/MSP developer who’s up for something a little different: starting a revolution in the world of brain science!
This is a fixed-term, 2-year contract based in Paris. Description below, and attached. I (JJ) will be available for informal discussion on Wednesday this week (19/11), so feel free to drop by (IRCAM -2 level, A208) if you are around for the Forum workshops.
Position: Max/MSP Developer (fixed term)
Place: IRCAM (STMS, UMR9912), in central Paris (France)
Duration: 2 years
Contact: JJ Aucouturier (CNRS) aucouturier at gmail.com
The context for this position is the ERC Starting Grant project CREAM (“Cracking the Emotional Code of Music”), led by PI Jean-Julien Aucouturier. The developer will be based at the STMS laboratory (Science and Technology of Music and Sound, UMR9912) at IRCAM, in central Paris (www.ircam.fr). The developer will work under the scientific supervision of JJ Aucouturier (CNRS), as part of IRCAM’s “Perception and Sound Design” team (dir.: Patrick Susini).
Scientific context and objectives:
We’re a small group of computer scientists and physicists on a mission to crack a complicated brain science problem (“how does music create emotions?”), with a hacker/DIY mentality, the drive to learn whatever biology is needed along the way, and the ambition to become a reference in the field within 5 years.
A recent theory suggests that music may create emotions by imitating the emotional expression found in spoken language: music’s trembling notes, hesitating phrases, and bright or dark timbres may well be “heard” and processed by the brain “as if” they were emotional speech (Juslin & Laukka, 2003). However, the cognitive neuroscience community lacks the audio signal processing tools and expertise needed to test this hypothesis.
As the Max/MSP developer in the team, your role will be to develop a series of Max/MSP tools able to transform certain voice and music characteristics in real time. These tools will be developed in collaboration with the other researchers in the team, and will be used to conduct experimental neuroscience studies.
Work description: The real-time audio manipulations will be specified from our recent pilot studies using professional audio hardware (VoicePro, TC Helicon; see Aucouturier et al., 2014). More precisely, your task will be to emulate in Max/MSP some of the functionalities available in that hardware: pitch modification (vibrato, inflection, pitch shifting), formant shifting, filtering (high-pass, low-pass) and dynamic compression. The software tools will need to meet strict real-time constraints, with a maximum latency of 15-20 ms.
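To make the scope concrete, here is a minimal, illustrative sketch (plain C++, not the Max/MSP externals you would actually write, and not the project’s specified method): one standard way to obtain a vibrato-style pitch modulation is a sinusoidally modulated delay line, whose base delay is the latency it adds and therefore has to fit within the 15-20 ms budget mentioned above. The class and parameter names are hypothetical placeholders.

```cpp
// Illustrative sketch only: vibrato as a sinusoidally modulated delay line.
// The base delay is the latency this block adds to the signal path.
#include <cmath>
#include <cstddef>
#include <vector>

class Vibrato {
public:
    Vibrato(double sampleRate, double rateHz, double depthMs, double baseDelayMs)
        : sr(sampleRate), rateHz(rateHz),
          depthSamp(depthMs * 1e-3 * sampleRate),       // modulation depth, in samples
          baseSamp(baseDelayMs * 1e-3 * sampleRate),    // base delay, in samples
          buffer(static_cast<size_t>(baseSamp + depthSamp) + 2, 0.0f) {}

    // Process one sample; call once per input sample in the audio callback.
    float process(float in) {
        buffer[writePos] = in;
        // Instantaneous delay (samples): base delay +/- sinusoidal modulation.
        const double twoPi = 6.283185307179586;
        double delay = baseSamp + depthSamp * std::sin(twoPi * rateHz * t);
        t += 1.0 / sr;
        // Read behind the write head, with linear interpolation.
        double readPos = static_cast<double>(writePos) - delay;
        if (readPos < 0.0) readPos += static_cast<double>(buffer.size());
        size_t i0 = static_cast<size_t>(readPos);
        size_t i1 = (i0 + 1) % buffer.size();
        double frac = readPos - static_cast<double>(i0);
        float out = static_cast<float>((1.0 - frac) * buffer[i0] + frac * buffer[i1]);
        writePos = (writePos + 1) % buffer.size();
        return out;
    }

private:
    double sr, rateHz, depthSamp, baseSamp;
    double t = 0.0;
    std::vector<float> buffer;
    size_t writePos = 0;
};

// Example: ~6 Hz vibrato, 2 ms depth, 5 ms base delay at 44.1 kHz adds
// roughly 5 ms of latency, comfortably inside a 15-20 ms budget.
//   Vibrato vib(44100.0, 6.0, 2.0, 5.0);
//   for each input sample s: out = vib.process(s);
```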
As a first step, you will build a prototype using tools already available in the Max/MSP community, at IRCAM and elsewhere (SuperVP, Trax, PSOLA, etc.). Then, you will extend these existing tools by porting to Max/MSP more recent functionality (e.g. that developed in IRCAM’s Analysis/Synthesis team) that may prove necessary for the neuroscience studies conducted in the team. Finally, you will help disseminate these new tools in the neuroscience community, e.g. by supporting other laboratories that wish to use them in their own work.
At IRCAM, you will work in close interaction with a signal processing doctoral student tasked with using and testing the tools you develop. In addition, you will be part of a larger team composed of at least one other doctoral student and one postdoctoral researcher.
Ideal background:
The ideal person for this position is a software developer specialized in real-time audio signal processing. He/she should hold a Master’s or doctorate in computer science/signal processing (or have equivalent work experience) and demonstrate excellent audio programming skills in Max/MSP and C++. He/she should have experience developing Max/MSP objects and working with audio acquisition hardware/drivers, and have a good knowledge of audio processing algorithms (PSOLA, phase vocoder, etc.).
Environment:
IRCAM was created by composer Pierre Boulez in 1977; it is now the world’s largest R&D institute in computer music, as well as an important art centre for contemporary music (http://www.ircam.fr). IRCAM is ideally located in central Paris, just opposite the Centre Pompidou modern art museum. Fully equipped with psychoacoustics and neuroscience experimentation booths, IRCAM’s Perception and Sound Design (PDS) team (http://pds.ircam.fr) is the only research unit in the institute devoted to cognition and the experimental science of sound and music. Rooted in the seminal studies of music timbre perception by D. Wessel and S. McAdams, work in the PDS team now encompasses topics as varied as sonic environmental quality (for which we won the Ministry of the Environment’s Décibel d’or 2014 award), sound design (we designed the sound of the new Renault electric cars) and music neuroscience.
Duration: 2 years (can be discussed)
Salary: c. 2,000€ per month, depending on work experience and academic qualifications (this is a CNRS ingénieur d’étude/ingénieur de recherche-level position)
Applying: Candidates should send a detailed CV, a cover letter and an online portfolio of software work (GitHub, maxobjects, etc.) by email to Jean-Julien Aucouturier (aucouturier@gmail.com) before December 1st, 2014. Candidate interviews will be held in Paris in December 2014, for a starting date as early as January 2015.
Aucouturier, J.J., Johansson, P., Segnini, R., Mercadié, L., Hall, L. & Watanabe, K. (2014). Covert digital manipulations of vocal emotion alter the speaker’s emotional state in a congruent direction. Submitted (available on request from JJA).
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770.
Job-description-EN.pdf (84.7 KB)