
Thread: projects by Forum Members using Dicy2

:notes: :arrow_forward: This new thread aims to list resources (dates, shows, videos, articles, media, etc.) related to creative projects using Dicy2 for Max or Dicy2 for Live by Forum members.

:notes: :arrow_forward: Another thread dedicated to Ircam projects using Dicy2 is available here.

:computer: :arrow_forward: To share technical issues (detailed setups, feedback, suggestions, bug reports, feature requests, etc.), please use the Dicy2 for Max Forum discussion or the Dicy2 for Live Forum discussion.

:sparkles: :sparkles: Feedback is very important to us! And sharing your setups and projects makes it possible to inspire other members and to be inspired by their projects!


Duos for one pianist by Hans Tutschku
(Thank you @hanstutschku for sharing this information with us!)
Two versions will premiere on 6 April 2024 at Harvard University, performed by Vijay Iyer and the composer himself.

Development of an improvisation environment for two pianos. The pianist’s improvisation is tracked via MIDI and sent to Dicy2. The listening agent is initially empty and learns from the live player. The generated sequences are performed on a player piano, inspiring the improvising musician to react: a mutual listening scenario.
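A minimal sketch of that data flow, in Python rather than Max: the real patch runs on the Dicy2 library inside Max, and every name below (OSC addresses, port numbers, MIDI port names) is hypothetical, chosen only to illustrate the live-input → learning-agent → player-piano loop.

```python
# Sketch of the loop: live MIDI -> learning agent -> player piano.
import threading

import mido                                    # MIDI I/O
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

agent = SimpleUDPClient("127.0.0.1", 5005)     # hypothetical agent input port
piano_out = mido.open_output("Player Piano")   # hypothetical MIDI port name


def on_generated(address, note, velocity):
    """Forward each event generated by the agent to the player piano."""
    piano_out.send(mido.Message("note_on", note=note, velocity=velocity))


# Listen for the agent's generated sequence on a second (hypothetical) port.
dispatcher = Dispatcher()
dispatcher.map("/agent/generated", on_generated)
server = BlockingOSCUDPServer(("127.0.0.1", 5006), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Track the live pianist and feed every note to the initially empty agent,
# which learns the improviser's vocabulary as the performance unfolds.
with mido.open_input("PianoScan") as live_in:
    for msg in live_in:
        if msg.type == "note_on" and msg.velocity > 0:
            agent.send_message("/agent/learn", [msg.note, msg.velocity])
            agent.send_message("/agent/query", 1)  # request a continuation
```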

Video

Duos for one pianist from Hans Tutschku on Vimeo.

I am working towards my ‘duos for one pianist’. Today, I tested some new listening agents. Some are not intelligent, just straightforward number-crunching devices with many layers of randomness. Others, like this one, seem more intelligent and musical, based on IRCAM’s Dicy2. The left piano is a Steinway D fitted with the PianoScan strip, and the right instrument is a Steinway Spirio r.


Experimentation on noises and voices by kalikay @kalikay
I discovered Dicy2 during a presentation at IRCAM. I was quite intrigued and, I must say, impressed by the software and its capabilities. I also felt a bit overwhelmed: the theoretical aspect, though interesting, seemed beyond my artistic and creative concerns.
Later on, I started watching tutorials on YouTube, which helped me understand how Dicy2 works. However, up to now, I have mainly used the Max for Live versions.

With these versions, I became interested in exploring how Dicy2 could “react” to less musical, less harmonic sounds.

This is research I am currently conducting, mainly within the framework of my project around the historic recording of Antonin Artaud’s « Pour en finir avec le jugement de dieu », where I aim to use the voice from the original recording as a sound that I link to the text and its meaning. I would like to use Dicy2 to categorize and cut these sounds, and ultimately to “manipulate” them in a live context.
This recording is quite noisy, and I find it interesting to see how Dicy2 will manage to “transcribe” these sounds, as well as the sound of the words. This is the aspect of the project that intrigues me the most.
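As a rough illustration of the “categorize and cut” idea (in Python rather than Max, and independent of Dicy2 itself), one could slice the recording at detected onsets and cluster the slices into classes; the file name and cluster count below are arbitrary placeholders.

```python
# Slice a noisy recording at onsets, then cluster the slices into classes.
import librosa
import numpy as np
from sklearn.cluster import KMeans

y, sr = librosa.load("artaud_recording.wav", sr=None, mono=True)  # placeholder

# "Cut": onset positions (in samples) become slice boundaries.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
bounds = np.concatenate([[0], onsets, [len(y)]])
slices = [y[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 2048]

# "Categorize": one mean MFCC vector per slice, clustered with k-means.
feats = np.array(
    [librosa.feature.mfcc(y=s, sr=sr, n_mfcc=13).mean(axis=1) for s in slices]
)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(feats)

for i, lab in enumerate(labels):
    print(f"slice {i:3d} -> class {lab}")  # labels could seed a Dicy2 alphabet
```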

My patches are really simple, and I mainly use the « easy » modules of the Max version.

In this video, I use Dicy2 with field-recording materials.

In summary, my use of Dicy2 is intended to be very experimental: finding ways to go beyond its usual use.


Hey everyone!

My name is Vincent Cusson, I’m an audio technologist interested in interactive music and instrument design.

We’ve had the chance to experiment with one of the first versions of the tool since 2022.
It all started in preparation for Tommy Davis’ doctoral recital, which focused on human-machine improvisations.

I remember being intimidated at the time by all the settings and options available. We had to find a balance between aiming for a thorough understanding of the system and keeping our focus on the resulting artistic outcome. The concert went well (in the sense of “without a crash”), but we felt we hadn’t even scratched the surface of what a tool like this could offer.

We then decided to work on our own performance setup including the Dicy2 library. The project, called eTu{d,b}e, simultaneously refers to the name of the eTube instrument and to a series of improvised études based on human-computer musical interactions.


The latest version of the eTube controller


The intent behind the development of this augmented instrument was to allow the musician to communicate with the agents during a performance. In addition to the corpus creation and curation, we identified various parameters in the patch which seemed to afford certain kinds of interaction in real time.
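Purely as an illustration of this kind of instrument-to-agent communication (not the actual eTu{d,b}e patch), a sketch might map controller events to real-time parameter changes sent to an agent; every name, address, and parameter below is hypothetical.

```python
# Map (hypothetical) eTube controller events to agent parameters over OSC.
from pythonosc.udp_client import SimpleUDPClient

agent = SimpleUDPClient("127.0.0.1", 5005)       # hypothetical agent port

# Imagined controller events -> patch parameters affording distinct
# real-time interactions (all addresses and values are made up).
MAPPING = {
    "button_1": ("/agent/memory/clear", []),     # wipe the learned corpus
    "button_2": ("/agent/continuity", [0.2]),    # jumpier generation
    "button_3": ("/agent/continuity", [0.9]),    # smoother generation
    "pressure": ("/agent/output_gain", None),    # continuous control
}


def handle_controller_event(name, value=0.0):
    """Translate one controller event into an OSC message to the agent."""
    address, args = MAPPING[name]
    agent.send_message(address, args if args is not None else [value])


handle_controller_event("button_2")           # ask for more fragmented output
handle_controller_event("pressure", 0.7)      # scale the agent's loudness
```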

An eTu{d,b}e performance presented at NIME in 2022


The following year, Kasey Pocius joined the team to work on interactive spatialization models for the agents. A lot of work was done to update the library and implement several cool features. In the latest iteration, we have combined Dicy2 with other improvising frameworks to extend our research on musical agents.

This new setup can be seen in action in this video:

Our work with these tools has also led to some publications over the past two years.

Looking back, it is interesting to observe the correlation between our understanding of these systems and the artistic output over time. We will try to update this thread with upcoming work, and we look forward to seeing how you are using Dicy2 and to exchanging on the subject.

Thanks to @jnika for their advice and support!


Building a Live Coding environment using Dicy2 by Federico Placidi
(Thank you @federicoplacidi for sharing this inspiring work in progress!)


Dicy2 is used to query or navigate a sound map in real time. The resulting list of indices is then exported to PatchXR, which can be thought of as a VR version of Max/Pd.

Each color on the map corresponds to a class, and Dicy2’s structured generation notably restores the continuity that can be lost when chopping sounds in this way.
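A loose sketch of that pipeline (hypothetical names throughout, with Dicy2’s generation stubbed out by a random choice that the real system replaces with structured, continuity-preserving selection): class labels stand in for the map’s colors, and the query result is the index list exported for PatchXR.

```python
# Query a class-labelled "sound map" and export the index list for PatchXR.
import json
import random

# Imagine each corpus slice already carries a class label (a color on the map).
slice_classes = ["red", "red", "blue", "green", "blue", "red", "green"]


def query(scenario):
    """Pick one matching slice index per requested class.

    Dicy2 is far more structured (it preserves continuity learned from
    the corpus); random choice here only marks where it would decide.
    """
    by_class = {}
    for idx, lab in enumerate(slice_classes):
        by_class.setdefault(lab, []).append(idx)
    return [random.choice(by_class[lab]) for lab in scenario]


indices = query(["red", "blue", "blue", "green"])
with open("patchxr_indices.json", "w") as f:
    json.dump(indices, f)        # this file would be read on the PatchXR side
print(indices)
```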

Early attempts at correlating animation with generative audio (sound-driven visuals).



Using Dicy2 to improvise by
Franziska Baumann (Berne, Switzerland), voice, live electronics, Dicy2 (Ableton)

I am a singer, composer and improviser who often performs with a SensorGlove.
In 2023 I started to explore Jérôme Nika’s Dicy2 and its possibilities for composing interaction variability. It opens up great possibilities that go far beyond conventional mapping strategies. In January 2024 I was invited to play an intervention at a KI (AI) symposium in Bern. In the pieces, which were recorded with a mobile phone, I also used SKataRT plug-ins. The impression in the video is somewhat limited because I played over a four-speaker system to make Dicy2 perceptible to the audience:

a) The front speakers play the direct audio signals, while speakers three and four, placed behind the audience, play the signals from Dicy2 and SKataRT.
b) All four speakers are used for spatialization with E4L (Envelop for Live) plug-ins.

In the two pieces, I examined two contrasting behaviours of Dicy2’s interactive agents.

Piece I

Piece I, “Who talks?”, blurs the boundaries between human and artificial expression, between embodied and disembodied vocal sound. The synthesized voices in Dicy2 react so quickly that the listener cannot distinguish between natural and artificial voices. Later in the piece, the SensorGlove plays a corpus within SKataRT.

KI Symposium, Kornhausforum Bern, January 2024 (Video, recorded with a mobile phone)

Piece II

In the second piece, the composed behaviour is designed to allow artificial duo interactions, with a scale that follows Dicy2’s inspiration or reaction events. The second part also includes a looper and pitch shifter played with the SensorGlove.


A new post by @kalikay, updating on his work with Dicy2, is available here:

Thanks!