Since I have received some questions from some of you privately (thank you for that!), I will respond here for everyone, hoping others might also find this useful.
- I didn’t get to mention this in the video, but the real-time analysis is also implemented through the MuBu library, specifically with pipo.mavrg fed with values from the average~ object (with an fffb~ dividing the spectrum into 8 bands)
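The analysis chain above (filter bank into 8 bands, then per-band averaging) can be sketched outside Max. This is only an illustrative Python approximation of the idea, not the actual MuBu/pipo implementation; the class name and window size are my own invention:

```python
from collections import deque

class BandMovingAverage:
    """Toy analogue of the average~ -> pipo.mavrg stage described above:
    keeps a sliding mean of the energy in each of 8 spectral bands.
    (The real patch does this in Max with MuBu; this is only a sketch.)"""

    def __init__(self, n_bands=8, window=10):
        # one fixed-length history per band; deque drops old values itself
        self.histories = [deque(maxlen=window) for _ in range(n_bands)]

    def push(self, band_energies):
        """band_energies: one energy value per band, e.g. from a filter bank.
        Returns the current moving average of each band."""
        for hist, e in zip(self.histories, band_energies):
            hist.append(e)
        return [sum(h) / len(h) for h in self.histories]

avg = BandMovingAverage()
smoothed = avg.push([0.2] * 8)  # after one frame, each average is 0.2
```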
- The dynamic detuning is obtained by exciting two natural harmonics on two strings, sustaining them by balancing bow pressure, speed and position, and alternating between repeatedly touching the node and detuning the string, going back and forth from the node of the string to the peg.
I mention a 7 Hz difference between the two partials of the natural harmonics, one being at 330 Hz and one at 323 Hz (not 223…of course, a typo, thank you for pointing that out!).
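For readers less familiar with beating: the perceived beating rate between two close partials is simply the absolute difference of their frequencies, which is where the 7 Hz comes from:

```python
# Beating rate between two nearby partials = |f1 - f2|
f1 = 330.0  # Hz, first natural harmonic (values from the text)
f2 = 323.0  # Hz, second natural harmonic after detuning

beat_rate = abs(f1 - f2)
print(beat_rate)  # 7.0 beats per second
```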
On using AudioSculpt as an analysis tool to understand the behaviour of the instrument and the sound:
the musician is not asked to obtain a 7 Hz detuning specifically, but to achieve this balance between the two harmonics, to start the action of detuning (while maintaining the relationship established initially), to search for natural beatings within a microtonal range, and then to retune the string. Several analyses were needed because, to our knowledge, there is no previous literature on this specific technique yet; the musician can explain and be conscious of the actions and of some behaviours only to a certain extent, while the spectral analysis shows exactly which partials are most affected by the beatings, etc.
The speed, occurrence and stability of these beatings are quite unpredictable (i.e. one can never exactly predict nor control when the fundamental might take over, nor on which of the two strings it might take over; this might require the musician to interrupt the action on the peg, touch the node again, and go back and forth). Rather than imposing constraints that would “fight” against the organic nature of this kind of technique and sound (which is very rich), I wanted to embrace this instability and its unpredictable features, to play with them and compose with them. After several analyses, different attempts, and discussions with Florentin Ginot, we agreed that the best way to achieve that was to foresee a degree of freedom: a space within which the musician could move by deciding the “harmonic” path, a phrasing with a global and local duration/relationship, and approximate ranges of detuning and retuning expressed in microtones, with a more indeterminate notation… etc.
Because both the live-electronics player and the double bassist are playing with beatings, it becomes a duo in which the two players need to listen to each other within these detuning ranges. This is the essence and heart of the piece: an instrumental body that becomes a whole with the musician and the electronics, where none of the three dominates the others; rather, they are continuously searching to maintain a fragile balance.
The patch uses the angular position of the sources and their dynamic movement through spat5.pan~ (no virtual room): no reverb was used, neither in Spat, nor for the performance, nor in the post-production…the binaural encoding for the video does affect the sound on some level.
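To give an idea of what panning by angular position means, here is a toy stereo equal-power pan law driven by azimuth. This is purely illustrative: spat5.pan~ implements its own multichannel panning algorithms, and the angle mapping below is my own simplification:

```python
import math

def equal_power_pan(sample, azimuth_deg):
    """Toy stereo equal-power pan law by azimuth.
    -45 deg = full left, +45 deg = full right. Illustrative only:
    spat5.pan~ uses its own (multichannel) panning laws."""
    theta = math.radians(azimuth_deg)
    pos = (theta + math.pi / 4) / (math.pi / 2)  # map angle to 0..1
    pos = min(max(pos, 0.0), 1.0)                # clamp to the stereo field
    left = math.cos(pos * math.pi / 2) * sample  # equal-power gain curves
    right = math.sin(pos * math.pi / 2) * sample
    return left, right

l, r = equal_power_pan(1.0, 0.0)  # centre: both gains = cos(45 deg) ~ 0.707
```

The equal-power (rather than linear) curves keep the perceived loudness roughly constant as a source moves across the field, which is what makes continuous angular movement of sources sound smooth.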
If there are any further questions, I will be happy to answer!
Claudia Jane Scroccaro