
SPAT for Max for Live

I have been trying to configure SPAT 5 as a Max for Live patch to use in live performance. Akihiko Matsumoto has posted a 1-hour video on how to do this: https://www.youtube.com/watch?v=3yaeMKECrhI. However, the screen resolution is poor (and so is my eyesight, even with a magnifying glass). I cannot get past the early part of the tutorial without missing parts of the modifications he types into objects and messages.

Has anyone made a SPAT Max for Live patch they would be willing to share? I am sure there are others who would be grateful for this. If you want to message me directly, my email is hall_g@rogers.com


Hi,

Here’s one of my public projects: PanoLive

Best,

Jerome


Hi Jerome,

Thank you so much for your quick reply and generous sharing. I am still struggling with SPAT and have not yet tackled Panoramix. This is a much appreciated ‘leg up’.

Best wishes,

Glen

Hi @glenhall,

Great idea, I’ll push your request forward. Note that the internal stereo structure of Live and M4L (from one plugin to another, for instance) can force an inefficient programming style and result in a heavy session. For instance, to route 64 output channels into another device, you would need to route 32 stereo channels using the audioroute M4L library, which takes some time. But you may not need extensive multichannel routing if you are not doing HOA or playing in a big concert hall with many speakers. What is your workflow, then? Personally, I use Spat in a separate Max patch: I route audio from Live to Max through my sound card, and I control Max with a bunch of OSC messages sent from an M4L device. What reason would drive us toward an M4L device?
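As a rough sketch of this OSC-control workflow, the following stdlib-only Python snippet hand-encodes a spat5-style OSC packet of the kind an M4L device (or any external controller) would send toward the Max patch. The address `/source/1/xyz` and port 9001 are illustrative assumptions; in practice Max’s [udpsend]/[udpreceive] or a ready-made OSC library does this encoding for you.

```python
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message whose arguments are all 32-bit floats."""
    def pad(b: bytes) -> bytes:
        b += b"\x00"                         # OSC strings are null-terminated...
        return b + b"\x00" * (-len(b) % 4)   # ...and padded to a 4-byte boundary
    msg = pad(address.encode("ascii"))                     # address pattern
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                        # big-endian float32
    return msg

# Hypothetical spat5-style address: move source 1 to (x, y, z) = (1.0, 2.0, 0.0)
packet = osc_message("/source/1/xyz", 1.0, 2.0, 0.0)

# To actually send it to a [udpreceive 9001] in the Max patch:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9001))
```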
Best

Hi Greg,

Thanks for your encouraging response. My workflow is quite modest. Mostly it is for spatializing small groups (2-4 musicians) in my personal studio (8 speakers, usually just 4 for ‘surround’), small clubs and performance spaces. I occasionally perform at Doug Van Nort’s Dispersion Lab at York University (Canada) with 24 speakers, but he has a tech who has promised to eventually configure my Ableton Live system for the Lab.

My concern is to have some M4L ‘building blocks’ to get beginners started, allow DJs to utilise SPAT in Live, use SPAT in Zoom and other live online broadcasts, and, in particular, minimise the CPU workload of SPAT instantiations on several tracks. As for 64-track outputs, that is beyond my needs and capabilities at present, but I feel certain that there are some SPAT users who have these needs.

A GitHub repository for these varied solutions may be a good place to start collecting and storing these M4L patches. Researchers at IRCAM and skilled Max for Live users in the SPAT community would be the place to begin putting out a call for contributions.


What reasons? :slight_smile:

  • to reach a community of users and places that are not familiar with Max, that nevertheless have an interest in spatial sound, and that live with other constraints than the academic field.
  • to be able to quickly load a setup with a flexible tool. This is important for places and venues where production time is an issue. Of course, in the academic world this is not a concern -> point one.
  • to have a tool that gives more room to the creative aspects than to the technical issues.
  • to manage project formats that can be prepared in advance and deployed in whatever setup, or in venues where the artist knows the system will be the same. And Ableton is the most popular DAW in the world, and is improved regularly.
  • to take advantage of Ableton’s timeline, which you could indeed still have by sending OSC to Max, but then you don’t need to care about OSC toward another piece of software anymore.
    …and more.

I have never been pro-Ableton. You can even find threads in this forum where I ask the same questions about the point of using Ableton. I changed my mind, because I believe the practice of spatial sound is still kept under the knives of technicians/engineers, while it should become an artistic language. I totally understand @glenhall’s interests, as I belong to a DIY scene too. We need to popularize that language and give the keys of creation to artists, not engineers, whose work is already exceptional in developing that library (poke T.)

My integration of Spat in Live is following its path. It is far from finished, but I am happy with where it is now. I have even been pleased to observe that, starting from scratch in my studio without any pre-established setup, it only takes a few minutes before I can focus on the creative part.

spat for sat

What Matsumoto does in the video is very basic, and actually I doubt that, in the case of Spat, it would help in the end. I mean, if it is only for doing that, then, as @beller said, I would definitely stay with Max and put my energy into an OSC device. My proposition takes another way, which I believe is better and justifies going into Live.

I hope to be able to share a ‘basic’ beta version sometime soon, at the beginning of the year. For now it is based on 3 amxd devices, and the visual domain is handled by Jitter. Jitter is actually where I aim to have the geometry happen, along with some other cool visualization features that will help to understand the overall sound scene. All the routing is transparent for the user. The distributed version will likely have one main limitation: 8 sources maximum. But working in conjunction with audioroute (you can send signals from any track to any of these sources), it will already offer plenty of possibilities.
Cheers!


Can’t wait!

Let me know if you need testers for the Alpha version :wink:

All the best


Yes, for sure Greg! As soon as I have something that’s roughly OK.

I have some big architectural wonderings at the moment. While for now the Jitter window is included in a ‘master’ amxd device, I think it would be more robust to host everything that depends on Jitter in a Max runtime, launched when the set loads in Ableton, and to use OSC to manage all the messages to that window from Ableton. I am running some tests right now. It would also protect the Jitter-side code a bit more :slight_smile:

Hello Fraction,

Thank you for your detailed and insightful response.

Your reasons are very much in line with my hopes for a ‘technician’s answer’ to an amateur’s wishes. I recently spent $200 on a programmer recommended by Cycling74, only to come up with a barely usable version of SPAT in M4L. His explanation was difficult to follow and to recreate.

As with many M4L patches, Live users either:

  • do not know what Max components are needed to achieve their goals, and do not know how those components need to be configured, or
  • lack the motivation to research how to achieve their goals and are simply waiting for someone to do the work for them.

I am in the middle of these two.

My goal is to have configurations of SPAT in M4L that can address my changing needs. For instance, I am working with an improvising violinist who wants her playing spatialized for a live radio performance (therefore a ‘virtual spatialisation’ for stereo reproduction).

Myself, I use CataRT and OMax, which I route through Live so that I can apply various effects to their outputs. I would like to be able to spatialise those outputs in my own 4-speaker and 8-speaker systems.

As projects evolve, my needs for having SPAT configured for use in Ableton keep changing. The SPAT tutorial video is too small on my 15" screen to see the values being entered. They are unreadable, so the tutorial’s information is, for me, unusable. I wrote to him with questions but did not receive a reply.

Fraction, it seems you both understand what I and other Live users might be looking for, and have the technical expertise to create the kind of M4L patches needed for SPAT’s integration with Live.

I believe Greg Beller would be an excellent advocate for any alpha or beta testing that will be necessary for finding the ‘sweet spot’ for the M4L .amxd patches you might find the time and energy to create.

You will have very grateful users thanking you.

Best wishes,

Glen

Hi Glen,

I’m building an interface for a performance at University of Toronto’s EMS. We have a 9 channel system here. I’m currently stuck on configuring the 9 channel outs within M4L.

As of right now, in standalone Max I have succeeded in routing the spat5 object and sending audio to 9 separate outs, panning a [ playlist~ ] object across the channels of a [ dac~ 1 2 3 4 5 6 7 8 9 ]. Everything sounds great.

However, when I tried to implement this in an M4L device on an audio track, this time using [ plugin~ ] -----> [ plugout~ 1 2 3 4 5 6 7 8 9 ] as well as trying [ dac~ 1 2 3 4 5 6 7 8 9 ], Ableton 10 is not able to give me the proper audio routings out of either object:

Using dac~, I get no output at all.

Using plugout~, it routes channel 1 to outs 1, 3, 5, 7 and channel 2 to outs 2, 4, 6, 8.

I’m assuming this is an issue with Ableton. I haven’t been able to find anything online that addresses this problem very well yet.

I was wondering if anybody has any idea how I could fix this, as I would optimally like to use Ableton’s clip launcher and other tools in my performance.

Picture of Standalone version in Max
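For what it’s worth, the usual workaround for stereo-limited M4L devices is the one Greg described earlier in the thread: split the wide output into stereo pairs, one pair per device, e.g. via the audioroute library. The Python sketch below only illustrates the bookkeeping of that split; the pairing scheme and device indices are hypothetical, not audioroute’s actual API.

```python
def stereo_pairing(n_channels: int):
    """Map n_channels mono channels onto stereo devices. Returns a list of
    (device_index, left_channel, right_channel_or_None) tuples, channels 1-based."""
    pairs = []
    for dev, left in enumerate(range(1, n_channels + 1, 2)):
        right = left + 1 if left + 1 <= n_channels else None  # odd tail -> half pair
        pairs.append((dev, left, right))
    return pairs

# 9 spat5 output channels -> 5 stereo M4L devices, the last carrying one channel
print(stereo_pairing(9))
# -> [(0, 1, 2), (1, 3, 4), (2, 5, 6), (3, 7, 8), (4, 9, None)]
```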

Hi noceilings3,

Since most of what I do is for stereo, 4 and 8 channel outputs, I don’t have any experience with this issue.

As your problem, as you say, seems to come from Ableton and M4L, I would suggest that you contact Ableton Support. As more people begin to use Spat in M4L, it’s an issue they are going to have to contend with sooner rather than later.

I find their support techs knowledgeable and eager to help, often following up on their suggestions to see if the inquirer’s problem has been fixed.

Good luck with your project. Once the pandemic is over, I’d like to see how it worked out. I live near Toronto, so a visit is a possibility.

Best wishes,

Glen

You might learn a lot about multichannel audio routing in M4L from here :


@glenhall

Catching up on the thread again.
I am making progress here with my development. I’ll be presenting a full demo during the IX IRCAM Forum at SAT next week (online). I created it as a frame for an audiovisual piece/installation, but perhaps the software part could also be interesting for generic use.

Meanwhile, you can have a look at this little video showing the process. All audio routing is transparent for the user, and all of Spat relies on 3 M4L objects: engine, source, room. Everything loads quite quickly, you can save the project and find your babies back again, etc.
(https://www.dropbox.com/s/ksmakshvjhjd2m2/video_19122020.mov?dl=0)

You can quite easily shift between setups and pan types, and use Live automation. I have to say that Live automation is very CPU-inefficient, but it works; it all depends on the machine. On the other hand, using generative and procedural devices to control source motions will be interesting, as that will be handled by an external application running in the Jitter “domain”.

On my side, it now needs a bit more testing of its limits before I can really announce it as a good alternative. It’s still at the experimental stage.

There will be some limitations compared to running it directly from Max, but I think that for small-scale projects or quick setups it might be a decent option, I hope.

More soon,
E.

Hi,

Well, this is all very interesting. I didn’t know that you could use several Spat devices without any problem. My PanoLive project works with a central M4L device based on panoramix.

But after making another Spat device for each track, I had problems when there were multiple instances of a device using Spat. I’m talking about:

Alveoli.amxd : coming soon

If I put in two of them, the workaround to make it work is to launch Live and load the device, then open the project containing the different instances.

So I’m very curious to know more, and to see the software project. If I can help, it will be a pleasure.

Best,

Jerome

Hi Jérôme,

There is a peculiar issue when using spat5 in Ableton Live, and loading a session that contains multiple instances of a device. Under certain circumstances, this provokes a (reproducible) crash.
It’s a known issue of spat5.
But it’s also rather easy to fix, as long as 1) you let me know, and 2) you send me the culprit device.
I have now fixed the issues (for the next release), and you’ll be able to load Alveoli without trouble.

Best,
T.

Hi thibaut,

You may not remember, but you have already seen this device several months ago. You told me about the workaround. :)

Glad to see that everything will work soon

Best,

Jerome

hi Jerome,

I took a specific path: it does not work with multiple Spats. I only use one Spat, nested in one engine that operates as the core of the system in terms of audio routing and messaging. This reduces the options for managing sources with different pan types together, but in my community this is not that important anyway. In 99% of practical cases, artists (including myself) play with only one pan type for the entire scene and are looking for an environment that is easy to set up for live performance and easy to adapt to different contexts. I had to prioritize what was most important to have: a creative platform that does well what it does, or an environment covering all the cases but hard to implement in Live, even though it would be used by only 1 person in 100. I made a choice here. Anyway, your own initiative with your project offers the panoramix-based alternative, which could be the solution for such cases.
This gave me more CPU flexibility in Ableton, but it also made the whole code much more complex, because with only one Spat you need a way to manage the syntax system perfectly. Just to give a picture of the work: I now have a system based on probably more than 50 abstractions (I don’t have the exact count) just for the spat-for-sat part, without counting the spat5 dependencies. I also don’t use spat.oper but its little brother oper_ to address messages (such a useful object, thx T!), and I replaced the oper GUI with a Jitter-based application, which also gives me more flexibility in the future for sound-field representation, which was at the core of my residency project.
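To make this “one Spat, many sources” syntax management concrete, here is a toy Python model of what such a router has to do: each source device claims a slot in the single shared engine, and its messages are rewritten into per-source spat5-style OSC addresses. The device ids, the slot logic, and the 8-source cap (mentioned earlier in the thread) are assumptions for illustration only; spat5’s real OSC syntax is far richer.

```python
class SpatEngineRouter:
    """Toy model of a single spat5 engine serving several M4L source devices."""

    def __init__(self, max_sources: int = 8):
        self.max_sources = max_sources
        self.slots = {}  # device id -> 1-based source index in the shared engine

    def register(self, device_id: str) -> int:
        """Give a device a stable source slot, failing once the engine is full."""
        if device_id in self.slots:
            return self.slots[device_id]
        if len(self.slots) >= self.max_sources:
            raise RuntimeError("all source slots are in use")
        self.slots[device_id] = len(self.slots) + 1
        return self.slots[device_id]

    def rewrite(self, device_id: str, address: str, *args):
        """Rewrite a per-device message into a /source/<n>/... engine message."""
        n = self.register(device_id)
        return ("/source/%d%s" % (n, address),) + args

router = SpatEngineRouter()
print(router.rewrite("violin", "/xyz", 1.0, -0.5, 0.0))  # -> ('/source/1/xyz', 1.0, -0.5, 0.0)
print(router.rewrite("catart", "/azim", 45.0))           # -> ('/source/2/azim', 45.0)
```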

I will share the demo video here next week,


Hi Fraction,

I can see better where you’re going with this, and it seems very interesting to use Jitter… If you have a system ready, that’s great… In the long run, do you plan to share your work? Or make a standalone? It is true that I am staying with PanoLive, which has already been tried out on stage (indoors and outdoors) with success… So an alternative had to be different, and you did it!

I can’t wait to see more…

Best,

Jerome

Yes, definitely, I will share the work. I am thinking at the moment about how I am going to manage this. But first of all I need to stress-test it to evaluate its potential and limits. It is a humble step, but after already using it in my studio (quad), it is convenient enough to raise some interest, especially for ‘generic’ cases or non-advanced users (the others have Max, right?).

My focus was initially on the Jitter environment, where I think there is a lot of potential for spatial sound, to make both creative stuff and useful interfaces for users. It ended up that I put a huge amount of work into the M4L set of tools to make it dynamic enough to use Ableton as the “sound” platform for my project. Anyway, it won’t be complicated to move it back into a pure Max environment if necessary for large-scale projects (amxd devices in particular are straightforward to load in a Max environment).