Since questions on this subject often come up around me, I had started working on an article, and thought you wouldn't mind some extracts from it. I approach this from the angle of Spat Revolution, the standalone application (and integration plugin suite) built around Spat, and I'll give my perspective on it.
Indeed, Dolby Atmos Music and the online distribution services for independent artists have triggered some conversations lately, all the more so since Dolby is putting a focus on music.
At its base, and in my view, Dolby Atmos is more of a proposed workflow (with selected DAW integrations, where Spat intends to offer wider options with more plug-in formats) and a specific deliverable for Dolby Atmos arrangements. Granted, part of their offer is the ability to create an ADM master, a recognized proposed standard that is agnostic of the actual playback format (the object-based mix). More on this later.
Dolby (and others) make a commercial proposal where much has to do with licensing (for example, streaming/distribution services must be licensed to deliver Dolby content). Sony Music has its own approach here as well for delivering Sony 360 content on various platforms. This licensing is the key spin on these proposals.
Major differences can be found between Spat and the Dolby Atmos production tools, although I wouldn't bluntly say that one sounds better than the other. Let's just say that 30 years of Ircam expertise and research make Spat a very high-level technology. The two are simply different beasts; one beast is just more agile than the other. I'll let you conclude which one.
The Dolby Atmos production tools are ultimately an object-mixing renderer that outputs a specific family of speaker arrangements (Atmos), whereas Spat gives you full flexibility here: from pre-defined common arrangements to completely custom ones, with 64-channel support in Spat Revolution. This simply means that you can create for various deliverables, up to a custom immersive installation.
Monitoring your object-based mix in a different environment than the deliverable is quite possible with Spat (when you have a smaller or larger monitoring setup). For example, you can monitor on your available system (in a separate virtual room with a different format but the same object mix) while rendering your actual deliverable in parallel (an Atmos mix or anything else you need). Monitoring also brings the need, or the want, to virtualize on headphones any speaker arrangement you are creating for; this is fully flexible with the binaural monitoring in Spat.
We can also highlight more possibilities with binaural overall, such as dealing with HRTF libraries, personalized HRTFs, and various binaural modes, some not based on HRTFs, such as the snowman and spherical head models. Dolby does offer some binaural implementation too.
My understanding and experience are that Dolby doesn't use a VBAP panning algorithm; rather, they go for something more forgiving, such as a layer-based approach (LBAP in Spat). When dealing with dynamic objects and elevation, the triangulation in VBAP can sometimes be challenging with non-uniform systems, and a layer-based approach can be more forgiving there. In any case, all of these panning options are found in Spat, whereas Dolby offers one fixed panning algorithm. As mentioned, the two may not sound exactly the same.
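To make the distinction concrete, here is a minimal sketch of the layer-based idea (my own illustration, not Spat's or Dolby's actual implementation): pan by azimuth within each speaker ring using constant-power pair-wise gains, then crossfade between the ear-level and height layers according to elevation. The ring layouts and the 45-degree height-layer elevation are assumptions for the example.

```python
import math

def ring_gains(azim, spk_az):
    """Constant-power pair-wise gains for a source at `azim` degrees
    across one ring of speakers at azimuths `spk_az` (sorted ascending)."""
    n = len(spk_az)
    gains = [0.0] * n
    a = azim % 360.0
    for i in range(n):
        lo, hi = spk_az[i], spk_az[(i + 1) % n]
        span = (hi - lo) % 360.0 or 360.0
        off = (a - lo) % 360.0
        if off <= span:                          # source sits between this pair
            t = off / span
            gains[i] = math.cos(t * math.pi / 2)
            gains[(i + 1) % n] = math.sin(t * math.pi / 2)
            break
    return gains

def lbap_gains(azim, elev, main_az, height_az, height_elev=45.0):
    """Layer-based panning: azimuth panning inside each ring, plus a
    constant-power crossfade between the two layers driven by elevation."""
    t = max(0.0, min(1.0, elev / height_elev))   # 0 = main ring, 1 = height ring
    main = [g * math.cos(t * math.pi / 2) for g in ring_gains(azim, main_az)]
    height = [g * math.sin(t * math.pi / 2) for g in ring_gains(azim, height_az)]
    return main, height
```

The point of the sketch is that only azimuth ordering within a ring matters, so an irregular speaker layout never produces the skewed triangles that can make VBAP behave unevenly on non-uniform systems.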
We can say that Dolby is a panning tool, whereas Spat also welcomes room acoustic simulation (reverberation) in the virtual spaces where the object mix is done. Much of this is about how sound is perceived in the real world, and the room effect of Spat reinforces localization, among other things (simply think about the possibility of localizing the simulated early reflections of each source object). Spat also goes deeper into source/object properties (perceptual parameters and many other options beyond object position), and you can use as little or as much of this as you want.
Ultimately, Spat will let you create any deliverable: an Atmos mix, a Sony 360 mix, a DTS mix, a dome sound installation, or, for that matter, a scene-based Ambisonics deliverable distributed to be decoded down the line.
Talking about agnostic deliverables (channel-based or renderer-agnostic), Dolby supports exporting an ADM master from your audio creation workflow. This means that you can, for example, import this object-based mix (an ADM BWF .wav audio file with metadata) into a tool such as the Ircam ADMix player and render the various formats with the various technologies of Spat. The OSC integration in this player allows driving the objects (dynamic or static) in an external renderer; our team has done this with Spat Revolution.
Thanks to the ADM-OSC initiative, which I have had the pleasure to be part of, some DAWs now support the integration of an external renderer. This was added by Nuendo and Merging Technologies at the end of last year, and it is expected to grow. With this, a renderer such as Spat Revolution has true bidirectional communication with the DAW panner, and since a few DAWs can import ADM masters, you end up with an object-based mixing environment that is renderer and format agnostic (even if the original creation was made in the Dolby environment). This initiative started on the live production side of the immersive wave, from audio capture to live broadcast workflows (where OSC is already the main protocol in the industry), building ecosystems from creators to diffusion.
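As a small illustration of the kind of traffic involved, here is a sketch of encoding and sending one ADM-OSC style position update using only the Python standard library. The `/adm/obj/1/azim` address follows the published ADM-OSC dictionary as I understand it, and the localhost host/port are assumptions for a renderer listening for OSC; check your renderer's actual OSC input settings.

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message whose arguments are all float32."""
    def pad4(b):                      # OSC pads strings to 4-byte boundaries
        return b + b"\x00" * ((4 - len(b) % 4) % 4)
    msg = pad4(address.encode("ascii") + b"\x00")          # address pattern
    msg += pad4(("," + "f" * len(floats)).encode("ascii") + b"\x00")  # type tags
    for f in floats:
        msg += struct.pack(">f", float(f))                 # big-endian float32
    return msg

# Move object 1 to azimuth 30 degrees (address per the ADM-OSC dictionary).
packet = osc_message("/adm/obj/1/azim", 30.0)

# Hypothetical target: a renderer listening for OSC on localhost, port 9000.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

In practice the DAW panner emits these messages continuously as objects move, which is what gives the bidirectional link between the panner and the external renderer.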
My two cents on the matter
@holto , I am truly interested to hear about your work here. Maybe I can share some more of the ADM-OSC initiative with you.