Hello everyone,
I aim to use the AirPods Max and their head-orientation sensor data in a setup where footstep sounds are augmented and moved three-dimensionally in real time, driven by live footstep coordinate input, ideally sounding as "real" as possible.
I tried doing it with Logic Pro: live head tracking and live object panning work, but Dolby Atmos spatialization is limiting when it comes to vertical object placement, and Logic Pro itself when it comes to space augmentation.
Did I understand it right that Max/MSP with Panoramix or PanoLive is the recommended Spat way of approaching this?
My Max/MSP knowledge is very basic and my Spat knowledge is non-existent so far, but I'm eager to dive into both.
(Getting the AirPods head-tracking data stream into Max/MSP should be possible by running an iOS SDK test app on an Apple Silicon Mac; a rough sketch of that part follows below.)
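For that part, here is a minimal Swift sketch of what I have in mind, assuming the app runs as an iOS build on an Apple Silicon Mac: it reads the AirPods head orientation via CoreMotion's CMHeadphoneMotionManager and forwards yaw/pitch/roll as an OSC message over UDP to a Max patch. The OSC address /head/ypr and port 7400 are arbitrary placeholders I chose for illustration, not anything prescribed by Spat or Max.

```swift
import CoreMotion
import Foundation
import Network

// Encode a minimal OSC message: padded address, type-tag string, big-endian float args.
func oscMessage(_ address: String, floats: [Float]) -> Data {
    func padded(_ s: String) -> Data {
        var d = Data(s.utf8) + Data([0])              // null-terminated string
        while d.count % 4 != 0 { d.append(0) }        // pad to a 4-byte boundary
        return d
    }
    var data = padded(address)
    data += padded("," + String(repeating: "f", count: floats.count))
    for f in floats {
        var be = f.bitPattern.bigEndian
        data += Data(bytes: &be, count: 4)
    }
    return data
}

// UDP connection to the Max patch (assumed: [udpreceive 7400] on the same machine).
let connection = NWConnection(host: "127.0.0.1", port: 7400, using: .udp)
connection.start(queue: .main)

// Stream head orientation from the AirPods (requires NSMotionUsageDescription
// in Info.plist and the AirPods connected as the audio output device).
let motion = CMHeadphoneMotionManager()
guard motion.isDeviceMotionAvailable else {
    fatalError("Headphone motion data not available")
}
motion.startDeviceMotionUpdates(to: .main) { deviceMotion, _ in
    guard let att = deviceMotion?.attitude else { return }
    // Yaw / pitch / roll in radians, sent as /head/ypr <f> <f> <f>
    let msg = oscMessage("/head/ypr",
                         floats: [Float(att.yaw), Float(att.pitch), Float(att.roll)])
    connection.send(content: msg, completion: .idempotent)
}

RunLoop.main.run()   // keep the test app / command-line process alive
```

On the Max side I imagine a [udpreceive 7400] followed by whatever OSC routing objects are appropriate, with the angles then driving the listener orientation in Spat, but I'd welcome corrections on whether that is the sensible route.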
Any hint would be very much appreciated.
Thanks a lot!