Hi,
My question is specifically about the reverse of audio spatialization; one could say it is about forensics rather than production. It is naive and somewhat rough around the edges, but here goes:
Assume we are speaking of a recording of an actual live event in a physical setting. How far can we go in assessing or reconstructing the actual spatial setup of that event from the final audio render alone? And how does our ability to assess it change depending on whether the final render is monophonic, stereophonic, and so on?
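To make the question a bit more concrete, here is the kind of analysis I have in mind for the stereophonic case: estimating the inter-channel time difference of a stereo render via cross-correlation, which (under a simple spaced-pair microphone model) hints at the lateral position of a source. This is just a sketch; the signals, delay, and two-microphone assumption are all illustrative on my part:

```python
import numpy as np

def estimate_delay(left, right, fs):
    """Estimate the inter-channel time difference in seconds via
    cross-correlation. A positive value means the right channel lags
    the left, i.e. the source was nearer the left microphone."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

fs = 48_000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)  # one second of noise as a stand-in source

delay = 12  # samples of lag applied to the right channel (illustrative)
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])

itd = estimate_delay(left, right, fs)
print(f"estimated inter-channel delay: {itd * 1e6:.0f} microseconds")
```

Of course, a real render would also fold in level differences, reverberation, and whatever panning the mix applied, which is exactly where my uncertainty about "how far can we go" comes from.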
Any hints, insights, or free-floating thoughts on this issue would be much appreciated.
All the best,
António