Thanks David, you are good at… “givin’ good guidance”… thanks Bro
Great, well explained.
Super great video, Dave. I was really adamant about using object beds for discrete locations, and I see that isn't as important. I still hope to see more listening parties held in movie theaters. Maybe even get some live performance theaters with the potential for immersive sound, kind of like the Sphere in Las Vegas and other venues that support immersive live performances. I wonder how this kind of thing will translate in the future with the Apple Vision Pro and future augmented reality headsets. Super neat to see how technology has evolved within the past five years. Glad to be here to witness it🙂
Thanks, Alex. I'm also curious about live performance since I've worked with other immersive systems for live sound. Right now the biggest issue I'd see with Atmos as I know it is the latency. I'm really curious what they're doing with the venue out in Vegas.
@@goingto11 I feel like there should be a relatively easy way to route audio to a separate processor that accounts for the latency, but that's only one factor. I was thinking a stereo PA for the band, and something like 7.1.4 playback tracks running in another DAW? Something I plan on looking into in the next few years.
I also forgot to ask about object sizes compared to an array, and maybe some compromise between binaural and speaker playback using the size parameter to spread into more speakers. It probably gets kind of messy with binaural, but it's something worth checking out!!
@@goingto11 Las Vegas uses a Holoplot X1 system which employs 3D Audio-Beamforming and Wave Field Synthesis. Quite a bit more advanced than Dolby Atmos.
This is great, David!!! It answers all the questions I had from the previous video. The big problem is that this is NOT how it works in any home system, even on a Trinnov. It SHOULD be the standard going forward. If the processors for home theater could do what the renderer does with arrays, it would solve SO MANY problems for people who use higher speaker count systems.
I think it's important to also understand that consumer delivery is never the FULL Atmos. There is always spatial coding happening that clusters audio into either 12, 14, or 16 objects/channels depending on how the bit budgeting is done, and LFE is one of those, so it's really more like 11, 13, or 15. That's why some of those "object viewers" people use for home theaters to see how "active" a soundtrack is aren't very accurate as to what is really happening in the mix.
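To put rough numbers on that, here's a toy sketch (my own illustration of the arithmetic, not Dolby's actual spatial coding algorithm) of how the element budget works out once LFE takes a slot:

```python
# Toy illustration of consumer Atmos element budgets.
# Consumer delivery clusters the full mix into a fixed number of
# elements; LFE occupies one of those slots, so the number of
# positional object/channel clusters is one less than the budget.

def positional_clusters(element_budget: int) -> int:
    """Return how many positional clusters remain after LFE takes a slot."""
    LFE_SLOTS = 1
    return element_budget - LFE_SLOTS

for budget in (12, 14, 16):
    print(f"{budget} elements -> {positional_clusters(budget)} positional clusters")
# 12 elements -> 11 positional clusters
# 14 elements -> 13 positional clusters
# 16 elements -> 15 positional clusters
```

So a viewer counting "active objects" on a consumer stream is really counting clusters, not the original mix objects.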
@@goingto11 Yes, for sure, but you can still see the things that are encoded as objects in the Trinnov viewer. Even if they're in clusters, you still see them represented, and it gives you a little counter of how many objects (or I should say object clusters) are active at that moment.
The fact remains that on a 7.1.4 home system an object or bed mix will sound basically identical minus the extra 2 heights, but on a 9.1.6 home system the bed layer will not engage the front and rear heights or the wides. The same goes if you added more side surrounds for multiple rows. I know theatrical won't engage the wides on bed only, but for home theater, the 6-height setup only engaging the middle 2 heights is a big problem for many, so much so that many discourage going to 6 heights for that reason. It's weird in that it's a problem that only affects the bigger budget systems and not the people who stick to 7.1.4. Usually it's the other way around: you get the short end of the stick if you don't spend the extra money.
So if sound is mixed into the bed arrays, there is only left-right panning across the top speakers in commercial theatres?
If you send to the height channels in the bed, you only have left and right in the array. If you use an object, you can pan through the entire array.
@@goingto11 Got it. Thanks 😊
In the renderer you're showing, it looks as if individual speakers that are part of an array can play individually. My understanding is that that's not how an array works.
From the Trinnov Altitude 32 processor manual (p. 80):
"...it becomes essential to be able to create an array of speakers: use several speakers to play the same channel. ... Similar configurations have been used in commercial cinemas for years."
In other words, when a group of speakers is set up as an array, they will play the same signal. They will never play individually. So that renderer is very confusing.
One of the early selling points for Atmos was it allowed mixers to "pan through arrays". If we are working with an object, we can pan to any location in the environment and the Dolby Atmos renderer will place it in the best speaker(s) to recreate that position based on the current speaker configuration. If we work within the bed in Atmos, it will utilize the arrays.
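The bed-vs-object distinction described above can be sketched roughly like this. The speaker names, positions, and the single-nearest-speaker panner are my own simplification for illustration; the real Dolby renderer amplitude-pans between neighboring speakers:

```python
# Simplified sketch of bed vs. object rendering into a side-surround array.
# Speaker names/positions are hypothetical; a real renderer pans between
# speakers rather than snapping to the single nearest one.

ARRAY = {"Lss1": 0.2, "Lss2": 0.4, "Lss3": 0.6, "Lss4": 0.8}  # name -> front/back position

def render_bed(channel_signal):
    """A bed channel feeds every speaker in its array with the same signal."""
    return {name: channel_signal for name in ARRAY}

def render_object(signal, y):
    """An object is placed in the speaker nearest its panned position,
    so it can 'pan through the array' as its position changes."""
    nearest = min(ARRAY, key=lambda name: abs(ARRAY[name] - y))
    return {nearest: signal}

# Bed: all four side-surrounds play the same thing.
print(render_bed("ambience"))
# Object panned to y=0.75 lands in Lss4 only.
print(render_object("flyover", 0.75))
```

That's why the renderer display can show individual array speakers lighting up for objects while the bed still drives the whole array as one channel.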
@@goingto11 Interesting. That would require a processor that knows how to differentiate between the two. But perhaps that is how Atmos-enabled processors work with arrays, compared to how arrays were set up and worked before Atmos?
In any Dolby format there is metadata, and that metadata should tell the device where to put things. I'm not exactly sure how that works in consumer devices, though, partially because I'm not sure exactly what is happening with the encoding for consumer formats (e.g. DD+, TrueHD, etc.).
@@goingto11 Yeah. That's constantly being tested by the home theater community, especially since the launch of the spatial audio calibration toolkit. That's how we've found out that some mixes don't render as we would want them to (e.g. 7.1.2 mixes on 7.1.6 setups), that different devices actually do things a bit differently, and that the user's settings can mess things up without them knowing (e.g. assigning speakers that are located as surround heights as "surround heights" in Denon/Marantz, which will make them completely silent with Atmos content).
@@isak6626 So you're saying that if someone has height speakers, they should be assigned as top speakers, not height?
Did you test it? Because as far as I know, Dolby Atmos supports height configurations, and many people are happy with that.