I am 100% sure, that given time these series of videos will become a standard in explaining these concepts and will have millions of views. That's how good they are.
Thank you for your confidence, though I may not share the same views 😅
I've already watched dozens of videos about audio, but this series is the best one, mate. Keep up the good work!
Really glad you think so! More on the way.
You have a talent for explaining concepts clearly and accurately including the use of visuals. Kudos! Don’t stop
Man, these video series are MUST WATCH.
Thanks for your efforts!
Please know that even though views are low, the quality is great and the way you explain is easy and simple: all on point and grounded in experiments.
Thanks for the feedback mate!
I am so glad you created this series on digital audio. Thank you very much!
Thanks for checking it out mate!
Thank you so much for these videos. I'm studying for finals for Sound in TV Audio and your videos are so well made. The diagrams and animations paired with your calm voice--you explain really well.
Thanks very much! All the best with your finals!
Below 4 bits: lowest resolution (even more error, sounds like heavy rain).
8 bits: lower resolution (more error, white noise).
16 to 24 bits: higher resolution (negligible error, much clearer audio).
Thank you very much for this explanation of digital audio!
I've been busy with sampling since the COVID-19 pandemic, so that's quite a while now. For sampling in particular, the more you understand how digital sound works, the better and more efficiently you can work.
The visualization, the clear explanations, and the examples are of the highest quality. I'm just a mere producer, but I could grasp the concepts.
Elite content 👌🏾
@@BenCaesar thanks very much mate! A bit self-deprecating about being a producer though. That stuff ain't easy.
I'm a musician/engineer and I was trying to study floating point calculations. That's how I came to your channel. Your series on IEEE 754 was amazing, but your knowledge of digital audio principles is mind-blowing!!! Bravo!!!
Just wanted to point out one thing: when you tried to illustrate how to extract the quantization noise by shifting the phase, you should have instead flipped the POLARITY of one of the waveforms (turned it upside down about the horizontal axis). This is a common misconception regarding phase shift vs polarity flip. I hope it was simply an oversight in an otherwise almost flawless series.
Thank you for checking out the series, and letting me know!
You are totally right, I should've said 'invert' or 'switch polarity' when describing the subtraction process. Phase shifting to achieve subtraction would only work on simple sinusoids.
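As a toy illustration of that correction (this is not the code from the video; the quantizer below is my own minimal mid-tread sketch), subtracting the original from the 4-bit version isolates the quantization noise. Flipping polarity and summing is exactly sample-by-sample subtraction, which works for any waveform, whereas a phase shift only lines up for a simple sinusoid:

```python
import math

def quantize(x, bits):
    """Round a sample in [-1.0, 1.0] to the nearest of the quantizer's levels
    (a simple mid-tread scheme, used here only for illustration)."""
    levels = 2 ** (bits - 1)              # grid steps on each side of zero
    return round(x * levels) / levels

# One cycle of a sine wave, 64 sample points
original = [math.sin(2 * math.pi * n / 64) for n in range(64)]
quantized = [quantize(s, 4) for s in original]

# To isolate the quantization noise, flip the POLARITY of the original
# (multiply by -1) and sum, i.e. subtract sample-by-sample. A phase shift
# would only line up correctly for a simple sinusoid.
noise = [q - o for q, o in zip(quantized, original)]

# With 4 bits the error never exceeds half a quantization step
step = 1 / 2 ** 3
assert max(abs(e) for e in noise) <= step / 2 + 1e-12
```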
This is a goated video. Thank you, man. This is incredibly helpful. It took me a few other videos to find this one, but this series was exactly what I was looking for. You explained the "why" for everything, and the visuals that go along helped to explain it seamlessly.
Sweet! Thanks for checking it out!
Doing amazing work, Akash. I hope it reaches as many "audiophiles" as possible. You do a terrific job at undermining ignorant assumptions in a very polite way. Appreciate the work.
@@iggynub thanks very much! I generally try to avoid the task of dispelling myths and instead talk about the science, and let people make up their own minds.
amazing, just pure brilliant, you explain it so so well sir
4 bits actually sounded way better than I expected... great video as always
Haha! It was atrocious at loud volumes! 😂 Thanks mate!
Most cassettes and top-quality tapes sounded around that range.
Fantastic explanation. Very well done.
Akash! This is amazing! Phenomenal job on simplifying these complex concepts and making them easier to understand. I am an engineer and still find audio engineering quite confusing until I watched your video! Now, I find these terminologies a lot easier to understand. Thanks!
That's great to know that these are helpful! I struggled with audio terminologies as well at the beginning.
Crystal clear... only a handful of people explain like this.
You are the best.😍😍😍😍😍😍
Thanks very much!
Sir, This is a terrific Channel. I share it with my students and colleagues, as your explanations are far better than mine :)
I would love to hear your input on floating point arithmetic. and more distinctly - its influence on quantization and noise floor. It becomes instrumental when discussing audio record formats, and internal audio engine processing (which go up to 64bit float, what for?). There's also the arithmetic "sublimation" that occurs when your converter samples (and interpolates) at 24bit int, but your DAW's signal flows at 32bit float - what exactly happens there?
Excellent suggestion! I have floating point bit depth on my to-do list!
What a very well explained topic about digital audio, excellent!
Thanks so much!
Thank you so much, you are truly amazing. You've helped me understand this subject better than any teacher ever did.
thank you for these amazing videos
Thanks you very much! :) I'm glad these videos helped
Excellent explanation of why quantisation error, and therefore bit depth, is directly related to noise. Roughly 6 dB of SQNR per bit (more precisely, SQNR ≈ 6.02 × bit depth + 1.76 dB for a full-scale sine).
That was so Good!!! Thank You 😄
Glad you liked it!
Great material, but... when extracting quantization noise from a 4-bit signal (8:00), for the effect we hear you should subtract the original signal from it, not vice versa, as the presentation suggests.
I can't find the words to tell you how amazing this content is. Thanks!
Thanks for checking it out!
The best video tutorial ❤
This is the best video that I have seen on this matter. Very informative and very well explained. Thank you very much!! 🙂👋
Thanks so much! :) I'm glad you find these useful!
Loved this, as well as the other videos. Very informative :D
Cheers! Thanks for checking it out.
@@akashmurthy Can you do a video explaining 32-bit float separately?
@@rohanbenny8632 That's a good point. I'll add information about 32 bit floats when I make a video on Headroom. Thanks!
These are brilliant! Thank you for the time and effort you've put into this. I watch because I love audio and love when I meet people with a better way of explaining and understanding than myself. Can I ask what software you've done the graphics in? I make this style of learning videos myself and am always wanting to learn new methods.
Thanks for checking it out! I'm always on the lookout for a better, more intuitive way of explaining a concept as well! Most of the animations are done on After Effects. I'm curious to see your videos.
So good I wish I could like it twice !
Sir, you are amazing. I got to know about you through Sreejesh Nair sir's post, and the way you clarify topics is awesome. Thank you so much for this, sir 🙏🙏 Keep uploading knowledge, sir!
Thanks very much. Yes, Sreejesh has been very kindly sharing this.
This is the best. Thank you.
Thanks! And you're welcome.
Thanks for this video series.......❤️
You're welcome! :)
How do you make these animation videos?
Adobe After Effects
This is AAA content
So when I used to sample with my 8-bit Ensoniq Mirage sampler and I got that aliasing crunch, was it the bit depth or the sample rate that was giving that character? Probably both.
Bit depth wouldn't cause aliasing as such; it would only introduce quantisation error. It would make it sound noisy, or even introduce inharmonic distortion at times (similar to aliasing).
But fundamentally, the way the noise is formed or added is different for aliasing distortion vs quantisation distortion through low bit depth.
@@akashmurthy that makes sense. Back in the 90's we would seek out that crunch in the samples we used in our music. We liked the patina. While we called it "aliasing" at the time, we probably were hearing quantization distortion. Good stuff. Love this series.
Hi, thank you for showing us how to quantize 16-bit down to 4 bits, but do you also know the commands to convert audio to 2 bits and 1 bit?
That would be awesome to do some experiments with it😁
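The exact tool commands aren't named in the thread, but the requantization step itself is tool-independent. Here is a minimal Python sketch (the function name and the mid-tread level scheme are my own, for illustration only) that crushes normalized samples down to any bit depth, including the 2-bit and 1-bit extremes:

```python
def bit_crush(samples, bits):
    """Requantize floating-point samples in [-1, 1] to the given bit depth."""
    if bits < 1:
        raise ValueError("need at least 1 bit")
    if bits == 1:
        # 1 bit leaves only the sign: every sample snaps to +1 or -1
        return [1.0 if s >= 0 else -1.0 for s in samples]
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in samples]

ramp = [n / 10 - 1 for n in range(21)]        # -1.0 ... 1.0 in 0.1 steps
print(sorted(set(bit_crush(ramp, 2))))        # 2 bits -> -1.0, -0.5, 0.0, 0.5, 1.0
print(sorted(set(bit_crush(ramp, 1))))        # 1 bit  -> just [-1.0, 1.0]
```

At 1 bit only the sign of the waveform survives, which is why those experiments sound so extreme.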
The 4-bit resolution was able to relay human speech rather well. I wonder if you could reduce 16-bit audio for educational lectures or radio down to 4-bit to allow faster communication, then design a relatively small neural net to clear up the noise and regenerate a semi-accurate 16-bit signal similar to the original. The model could easily be extremely small in size; if it could be done, the range for radio could nearly be increased by 4x.
The 4-bit demo sounds very similar to tuning into a radio station when you're not exactly on station: all you hear is noise along with some recognizable audio.
Hey! Great Video! Can you make a video on phase? To be precise, EQ and Phase response of a signal and how linear phase eqs overcome that. That's one thing I'm struggling to understand.
Hey, thanks! Interesting suggestion. I didn't know there were linear phase EQs specifically designed to combat this problem. I'm going to be doing a module on filters later on. Most filters we use today are IIRs (infinite impulse response). But to maintain the same phase relationship we would need FIR (finite impulse response) filters, which are computationally more expensive. I'll be sure to address your question there. But I'm not sure how long it'll take to get there!
You stated in one of my other comments that there is no relationship between dBFS and dBSPL, but as you are comparing the "SQNR in dB at 16 bits" to "driving the track up to 96dB to hear the noise" in this video, it seems like you equated -96dBFS to noise recognized at 96dBSPL, which sounds like an equal but inverse relationship between dBFS and dBSPL. Can you clearly define this relationship? And would it be a different ratio for higher bit depths like 24 or 36? I ask this because it would be nice to know if I can take the dBSPL scale and all of its reference markers (like "60dB is a normal conversation") and flip it to use as a sort of guide for mixing in dBFS, where the maximum level is 0dBFS instead of the dBSPL maximum of 130.
I've mentioned a part of this in a comment on an earlier video. Consider this scenario: you have 2 monitors connected to your mixer, one a really small, low-powered monitor, and the other a large stage monitor. If you boost the gain on your mixer by +6dB, the pressure levels measured from each of these monitors are vastly different. So there is no easy relationship between an arbitrary reference level (dBFS) and physical pressure (dBSPL). It'll have to be dependent on the speaker source.
dBFS is not measuring loudness. It's a reference level used when producing music, for comparing levels of different instruments. I can say that vocals are at 0dBFS, and guitars are 10dB below that. That doesn't mean anything in terms of SPL. If you play this recording loudly, you can have it play at 80dBSPL, or you can play it at low volume at 40dBSPL. The only thing you can say with some accuracy is the relative levels. So in the mixed track, if vocals are played at 80dBSPL, then guitars appear at 70dBSPL; similarly, if vocals are played at 40dBSPL, the guitars appear at 30dBSPL. The relative dBFS levels and relative dBSPL values can match (with an ideal speaker).
So when I'm talking about SQNR here, I'm talking about the dynamic range: a range of values from the maximum signal level that can be represented down to the minimum signal level before it merges into noise. This is relative! The higher the bit depth, the higher the RANGE between the 2 levels. So, say you took a 16-bit recording which fully utilises the entire dynamic range, and you play it back at around 100dBSPL (there is no relationship here, I'm just cranking up the volume until the measuring instrument records a peak of 100dBSPL). All I was saying was that the noise from quantization would be present at 4dBSPL, since 16-bit has 96dB of SQNR, a relative range between the highest possible level and the low-level noise. Hope that helps.
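The 96dB figure in the reply above falls out of a one-line calculation: each extra bit doubles the number of amplitude levels, which adds about 6.02dB of range. A quick sanity check in Python (just the formula, nothing video-specific):

```python
import math

def sqnr_db(bits):
    """Signal-to-quantization-noise ratio from the level count alone:
    20*log10(2**bits), i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (4, 8, 16, 24):
    print(f"{bits} bits -> {sqnr_db(bits):.1f} dB")
# 16 bits -> 96.3 dB, 24 bits -> 144.5 dB
```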
Where does the noise come from?
Thank you for this video! I enjoyed your other videos a lot as well. Could you please also make videos about FFT and LPC (their principle; their application, etc...)? Thank you!
Thanks very much for checking out the series! Yes, I want to do those topics in the future, but I have a few other topics that I feel like I need to get to before that..!
thank you!!!!!!
dBFS, dBSPL, dBA are different things!
But this is a pretty good explanation of bit depth still.
Good work. Thank you.
Thanks for watching!
You said 24 bits of resolution takes the noise floor to -144dBFS, so I'm wondering, what is the point of having 32 bits? 🤔
Great question! So, 24 bits of "fixed point" audio is all you need when you're playing back audio, when you don't have to manipulate the audio in any way and can just play it back as it is. Usually consumer audio is never stored at a bit depth greater than 24-bit, because of the reason you mentioned: there is no need for it.
But when you need to do signal processing on the audio, for example: when you want to change the gain, apply filtering, or convolution, or any other type of effects, you have to do mathematical operations on the audio data. If you're manipulating audio data stored in "fixed point" format, each mathematical operation, like multiplication or division has a little bit of error associated with it. This is because of the nature of how data is represented in fixed point format, and the limited precision associated with it. With n-order filtering, the signal is fed back into the input, several times, and this error could accumulate over time. This could result in potentially more noise.
Because of this, audio programmers - both DSP hardware and computer software programmers use "floating point" representation of data. Because of how it is represented, the floating point data has inherent mechanisms to cope with error, the accumulation of error is minimised. And according to the IEEE standard, a floating point number is generally 32bits or 64bits. And that's why, when you're recording audio into your DAW software, the DAW software will almost always represent your audio data in 32bit float or 64bit float, and never usually in fixed point representation.
I'm working on videos at the moment regarding floating point vs fixed point data, and their pros and cons. Hopefully I can finish them soon.
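The accumulation argument above can be seen in a tiny, hypothetical round-trip experiment (the helper and the gain values here are made up purely for illustration): repeatedly apply a gain and then undo it, once forcing the result back onto a 16-bit integer grid and once in floating point.

```python
def int16_gain(sample, gain):
    """Fixed-point-style gain: the result must land back on an integer grid,
    so every operation rounds and the error sticks."""
    return max(-32768, min(32767, round(sample * gain)))

fixed = 12345          # a 16-bit sample value
floating = 12345.0
for _ in range(100):
    # gain down by 0.7, then back up: a lossless round trip in exact math
    fixed = int16_gain(int16_gain(fixed, 0.7), 1 / 0.7)
    floating = (floating * 0.7) * (1 / 0.7)

print(fixed)      # typically ends a step or two away from 12345
print(floating)   # stays within a tiny epsilon of 12345.0
```

This is the "error could accumulate" point in miniature: in real n-order filters the feedback path repeats such rounded operations many times per sample, which is why DAWs process in 32-bit or 64-bit float internally.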
@@akashmurthy So cleaaar ! Thank you so much for the time you take to answer to my question ! I’m definitely impressed by the quality of your work bro ! 🙏🙏
I don't believe we can move the noise out of the "audible" range in all cases without it affecting your sound quality, because if your speakers respond to that signal, your tweeter will never truly settle. You will essentially move the noise into the air around the tweeter, and there it will affect audible resolution by affecting its impedance. The same goes for moving it toward the bass: the woofer will never steady, meaning your timing will be off enormously in the woofer range. This is why some very expensive systems all have multiple woofers, so that the extension required for the same SPL per speaker is much smaller and the woofer returns to dead center much quicker after producing a sound. It's not just about the constant noise floor averaged over time, it's about the instantaneous noise dynamically interfering with your speaker response. Sure, we can reduce the file sizes, but you'll hear it clear as day on a 100K+ system, and when I say clear as day, I mean a difference as large as between a Bluetooth speaker and an entry-level hifi system.
In the 1990s a revolution happened when sound cards were suddenly able to record at 16-bit resolution. I remember my old Gravis Ultrasound could not; it was limited to 8-bit 44kHz and it sucked big time.
How is this done electronically? How do electrical/electronic engineers design such circuits?
Logic gates perhaps.
This content is bang on! Consider me subbed.
Cheers mate!
👍
1 Bit - 2 Amplitude Levels - Minimal Quantiz
2 Bit - 4 Amplitude Levels - Super Low Quantiz
3 Bit - 8 Amplitude Levels - Very Lower Quantiz
4 Bit - 16 Amplitude Levels - Very Low Quantiz
5 Bit - 32 Amplitude Levels - Very Lowean Quantiz
6 Bit - 64 Amplitude Levels - Lower Quantiz
8 Bit - 256 Amplitude Levels - Low Quantiz
10 Bit - 1'024 Amplitude Levels - Lowean Quantiz
12 Bit - 4'096 Amplitude Levels - Lower Mid Quantiz
16 Bit - 65'536 Amplitude Levels - Medium Quantiz
20 Bit - 1'048'576 Amplitude Levels - Mean Quantiz
24 Bit - 16'777'216 Amplitude Levels - Average Quantiz
32 Bit - 4'294'967'296 Amplitude Levels - High Quantiz - Big CPU Load
40 Bit - 1'099'511'627'776 Amplitude Levels - Higean Quantiz - Large CPU Load
48 Bit - 281'474'976'710'656 Amplitude Levels - Higherage Quantiz - Gross CPU Load
64 Bit - 18'446'744'073'709'551'616 Amplitude Levels - Super Quantiz - Grand CPU Load
80 Bit - 1.208'926e24 Amplitude Levels - Very Higean Quantiz - Huge CPU Load
96 Bit - 7.922'816e28 Amplitude Levels - Very Higerage Quantiz - Massive CPU Load
128 Bit - 3.402'824e38 Amplitude Levels - Very Super High Quantiz - Giant CPU Load
160 Bit - 1.461'502e48 Amplitude Levels - Titanic CPU Load
192 Bit - 6.277'102e57 Amplitude Levels - Colossal CPU Load
256 Bit - 1.157'921e77 Amplitude Levels - Hyper High Quantiz - Extreme CPU Load
I'm learning slowly.
In terms of programming bit depth is very confusing. It's basically the max and min value of a sample point. Where one sample point of 16bit audio file is between + or - 65,536
It's actually ±2^(16 - 1)
So, -32768 to +32767
But generally, in programming, anywhere outside final delivery, audio sample points are handled in floating point precision ( + 1.0 to - 1.0 )
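That float convention can be made concrete with one common mapping (dividing by 32768; note that other conventions exist, and this snippet is only an illustration of this one):

```python
INT16_MIN, INT16_MAX = -2 ** 15, 2 ** 15 - 1   # -32768 .. +32767

def int16_to_float(sample):
    """Map a signed 16-bit PCM sample into the conventional [-1.0, 1.0) range
    by dividing by 32768 (one common convention among several)."""
    assert INT16_MIN <= sample <= INT16_MAX
    return sample / 32768.0

print(int16_to_float(INT16_MIN))   # -1.0
print(int16_to_float(INT16_MAX))   # 0.999969482421875 (just shy of +1.0)
print(int16_to_float(0))           # 0.0
```

The slight asymmetry (exactly -1.0 is reachable, exactly +1.0 is not) is inherited from two's complement integers, which is one reason floats are more convenient for processing.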
Amazing channel.... subscribed, heck, I even sent you a Facebook friend request... I'd love to hire you to optimize my home audio system but it appears you don't live in the US!
Haha, thanks for checking it out! I'm not a professional audio technician by any means! But thanks for your confidence.
First ... Principles
Without principles, we are animals.
Actually, 4 bits = a Nibble. FYI
Staircase? It doesn't work that way.
You have a talent for explaining concepts clearly and accurately including the use of visuals. Kudos! Don’t stop