What are the benefits of recording 32-bit float vs 24-bit? As a voiceover actor, I'm always asked to record at 24-bit, but I would imagine having audio in 32-bit float would make it better for engineers to work with in post? Especially for voiceover, where they want the noise floor eliminated as much as possible. Or is the added information outweighed by the increase in file size? I'm admittedly an amateur/enthusiast when it comes to the engineering side of things, so I'm trying to learn more.
My understanding is that unless you need insane dynamic range in a recording, the main use is in processing: any level aliasing drops below the output bit depth, so processing functions can have huge intermediary level changes without data loss.
Love your work in Shikimori and as Bell Cranell. I find it super interesting that you're here learning about recording and mics. I'm assuming you record at home mostly now?
@Jonathan Dano I record a bit from home here and there. Most of my Crunchyroll work is in-studio, as I'm local to Dallas now, but I record the odd game from home, as well as ongoing remote work like Yu-gi-oh, which is based out of New York.
Bryson - I am in no way an expert in 32-bit float, but as Mijc mentioned, I’m sure the conversion noise floor will be next to non-existent. The only noise floors left would be preamp noise and room tone, so it essentially makes one noise factor irrelevant. The other big benefit that I imagine a lot of people will be running to 32-bit float for is the fact that it doesn’t clip at 0dBFS. If it’s recorded properly, you can recover audio that has exceeded that 0dBFS threshold in post (I demoed this at the beginning of the Rode NT1 5th gen video). Hope this helps.
It depends on whether you have a high dynamic range device with multiple staged preamps. Many devices just record in 24-bit integer then map that to 32-bit float; on those there will be little to no difference. But devices like the Zoom F3 and the Rode NT1 use multiple staged preamps, so you don't need to get your input levels exactly right. They record the full dynamic range and let you set your gain in post. If you are using one of those devices, engineers would probably appreciate you recording in 32-bit float.
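A minimal numpy sketch of the recovery being described, with made-up values; it assumes the float file really captured peaks above full scale and that the recorder's own analog stage never clipped:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
signal = 2.5 * np.sin(2 * np.pi * 440 * t)      # peaks near +8 dBFS, "too hot"

# Integer capture hard-ceilings at full scale, destroying the waveform tops:
as_int16 = np.clip(signal, -1.0, 1.0)

# Float capture keeps values beyond +/-1.0, so pulling the fader down in post
# restores an undistorted waveform:
as_float32 = signal.astype(np.float32)
recovered = as_float32 / np.abs(as_float32).max()  # normalize back under 0 dBFS

print("int16 path peak:", np.abs(as_int16).max())                   # 1.0, flattened tops
print("float path peak after gain-down:", np.abs(recovered).max())  # 1.0, intact sine
```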
Great video! Another potential benefit of 24-bit would be multitrack recordings. A modern song often has 100 or more audio tracks, so the noise floor gets compounded across all those tracks. 24-bit potentially keeps that compounded noise floor from becoming audible.
Wow, that really means a lot. Hindsight being 20/20, in the DAW I should have attenuated the signal even more so it was below the theoretical limit of the 16-bit recording to hear that, but I didn't! Oh well. Next time!
For those that haven't realized, analog decibel scales are different than the digital decibel scale, in a similar way to how Fahrenheit is a different scale than Celsius. IMO this is the SINGLE MOST IMPORTANT CONCEPT in digital recording. For example, Fahrenheit +32° is equivalent to Celsius 0° (a thirty-two degree offset). And your analog preamp will be running at approximately +18dB when your digital meter reads 0dB. Does analog gear SOUND GOOD when you run it at +18dB? Probably not, especially if your goal is maximum clarity. The implication is that you should set your analog gain stages to be in their normal range (around 0dB on the analog meter), which will produce a signal on the digital meter that shows -18dB (because there's typically an 18dB difference between the two, though some ADs are calibrated at a 16dB difference). This will also give you 18dB of HEADROOM for peaks. And once you use this method to get a perfectly clean signal into your DAW, you can then normalize your signal so that it's calibrated with whatever level your digital effects are expecting (that's for you to figure out). And finally, as your track is 'mastered', you can get on your tippy toes and reach for as close to 0dB as you can get, on the digital meter.
dB is a ratio, not a scale or unit. There is an incredible number of differently referenced dB scales even in the analog domain. All dB tells you is that the quantity is shown on a log10 scale. For ANY dB scale you need to know what it is referenced to.
@@mycosys Yes, my point is solely focused on the calibration level of an AD converter, which is typically called -18dB or -16dB. I suppose most people here will have their mind fully wrapped around this concept, but I posted in case just one person has never considered the gain staging implications, or is still falling for the 'using all the bits' type myths.
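As a sketch, the meter math that calibration implies; the -18 offset below is an assumed calibration constant, and as noted above, real converters vary (-16, -20, etc.):

```python
import math

def db(ratio: float) -> float:
    """Convert an amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

CALIBRATION = -18.0   # assumed converter calibration: 0 VU (analog) == -18 dBFS

def dbfs_to_analog(dbfs: float) -> float:
    # Reading on the analog meter for a given digital level
    return dbfs - CALIBRATION

print(dbfs_to_analog(-18.0))  # 0.0  -> preamp in its comfort zone
print(dbfs_to_analog(0.0))    # 18.0 -> preamp pushed 18 dB hot
```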
Working in 32-bit float on a 24-bit audio file allows you to make nondestructive edits. In other words, you can undo any edit you've made to the file (if the software allows it, of course). It actually saves space, compared to making a separate backup of the audio file after each major edit, as (prudent) audio engineers had been doing for years.
To anyone reading... that's editing in a 32-bit workspace, not acquisition of sound. Some DAWs default to this workspace and will then render to 16- or 24-bit audio as needed without issues. It's like editing a 4K video then reducing it to 480p to record onto DVD. No loss.
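A small numpy sketch of why the float workspace makes such edits reversible; the -12 dB fade is an arbitrary example gain:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)   # stand-in 16-bit take

gain_down = 10 ** (-12 / 20)   # a -12 dB fade

# Edit in a 32-bit float workspace, undo, then render back to 16-bit:
f = x.astype(np.float32)
roundtrip = np.rint(f * gain_down / gain_down).astype(np.int16)
print(np.array_equal(x, roundtrip))   # True: the undo was lossless

# The same edit done destructively in 16-bit integers loses the low bits:
destructive = np.rint(np.rint(x * gain_down) / gain_down).astype(np.int16)
print(np.array_equal(x, destructive))  # False
```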
Make a distinction between Peaks and RMS on the dBFS scale (VU is the RMS reference on the dBu scale). Peaks at the recording stage should not exceed -6dBFS and should not clip the red = 0dBFS. Certainly, in post-production you might use a DeClipper plugin and restore clipped peaks, but that is only for emergency rescue situations. All in all, a healthy signal on the way in for recording should sit between -21 and -18dBFS RMS (peaks in this case might get close to 0dBFS on some recorded instruments, but most likely they will float at about -6 to -3dBFS, which is perfectly great!). That is why VU meters usually get calibrated to 0VU = -18dBFS or 0VU = -21dBFS. For classical/acoustic instruments and orchestral ensembles, 0VU = -23dBFS (for such recordings, high-end audio interfaces are used, with extremely quiet self-noise in their converter chips and channels). This is all good practice for the recording stage.
For the pre-mix and mix stage, 0VU = -18dBFS is the usual norm. Here the audio/mix engineer would usually compress the peaks (for most modern music, besides acoustic/orchestral!!!) in order to boost the Volume (VU stands for Volume Unit) and get the mix to sound louder, attaching FXs and such processing. At this stage the average VU level of the mix might even get up to -15dBFS (hitting the "red" of the VU meter, which is acceptable) and the peaks get compressed to an optimal maximum of -6dBFS (or an extreme maximum of -3dBFS in the loudest percussion/drums/hits sections).
Then we go to the mastering stage, usually setting up a master bus compressor, saturation of the high-mids, a gentle EQ balance and a final limiter (with inter-sample/oversampling options) to achieve boosted levels of about -14LUFS (roughly 3dB higher than the RMS/VU equivalent, depending on the mix/genre) and peaks not exceeding -1dBFS. It is not rocket science, but it is a form of engineering. That is why it is called Audio Engineering.
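For anyone wanting to check their own takes against those numbers, a bare-bones peak/RMS meter in numpy (plain RMS only; true LUFS adds K-weighting and gating per ITU-R BS.1770):

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.max(np.abs(x)))        # x in -1.0..1.0 full scale

def rms_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

fs = 48_000
t = np.arange(fs) / fs
take = 0.125 * np.sin(2 * np.pi * 440 * t)         # a tone peaking around -18 dBFS

print(f"peak {peak_dbfs(take):5.1f} dBFS")          # ~-18.1 dBFS
print(f"rms  {rms_dbfs(take):5.1f} dBFS")           # ~-21.1 dBFS (sine RMS = peak - 3 dB)
```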
Loved the talk and comparisons. TY 8:00 The trash that YT compression introduces makes 24 bit not beneficial in most practical situations. Have you ever listened to quiet sources with headphones on YT? Yuck.
Without having viewed the video, I can say there is a difference between 16-bit and 24-bit, but it took a long time for me to be able to hear it. There's more dynamic range with 24-bit, and depending on the recording that could be very apparent, or not so much.
Confucius, or Jean-Claude Van Damme, or both, said, maybe, "The 'real world experience' depends on 'your real own world'." On the other hand, Jeremy Clarkson said "Powa is everything". So, maybe it depends on the difference, which can be elsewhere. I hope I help on this subject. 😀
Zoom just brought out the new F3 and F6, entry-level 32-bit/96kHz USB-capable devices that have no gain controls. VERY INTERESTING INDEED. Having musicians use dynamics as part of being a musician... WHAT A CONCEPT: musicians who actually play and sing at different levels instead of having the gain controls turned up for them. Sounds like a great option for hall recording with close mic placement as well as distance mics to include the hall dynamics in a recording. As usual it all comes down to the quality of the mic noise floor, the connections and the preamps, and especially the musician's ability to take advantage of dynamics and use great mic technique, particularly if all of a sudden recording is done without worrying about clipping or setting the pre levels for the mics. Would be nice if you could do a review of where the noise floor sits with different mics through the new 32-bit Zoom recorders, and how to avoid clipping when dithering the recording down to broadcast 16 or 24 bit in a DAW.
I have been doing some work with impulse responses lately, and it's gotten me thinking of all sorts of nerdy things like transient speed and decay, and the detail in an IR's settling time. I can't say that I tested it, but after looking at things zoomed in to the sample level, I am inclined to believe that bit depth and sample rate do make a bit of difference for IR processing. I was waffling between 48kHz/96kHz and 24-bit/32-bit floating point, and I was finding some value in the higher resolution, at least in terms of initial capture and processing. I haven't decided if it needs to stay at the higher resolution, as I need to be mindful of the convolution's CPU usage too. Perhaps there is an argument for higher sampling rate in technical cases when transient response (HF response) really does matter a bit more. I also remember a digital control systems design class where we calculated that system stability could hinge on the sample rate, in terms of the lag in the feedback control mechanisms. The general recommendation there was to sample at 10 times the operating frequency to reduce system instability, which is pretty crazy in audio terms, where we are content to hug Nyquist. I'm also curious if bit depth might make a difference in masking, specifically in denser mixes. From what I understand, masking is largely a psychoacoustic phenomenon and perception could vary considerably from person to person. I'm not sure if any research has been done on this and the effects of bit depth, and whether it affects people's ability to discern quieter sources in a denser mix.
if you want the highest dynamic range, export at 24-bit. 16-bit will still sound good, though, but if you want to do processing on that exported audio (such as compression, overdrive, etc.) then there is a chance that the noise will kinda seep through
If you export at a higher bit depth and sample rate, most plug-ins can take advantage of it, even if you recorded at 16-bit 44.1kHz. But if you do this, you are not "upgrading" the original recording; you are processing it, and rendering at a higher rate with the processing on it. This really only matters for music, in my opinion, but most people can't tell and they won't know the difference. When I'm exporting a mix I'm going to master, I always export at 24-bit, 88.2kHz, to let the plugins do more detailed processing, hitting around -12 peaks. Then once I've done my mastering, I export at 16-bit 44.1kHz, hitting -0.5 to -1 peaks. I leave the -0.5 to -1 to prevent any clipping that may happen with conversion afterward, say to an MP3. Again, the -0.5 to -1 is theoretical; really, if it clips probably no one will notice and it won't matter, but that's just the extra care you put in sometimes. And losing 0.5dB or 1dB of peaks won't make or break your sound, so I don't mind the trade-off.
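A sketch of that final peak trim; normalize_peak is a hypothetical helper, not a DAW feature:

```python
import numpy as np

def normalize_peak(x: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale so the loudest sample lands at target_dbfs, leaving margin for
    lossy conversion (e.g. MP3) that can overshoot the original peaks."""
    target = 10 ** (target_dbfs / 20)
    return x * (target / np.max(np.abs(x)))

mix = np.random.default_rng(1).normal(0, 0.1, 48_000)  # stand-in mix buffer
master = normalize_peak(mix, -1.0)
print(20 * np.log10(np.max(np.abs(master))))            # -1.0 dBFS peak
```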
I have listened to MP3s for years. We all know most are 16-bit. When someone gave me a copy of his Linkin Park collection in 24-bit FLAC, listened to on studio headphones, I was blown away. There were details I have never heard before.
That's not a 16bit/24bit comparison, it's a linear PCM/MP3 comparison. It's well known that MP3 compression introduces artifacts. Compare MP3 with the same 16bit recording that the MP3 was made from and you'll hear pretty much the same differences.
Hard drive space is getting cheap, but that doesn't mean you don't have to pay attention to optimization. 24-bit 96 kHz (or 192) might be good for editing. But in the end you want to use 44.1 or 48 kHz 16-bit in video games, and on streaming services AAC or OPUS at 256 kbps, to save a lot of bandwidth with only a slight loss in quality.
If you do the math on the dynamic range requirements for podcast audio, you'll find that you may not even need 80 dB, particularly on a dynamic mic. This means that given a decent preamp setting, analog noise will be substantially above the converter noise floor and provide tons of dithering for a 16-bit ADC. It thus made sense to develop these entry-level interfaces built around antiquated but cheap 16-bit single-chip USB codecs with a decent mic preamp. Scanning the ranks of cheapie audio interfaces at Thomann leads me to believe that 24-bit USB audio codecs have actually arrived in this space already. If so, you can only expect a modest improvement as the kind of chips you find in microphones tend to deliver no more than about 90 dB worth of dynamic range. Things could get exciting once the likes of the ALC4080 (ALC1220 counterpart in USB) start hitting this market.
Interesting. Thanks for sharing your thoughts, that all makes sense. This does assume that everyone recording is setting their gain appropriately. If I've learned anything, it's that this is often not the case.
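For reference, the textbook dynamic range budget behind this thread (an ideal converter with a full-scale sine; the commonly quoted 96 dB for 16-bit is just 6.02 x 16 without the 1.76 dB term):

```python
def dynamic_range_db(bits: int) -> float:
    # Quantization SNR of an ideal converter for a full-scale sine: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~98 dB, 24-bit: ~146 dB (real converters deliver noticeably less)
```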
Please remember: every time you run your mix through a compressor you reduce your dynamic range, lifting the -96 dB floor right up toward the audible range. Just use 24-bit and output 16-bit if needed.
Haven't watched your channel in over a year maybe two, i just wanna tell you that you look great dude, no homo lol. Now if you haven't lost weight I'm gonna feel like an idiot but you def look like you have been taking care of yourself. Good stuff.
Oh, thanks for doing this video - maybe I won't have to explain it as often now! I'll just send them here... Love the title interstitials btw - couldn't make them any bigger? LOL
Sir, your review with explanation is top notch. But sir, can you please make a detailed review of the latest FIFINE SCI AMPLITANK INTERFACE, as it is a budget-friendly 16-bit interface?
In almost all the examples, played on my Bose soundbar through RUclips's 1080p compression, I could 100% tell the difference each time, though notably with the digital signals it was only at the middle and high dB sources. So for my case, recording my voice for uncut commentary, game audio, and my own synth music/SFX, I should still use 24-bit or higher to fit with the rest of the video/game/music/SFX, except in the rare case I use just a tone, where it won't make a difference.
6:52 Around there you said 29 dB, right? I could follow that based on the upper pitches of your voice. I'm also autistic, so I have really good pattern recognition and sensitive hearing. But after that I think I was hearing sounds, though that could just be me lip reading...
The 24-bit files take up 50% more storage space: a 100 MB 16-bit file will be 150 MB in 24-bit. That means your iCloud bill: $10/month in 16-bit, $15/month in 24-bit. Apple recommends 24-bit.
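The arithmetic behind that, for anyone budgeting storage (raw PCM payload only, ignoring WAV headers and actual cloud pricing tiers):

```python
def wav_mb_per_hour(bits: int, fs: int = 48_000, channels: int = 1) -> float:
    # bytes/sample * samples/s * channels * seconds/hour, in MB
    return bits / 8 * fs * channels * 3600 / 1e6

for bits in (16, 24, 32):
    print(f"{bits}-bit mono @48k: {wav_mb_per_hour(bits):.0f} MB/hour")
# 16-bit: 346 MB, 24-bit: 518 MB (exactly 1.5x), 32-bit: 691 MB
```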
Very informative. What volume level (dB?) should foley, music and voice (voice over) aim for so they stay clearly separated and audible AND "standard"? Sorry if the question and terms are stupid, I'm discovering sound for a personal project.
Ok so a little bit of a random question: I have a lot of unwanted microphone noise coming from the jack inputs in my PC (coil whine). Could an external USB audio device help eliminate these noises coming from inside my PC? One thing: I'm using a headset mic with a 3.5mm jack (stuff like the Scarlett 2i2 is only for professional microphones and not headset mics). PLEASE HELP, for the love of god I've been spending weeks on trying to fix this.
In 2:03 the sinewave in 24 bit sounds a lot more 3-dimensional, it has more girth in the higher mids and it would cut through a mix better than the 16 bit sinewave.
I'm curious how you have your setup for video. Do you have a DSLR/mirrorless camera hooked up as a "webcam" and then record with the microphone? I realize this isn't a bit-depth question, just curiosity as to how your video quality is that good. Thanks!
Been using 16-bit, 44.1 for like 20 years now. I keep trying to change to 48kHz/24-bit, but then Windows always bounces between the 2 and it's annoying. But I'd prefer to be on 48/24-bit.
Rainer. They sound fine, but the main reason I’m using them is someone told me they were really comfortable. I had them sitting on the shelf so I threw them on and they are pretty comfortable. They will never replace my HD650’s, and once I’m done recording, I throw the 650’s back on. I just wanted to use closed backs while recording because of the metronome bleed in a few reviews and I was starting to get annoyed by it.
@Bandrew Scott I understand. I had a pair of the Røde NTH-100, and they gained my personal record for getting broken. This might be an isolated case, though. If I want to use closed back headphones for recording, I use either Beyerdynamic DT 1770 or the Sennheiser HD 300 Pro.
@@techmed-rainer Oh yeah. I’ve heard people talk about how they break. I’ve only had them on the stand for 3 days so far, so I’ll talk about it on the podcast if and when they break. If they do break, I’ll go back to the M50x’s which is what I grabbed because they were there before these. How do you like the DT1770s? Do they have the standard Beyer headphone sound where it’s really bright? My baby ears can’t handle that much high end. Haha.
Oh and as far as my record for broken headphones, that goes to the Shure SRH 940s. I wore them 3 times, and on the 4th time when I was putting them on, the ear cup arm snapped right off. Exactly as the reviews on Amazon had pointed out, but I wasn't convinced by them.
Some 24-bit interfaces (typically older ones) only have a dynamic range of about 100 dB, so by setting one to record at 16-bit you'll be losing very little. It's still better than using a 16-bit interface, because most of the converters those use fall short of the theoretical 96 dB dynamic range.
Kind of feel like this is a decade or two behind. Yes, you can use 16-bit for most things, but these days there's just no reason to. The real question should be: do you need 32-bit? 32-bit is all the rage this year, but it comes with a lot of asterisks and caveats. Most microphones aren't even rated to use the full potential of 24-bit, so there's the question of whether 32-bit would actually help. I personally think bit depth is rarely the weakest link, but further testing is needed.
@MF Nickster In theory that's how it should work, but it's not perfect. Like I mentioned, there are plenty of other caveats. For example, if you clip a microphone, no amount of bits can recover that data. And it's not widely supported. So while we can easily say get 24-bit when it's available, 32 feels like more of a gimmick.
Maybe it is a decade late, but this is a question I had, and I wanted to test it out and share it in case anyone else had the same one. I've realized I'm too dumb for the theory stuff; I'm just an application kind of person. That means I can read all about the theoretical benefits of X, Y, Z, and try to interpret and regurgitate that, but I will screw it up. So instead I want to hear the benefit, and demo the benefit, in case anyone else struggles with the theory like I do. So yes, maybe it is outdated, but I think hearing the benefit can be helpful in comprehending the benefits of higher bit depths.
I'm an audiobook producer, and I hate those 16-bit recording Blue microphones. People keep recording far away, adding room acoustics, and at low levels, topped with noisy dithering from Audacity's export; then we need to amplify like 30dB... and there you go. I prefer people recording at 24-bit, end of story xD
Thank you for this. Even if my conclusion was not the same as yours, this video was invaluable, as now I know why I should be using 24-bit or greater, not just because bigger number = better.
It's all about finding the settings that we need to get the job done. I always record 24-bit, but I can honestly say that I don't think once I would have noticed if I recorded my podcast in 16-bit.
16-bit was good enough for professionals for decades. For semi-pro use it is good enough for sure. If you have a choice, go 24 or 32-bit. But don't think 16 wouldn't suffice.
24 or 32, which is the better bit depth? The Focusrite sound card is 24-bit/192 kHz, the MOTU sound card is 32-bit. Which is the better sound card? Please reply.
DO NOT contact anyone on Telegram (or other service) that is pretending to be me and claiming you won something. I am not giving anything away. They are scammers trying to steal your money and data.
Does that mean I won't get the vintage U47 I paid postage for?!
(Just kidding.)
What about My Daw that Exports at either 16 or 24 bit?
I just want to say that it is impressive that you can say, "...if you're recording mouse farts..." with a completely straight face.
Great tests for some practical questions!
ok i literally just clicked on this real fast to A/B test my EQ settings (you always have good audio so i use you for that, sorry lol), but now I'm too invested in whether or not I need to record 24-bit audio to click away hahahaha
That 16 bit static sound was .... pleasant.
It sounded great, as I watched in 144p.
@@hxhdfjifzirstc894 144p sounds the same as 4K
@@DarkTrapStudio tho in the past YT would output it in mono
@@danzirvine I don't know, I've never seen this since I've used it. What era?
@@DarkTrapStudio around 2017 it used to do it, but by 2018 it changed
From a DAW programmer's perspective: if you're going to go beyond 16-bit/44.1kHz to combat aliasing, then you might as well go one, single, little byte further and record in 32-bit float instead of the 24-bit (integer) format; it's literally one single byte per sample of difference. The kicker is that when you are operating in any DAW, VST, or processing software, this conversion is happening behind the scenes anyway (so no complaints, you MUST transition through this format anyway) in order to do almost all audio effects, machine learning processing, anything. The correctness of the format up front, the representation being -1.0 to 1.0 for standard values (though it can go way beyond that) being more native to sound, and the added precision of having an exponent for scaling up and down and normalizing effectively losslessly are all giant positives. In the DSP field, floats are king.
OK, but as you know, 32-bit float has 24 bits of resolution (the mantissa) plus an 8-bit exponent, and the conversion is trivial. If you can properly gain stage you will have no difference; if you can't, you're going to end up clipping until you learn. Unless you are actually trying to record the self-noise of the analog chain and any room noises at 24-bit detail while still being able to talk into the mic (i.e. field recordings or spycraft), there is going to be no advantage in a properly gain staged recording.
The biggest effect for recording is that the packed file will be larger, which matters a lot in transit.
As you say, every DAW does the conversion for the processing headroom, and the conversion is trivial even in ASM on the most limited processors.
I just don't see the advantage for studio recording?
"more native to sound"
I think if you do a lot of processing, the benefits of a higher dynamic range become more apparent. Especially with saturation/overdrive/distortion type effects, which add a lot of gain. But even with filters you can end up with pretty significant boosts that bring up the noise floor, especially if you're stacking multiple effects onto each other.
Yes, I like to keep the noise floor; I don't gate most of the time. It benefits the track, in my opinion, in what I do. I may expand tho
or just heavy compression
@@RusAD-gb9jk Yes saturation is a form of compression :)
@@DarkTrapStudio hm, never thought of it that way, but I guess. Still, isn't it better to specify the broader class of effects?
@@RusAD-gb9jk Yes he said "lot of processing"
This makes a lot of sense. I was always a "16 bit is fine" kind of guy. And it always has been. But one song I did needed a vocal harmony. So I quickly just record armed a mic that was already plugged in, a drum overhead. Didn't even check the level, just hit record and ran over to the drum set and started singing upward into the mic. Went back to the desk and saw the waveform was super small. Whatever "I'll just boost it" and so I did. When it sat in the mix it was fine, but when I solo'd it to do some compression and EQ on it I heard exactly what we heard in this video. It sounded corrupt. I never understood why until now. Great video and very informative.
Yes, with 16-bit you have to record at a high enough level so as to not get problems. If you record at a high enough level, this shouldn't be an issue, unless there's a very large dynamic range to what you are doing.
But really, the lesson is: don't record without checking your levels first.
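The scenario in this thread, as an idealized numpy sketch (a pure tone tracked 58 dB too low, then boosted in post; a real take would add preamp and room noise on top):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
quiet = 10 ** (-58 / 20) * np.sin(2 * np.pi * 440 * t)   # a take tracked ~58 dB too low

def quantize(x, bits):
    scale = 2 ** (bits - 1) - 1          # e.g. 32767 for 16-bit
    return np.round(x * scale) / scale

def snr_db(clean, dirty):
    noise = dirty - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

boost = 10 ** (58 / 20)                   # "I'll just boost it" in post
for bits in (16, 24):
    boosted = quantize(quiet, bits) * boost
    print(f"{bits}-bit: SNR after boost ~{snr_db(quiet * boost, boosted):.0f} dB")
# 16-bit lands around 40 dB (audibly grainy); 24-bit around 88 dB (still clean)
```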
Increased bit depth reduces a type of distortion called quantisation error (very different from the temporal aliasing you see at the Nyquist limit); the noise you're hearing in the 16-bit samples is the quantisation error, of course.
24-bit depth also provides some protection against this sort of error during repeated processing (most processing is done at far higher than 24-bit for this reason; it drops any errors below the output bit depth).
Which is why we use dither when outputting 16 bit files. Dither helps to smooth out those errors at the cost of some noise. Way easier on the ears than the squelchy sound you get from quantization errors. Great comment!
@@meistudiony small correction here: if you output 16 bit with a 24 bit DAC, you use oversampling which is a digital low pass filter.
when you record 24bit and want to store it in 16 bit, you use dithering. practically you take the error diff value and add it to the next samples. this way you distribute the high frequency error to wider frequency range but with less intensity. it is also sometimes called noise shaping because you change the quantization noise characteristic
@@stefanweilhartner4415 How can you correct something you didn't understand? He said "outputting 16 bit files", and there's no goddam DAC involved in the process of writing a file.
@@hxhdfjifzirstc894 outputting 16 bit files usually means outputting through a DAC. if he means writing a file he should have used the word "writing"
@@stefanweilhartner4415 If I were just outputting AUDIO, yes, you're correct; not if the output is a FILE.... Your point is valid though.
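A sketch of the error-feedback idea described above, as first-order noise shaping; production dither/noise shapers are more elaborate, but the mechanism is the same:

```python
import numpy as np

def requantize_noise_shaped(x24: np.ndarray, bits: int = 16) -> np.ndarray:
    """First-order error feedback: carry each sample's rounding error into the
    next sample, pushing quantization noise toward high frequencies."""
    scale = 2 ** (bits - 1) - 1
    out = np.empty_like(x24)
    err = 0.0
    for i, s in enumerate(x24):
        shaped = s + err                       # add back the previous sample's error
        q = np.round(shaped * scale) / scale   # quantize to the target depth
        err = shaped - q                       # remember what was just lost
        out[i] = q
    return out

tone = 1e-3 * np.sin(2 * np.pi * 1000 * np.arange(48_000) / 48_000)
print(requantize_noise_shaped(tone)[:4])
```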
You can hear the differences between them when mixing complex music. Sometimes you want to sit an instrument way down in the mix, and while the sound is there, 16-bit tracks can tend to take on a subtle grainy sound, especially if they weren't tracked very hot. 24-bit tracks are also noticeably better when pushing them into reverbs and other effects. Honestly, 24-bit 48k is the standard for just about everything now. The only time 16-bit should come into play is when you bounce out your final CD or streaming masters.
If you're super uncertain of what you want to use, while not a normal audio interface, the Zoom F6 will allow you to do 16, 24, or 32 bit or a combo of 16 and 32 or 24 and 32 for direct comparisons if you want. There is one final option of just straight to MP3 because *random chaos*
I think an easy to understand explanation for the dynamic range of different bit depths is the difference of possible sound levels in a sample: 16-bit: 65,536 values vs 24-bit: 16,777,216 values, that's a 256 fold difference.
I record almost everything in 32, but since I'm not an expert watching this comparison of 16 and 24 was actually super clarifying and will help me be less anxious about breaking out the 24-bit devices when I need them.
As someone who's experimenting with antiquated 16-bit equipment, from a music production perspective, the practical problem is usually never the volume you record at, if you record at an appropriate volume to begin with. The main problem is when you start applying EQ. If you don't have outboard EQ that you can use before the interface and you want some highs, you will quickly run into audible hiss, and if said track is one of the more prominent tracks in your mix then you find yourself using gates or, even worse, having to make tonal compromises just so everything else can shine.
EDIT: This might also be a problem on highly dynamic recordings: volume automation or compression could essentially just make things noisier. So in reality it can be a problem, depending on what you're recording, how you're processing the tracks, and your track count. Gates and hiss removal plugins can be a good middle-of-the-road compromise. Unless your track count is very high, or even very low with all of your recordings being very dynamic, it won't be too much of a problem for a well-learned mixing engineer given the right tools.
I think 24-bit is also very important if you need to apply a lot of compression and/or distortion to a track, cuz that will raise the noise floor to a much higher level; for example, a guitar DI track for reamping.
I knew in principle what the difference would be, but it was excellent to hear the demonstration. Thanks!
Lots of Zoom kit is settable to 16- or 24-bit, and I always use 24-bit for the reason you stated.
Glad it was helpful! It was fun for me to actually do the testing and hear the difference.
Let me say at the start, I am a dinosaur, who remembers the struggles with 2-inch Ampex tapes. I embraced digital recording immediately, as it eliminated tape hiss, wow and flutter, track bleed, and all the other curses that, for some reason, people are now trying to reproduce.
If possible, always try to record at a minimum of 24 bit, a de facto standard adopted ever since the advent of DVD. It isn't so much about whether you can hear it (in some cases you can), but because that has been the industry-standard minimum bit depth for some time now.
A 16-bit A/D converter can record 65,536 sound levels, whereas a 24-bit A/D converter can record 16,777,216 levels, producing a much more accurate representation.
There is also the problem that a 16-bit A/D converter can "alias", producing non-harmonic frequencies that are not part of the input signal. These frequencies are artifacts of the sampling and digitization process itself, called "quantization errors". They often cause corruption and other unwanted noise, such as what you showed at 4:31. This can also appear in reverb tails and delays, and can be quite audible if replayed through a loud PA system.
When CD audio first appeared, listeners complained that the sound was "glassy" and unnatural because of this effect, and so it required extensive processing and "oversampling" to compensate. When 24-bit recording became available, it eliminated the need for such post-processing, allowing the much wider dynamic range suitable for film and TV.
In the "old days" of early sampling, the 16-bit Akai S1000 was all the rage, (except for the very rich, who could afford the $40,000 Fairlight CMi) until low cost 24-bit PC audio cards blew them away.
So for simple speech, a 16-bit recording will do, but if there is any discreet music or other sound involved, it will present problems. Also, the price of 24-bit, and even 32-bit float sampling and recording is only a few dollars more than the 16-bit, so I'd say spend for the future, not for the past.
Honestly this is so useful. I'm a noob but while looking for a new mic I was constantly looking for 24bit over 16bit because I thought it would sound better. Now I realise I've wasted my time for no reason and, for my usage, I can safely buy a 16bit mic. Thank you so much
Excellent presentation, thank you.
I started recording 62 years ago, 1 mic., 1 mono tape machine, 1 basement.
That's how you learn about recording.
....and I still record at 0 dBFS, (0 dBs), just like the old days, no need to throw away 18 dBs of dynamic range.
Back then we didn't have 18 spare dBs to toss away.
Bill P.
Excellent explanation!! I was wondering the same about 44.1 vs 96 kHz (since we can only hear up to 20kHz anyway). But I always come back to the conclusion that it's totally fine to just stay with 44.1kHz and 16-bit for me (recording drums)
Hi, Bandrew. Great video. It’s good that you also tested the digitally generated tones, as the tests boosting the very quietly recorded 16-bit microphone recordings were suffering more from aliasing issues due to the A/D and filtering of the interface. Your conclusion was spot on. If you are recording your levels properly, no one will ever hear the difference, especially when it’s a simple acoustic source with limited dynamics such as a close-mic’d voice. As for complex acoustic information such as music mixes, I’d say that anyone who is hearing the difference between 16 and 24 in properly recorded and mixed modern music is guessing. As for classical recordings, I’m old enough to remember the crossover from vinyl to CD and what a revelation CDs were with their high dynamic range and lack of noise (44.1/16). It’s also worth bearing in mind that the vast majority of commercial streaming music, the older catalogue stuff, would all have been sourced at 16 bit regardless of the processing chain that took it to iTunes, Spotify etc., and no one has a problem with that. Mouse farts on the other hand 😂 Cheers, Dave.
Yes, for quieter or louder sounds, as well as post-production, 24-bit gives you the headroom to process sounds before clipping. You can see this in plugin meters when you push something like saturation: it goes into the red but doesn't clip with 24-bit. One main reason to go 24-bit in post is that the aliasing happens much lower relative to Nyquist, which in turn gives you less noise when using compression and non-linear effects like saturation. Processing the aforementioned on 16-bit sounds terrible, but this has a lot to do with correct gain staging and how loud the signal was coming in. In turn, the negative results will be more dramatic if you had to turn the gain up a lot on the track due to poor gain staging. No one usually mentions that.
Excellent video and spot on. The preamp on a mic that requires 50dB of gain only has a 46dB potential noise floor before it falls to dead quiet.
Dude-I said I love you in another comment and now I love you even more! I started to watch because I was like “oh, cool, I can learn about the mics history from this awesome guy.” And then you shared your wonderful and vulnerable sobriety story and how much that mic means to you and I just started crying and then you did and MAN! Just AWESOME! So much more than I expected from this 7 minute video. So happy that you exist and that your journey enriches our lives. Oh and I was like-I bought my first ever Shure mic (MV7)from a Guitar center too! Ha! Cool!
Anyway-love you even more, man! Keep it up!
Thank you so much for the super kind words Mike, it really means a lot to me. I'm honored that you gave my videos a chance. Happy recording.
@@Podcastage thank you! My comment was meant for your SM7b 50th video. RUclips must’ve started auto playing the 24 bit audio one 😅 so my comment went there lol
I think you need to make a video about 32-bit floating point now
I like to think of 16bit as antiquated. It served its purpose. Like you mention in the video, interfaces and hard drives can handle 24bit without problems. Leave 16bit for Mastering for CD. Everything else do at 24bit and above.
At 3:12 the 16-bit examples of the two otherwise clean samples were noticeably distorted compared to the pure-sounding 24-bit ones; the 16-bit ones are slightly edgy. And no, upsampling can't improve 16-bit. It's weird, because CDs can sound so excellent, yet demonstrations like this that I've seen on RUclips show the differences even though most people don't hear them. Whether it makes sense or not, I bet it is partly due to RUclips's compression, which this does kind of remind me of: artifacts.
Finally I am getting some lessons from this channel.
24-bit recording might come in handy if you are at a festival and you need to turn that mic very low so it doesn't distort.
Awesome video. Would love to see a similar one dispelling myths around sample rate.
Depends how you record. With one mic, 48kHz is theoretically enough. But when you know enough about real-world analog filters and aliasing effects, you might use 192kHz and do some low-pass filtering and optional down-conversion in the digital domain.
If you record stereo, you might want to capture more detail regarding the space with high sample rates.
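A sketch of that capture-high, filter, down-convert workflow, assuming SciPy is available; decimate() applies an anti-aliasing low-pass before discarding samples:

```python
import numpy as np
from scipy.signal import decimate

fs_hi = 192_000
capture = np.random.default_rng(0).normal(0, 0.1, fs_hi)  # stand-in for 1 s at 192 kHz

# 192 kHz -> 48 kHz in the digital domain: low-pass, then keep every 4th sample
x48 = decimate(capture, 4, ftype="fir")
print(len(capture), "->", len(x48))   # 192000 -> 48000
```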
Most people who think they can hear the difference between 16-bit and 24-bit recordings probably really can hear differences, but the reasons for that lie somewhere other than the bit depth: too quiet a level in general, too big a dynamic range, some other source of noise changing the sound (like a lamp or power source).
In the 90s people were doing surprisingly good recordings using only 8, 12 or 15 bits, too. Check out, for example, the mod tracker scene and how people used early samplers.
Regardless your bit depth, ALWAYS use dither. Why dither? Even a perfect converter has to make a choice between silence and the very lowest level it can make above silence, and it makes that choice in the clumsiest possible way - toggling back and forth between nothing and something. That’s just the nature of the beast with digital conversion. Dither adds very specially tailored noise your ear won’t hear to keep the converter from making that choice. Reverb tails, both synthetic and natural, are improved by dither.
If you look at a bit scope of dithered ‘silence’ versus non-dithered ‘silence’ you will see the lowest 2-3 bits flipping in the dithered version but not in the non-dithered version. But when you listen to each at the very tail of a fade gained up to the point of danger and - depending on the material - you might hear a very ugly distortion in the last moments of the non-dithered signal, and a graceful disappearance into extremely quiet noise in the dithered signal, just as in analog recording but far, far quieter. Once you understand the conditions that benefit from dither, you can decide when and where to use it.
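A minimal numpy sketch of TPDF dither to 16-bit, matching the description above (plus-or-minus one LSB of triangular noise before rounding; mastering-grade dithers add noise shaping on top):

```python
import numpy as np

def dither_to_16bit(x: np.ndarray, seed: int = 0) -> np.ndarray:
    """Add +/-1 LSB triangular (TPDF) noise before rounding, so low-level detail
    decays into benign hiss instead of toggling hard between nothing and something."""
    rng = np.random.default_rng(seed)
    lsb = 1 / 32767
    tpdf = (rng.random(x.size) - rng.random(x.size)) * lsb   # triangular PDF
    return np.round((x + tpdf) * 32767) / 32767

n = 48_000
fade = np.linspace(1e-4, 0.0, n) * np.sin(2 * np.pi * 440 * np.arange(n) / n)
print(np.count_nonzero(dither_to_16bit(fade)))  # the tail keeps flickering
# Without dither, everything under half an LSB would round straight to silence.
```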
I was wondering about this the past week, good timing!
Reading the title: Yes.
As with photography, a greater number of bits allocated to the differentiation between levels allows for greater flexibility after the fact. The resolution at which you acquire data (light or audio voltage levels, doesn't matter) cannot be increased later. You CANNOT INCREASE resolution in post. The original "distance between the slices" can't be made smaller (i.e., no increase of resolution).
I work on movie sets. It is not always possible to keep the environment optimally quiet, and actors can produce wildly different levels quite unpredictably, trying unexpected things each take to keep the reactions fresh. So we use AT LEAST 24-bit resolution in the recording, often at a ludicrously unnecessary 96 kHz (human speech scarcely ever contains frequencies above 8 or 9 kHz, and the Nyquist limit already allows frequencies up to 22 kHz to be accurately represented in a 44 kHz sampling system). 96 kHz seems silly. But when you are capturing something from a person whose daily wages are $200,000 or more, how silly is it NOT to spend a few pennies more and sample at 96 kHz (2 × 48 kHz) at the encoding/capture stage?
Interesting note for photographers - the distortion you're hearing is the equivalent of a moiré pattern: level aliasing/quantisation error.
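If anyone wants to see the "distance between the slices" numerically, here's a small Python sketch (illustrative, not from the video) that quantizes a full-scale sine at 16 and 24 bits and measures the resulting signal-to-noise ratio:

```python
import numpy as np

def quantization_snr_db(bits, n=1 << 16):
    """Measure the SNR of a full-scale sine quantized to `bits` bits."""
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * 523.25 * t)       # full-scale sine
    steps = 2 ** (bits - 1)
    xq = np.round(x * steps) / steps          # quantize, restore scale
    err = x - xq                              # the quantization error
    return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

for bits in (16, 24):
    print(bits, "bit:", round(quantization_snr_db(bits), 1), "dB")
# ~98 dB at 16-bit, ~146 dB at 24-bit, matching 6.02*N + 1.76.
# Upsampling xq afterwards cannot shrink err: that information is gone.
```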
The thing is that with properly set gain, the noise floor of either the 24- or 16-bit format will be drowned in the noise of the preamps (or, in the case of a condenser mic, also in its self-noise).
However, for really dynamic recordings (like drums) 24-bit depth is preferable, because the gain won't be set high, so the preamp/mic noise may become comparable to the 16-bit noise floor; in that case the bit depth can actually reduce the signal-to-noise ratio, raising the noise floor enough to become a problem. In most real scenarios there won't be any audible difference between 16 and 24 bit - but sometimes there will be.
And in some cases 24 bit is better for mixing simply because you have more volume range available
This is really interesting. I was surprised to learn that the Sony FX3 records in 16-bit; I would've thought it would be 24-bit. There are also some wireless lav kits that do 16-bit, like the Fulham X5, but their transmitters are around 1.6x smaller/lighter than the DJI ones that use 24-bit. So, just so I understand correctly: if in the intro you had only boosted by 48 dB instead of 58 dB, you would get similar sound quality with 16-bit? It's good to know that even if I mess up by 30 dB there isn't much between 16-bit and 24-bit.
If you're setting your gain properly, there will still be a difference, but unless you're compressing and decreasing the dynamic range by a huge amount, it's not going to be noticeable in most cases.
That crackly noise in the 16-bit demo says it all. If you set your gain well and compress/limit on the way in, then 16-bit is fine. 24-bit is just so much easier to work with.
What are the benefits of recording 32-bit float vs 24-bit? As a voiceover actor, I'm always asked to record at 24-bit, but I would imagine having audio in 32-bit float would make it better for engineers to work with in post? Especially for voiceover, where they want the noise floor eliminated as much as possible. Or is the added information outweighed by the increase in file size?
I'm admittedly an amateur/enthusiast when it comes to the engineering side of things, so I'm trying to learn more.
My understanding is that unless you need insane dynamic range in the recording itself, the main use is in processing: any level aliasing drops below the output bit depth, so processing functions can make huge intermediate level changes without data loss.
Love your work in Shikimori and as Bell Cranell. I find it super interesting that you're here learning about recording and mics, I'm assuming you record at home mostly now?
@Jonathan Dano I record a bit from home here and there. Most of my Crunchyroll work is in-studio, as I'm local to Dallas now, but I record the odd game from home as well as ongoing remote work like Yu-gi-oh, which is based out of New York.
Bryson - I am in no way an expert in 32-bit float, but as Mijc mentioned, I'm sure the conversion noise floor will be next to non-existent. The only noise floors left would be preamp noise and room tone, so it essentially makes one noise factor irrelevant. The other big benefit that I imagine a lot of people will be running to 32-bit float for is the fact that it doesn't clip at 0 dBFS. If it's recorded properly, you can recover audio that exceeded the 0 dBFS threshold in post (I demoed this at the beginning of the Rode NT1 5th gen video). Hope this helps.
It depends on whether you have a high dynamic range device with multiple staged pre-amps.
Many devices just record in 24-bit integer and then map that to 32-bit float. On those there will be little to no difference.
But devices like the Zoom F3 and the Rode NT1 use multiple staged preamps, so you don't need to get your input levels exactly right. They record the full dynamic range and let you set your gain in post. If you are using one of those devices, engineers would probably appreciate you recording in 32-bit float.
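A tiny numpy illustration of that last point - assuming a device whose converters genuinely capture the extra range (as with staged preamps), a float file stores values beyond ±1.0, while a fixed-point one squares them off forever (the signal here is made up for the demo):

```python
import numpy as np

# A peak that would clip a fixed-point recorder: +6 dB over full scale.
t = np.linspace(0, 0.1, 4800)
hot = 2.0 * np.sin(2 * np.pi * 100 * t)      # peaks at 2.0, i.e. +6 dBFS

# 24-bit integer path: anything beyond +/-1.0 is clipped, unrecoverably.
int24 = np.clip(np.round(hot * 8388607), -8388608, 8388607) / 8388607

# 32-bit float path: over-unity values are stored as-is,
# so pulling the fader down in post recovers the waveform.
float32 = hot.astype(np.float32)
recovered = float32 * 0.5                    # -6 dB applied in post

print("int24 peak:", np.max(np.abs(int24)))          # 1.0 - squared off
print("recovered peak:", np.max(np.abs(recovered)))  # ~1.0 - intact sine
```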
Great video!
Another potential benefit of 24-bit is multitrack recording. A modern song often has 100 or more audio tracks, so the noise floor compounds across all those tracks; 24-bit keeps that summed noise floor from becoming audible.
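Rough sketch of that compounding effect, under the assumption that the per-track noise floors are uncorrelated and therefore add in power (+10·log10(N) dB):

```python
import math

def summed_noise_floor_dbfs(per_track_floor_dbfs, n_tracks):
    """Uncorrelated noise adds in power: +10*log10(N) dB overall."""
    return per_track_floor_dbfs + 10 * math.log10(n_tracks)

# 100 tracks at a -96 dBFS (16-bit) floor vs a -144 dBFS (24-bit) floor:
print(summed_noise_floor_dbfs(-96, 100))   # -76 dBFS - edging toward audible
print(summed_noise_floor_dbfs(-144, 100))  # -124 dBFS - still far below
```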
Salute, sir, you are the man. I've got a clear understanding now. Your experiments and conclusions are astute.
Great video! Best comparison video I’ve seen so far.
Wow, that really means a lot. Hindsight being 20/20, in the DAW I should have attenuated the signal even more, so it was below the theoretical limit of the 16-bit recording, to hear that - but I didn't! Oh well. Next time!
I use a Shure SM7B with Cloudlifter + DBX286 + Roland Rubix22 at 44.1 kHz and 16-bit for professional speech recording, and it's more than enough!
For those that haven't realized: analog decibel scales are different from the digital decibel scale, in a similar way to how Fahrenheit is a different scale from Celsius. IMO this is the SINGLE MOST IMPORTANT CONCEPT in digital recording.
For example, Fahrenheit +32° is equivalent to Celsius 0° (a thirty-two degree offset). Likewise, your analog preamp will be running at approximately +18 dB when your digital meter reads 0 dB. Does analog gear SOUND GOOD when you run it at +18 dB? Probably not, especially if your goal is maximum clarity.
The implication is that you should set your analog gain stages to their normal range (around 0 dB on the analog meter), which will produce a signal on the digital meter of about -18 dB (there's typically an 18 dB difference between the two, though some ADs are calibrated to a 16 dB difference). This also gives you 18 dB of HEADROOM for peaks.
And once you use this method to get a perfectly clean signal into your DAW, you can normalize your signal so that it's calibrated to whatever level your digital effects expect (that's for you to figure out). Finally, as your track is mastered, you can get on your tippy-toes and reach as close to 0 dB as you can get on the digital meter.
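If it helps, the arithmetic is just a fixed offset. A toy Python version, assuming the common -18 dBFS calibration described above (your converter may be calibrated to -16 or -20 instead):

```python
def dbfs_from_analog_db(analog_db, calibration_db=18.0):
    """Map an analog meter reading to the digital meter, assuming the
    converter is calibrated so 0 on the analog scale = -18 dBFS."""
    return analog_db - calibration_db

print(dbfs_from_analog_db(0))    # -18.0 -> healthy level, 18 dB of headroom
print(dbfs_from_analog_db(18))   # 0.0   -> analog stage pushed hard, no headroom
```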
dB is a ratio, not a scale or unit. There is an incredible number of differently referenced dB scales even in the analog domain. All "dB" tells you is that the quantity is expressed logarithmically (log10); for ANY scale you need to know what it is referenced to.
Thank you, yes.
@@mycosys Yes, my point is solely focused on the calibration level of an AD converter, which is typically called -18 dB or -16 dB. I suppose most people here have their minds fully wrapped around this concept, but I posted in case even one person has never considered the gain-staging implications, or is still falling for the 'using all the bits' type of myth.
Working in 32-bit float on a 24-bit audio file lets you make nondestructive edits - in other words, you can undo any edit you've made to the file (if the software allows it, of course). It actually saves space compared to making a separate backup of the audio file after each major edit, as (prudent) audio engineers had done for years.
To anyone reading: that's editing in a 32-bit workspace, not acquisition of sound. Some DAWs use this workspace by default and then render to 16- or 24-bit audio as needed without issues. It's like editing a 4K video and then reducing it to 480p to burn to DVD - no generation loss along the way.
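One concrete reason the 32-bit float workspace is lossless for 24-bit material: every 24-bit integer sample fits exactly in float32's 24-bit significand. A quick numpy check (the values are illustrative):

```python
import numpy as np

# Every 24-bit integer sample fits exactly in float32's 24-bit
# significand, so the int -> float -> int round trip is lossless.
samples = np.arange(-8388608, 8388608, 997, dtype=np.int64)  # spread of 24-bit values
as_float = (samples / 8388608.0).astype(np.float32)          # normalize to [-1, 1)
back = np.round(as_float.astype(np.float64) * 8388608.0).astype(np.int64)
print(np.array_equal(samples, back))   # True - bit-exact round trip
```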
Make a distinction between peaks and RMS on the dBFS scale (VU is the RMS reference on the dBu scale). Peaks at the recording stage should not exceed -6 dBFS and should never clip the red at 0 dBFS. Certainly, in post-production you might use a DeClipper plugin to restore clipped peaks, but that is only for emergency rescue situations.
All in all, a healthy signal on the way in should sit between -21 and -18 dBFS RMS (peaks might get close to 0 dBFS on some recorded instruments, but most likely they will float around -6 to -3 dBFS, which is perfectly fine). That is why VU meters usually get calibrated to 0 VU = -18 dBFS or 0 VU = -21 dBFS. For classical/acoustic instruments and orchestral ensembles, 0 VU = -23 dBFS (such recordings also use high-end audio interfaces with extremely quiet converter chips and channels). This is all good practice for the recording stage.
For the pre-mix and mix stage, 0 VU = -18 dBFS is the usual norm. Here the audio/mix engineer will usually compress the peaks (for most modern music - not acoustic/orchestral!!!) in order to boost the volume (VU stands for Volume Unit) and make the mix sound louder with FX and other processing.
At this stage the average VU level of the mix might even get up to -15 dBFS (hitting the "red" of the VU meter, which is acceptable), with the peaks reaching an optimal maximum of -6 dBFS (mostly compressed to that level, or an extreme maximum of -3 dBFS in the loudest percussion/drum sections).
Then we go to the mastering stage, usually setting up a master-bus compressor, saturation of the high-mids, gentle EQ balance and a final limiter (with inter-sample/oversampling options) to reach levels of about -14 LUFS (roughly 3 dB above the RMS/VU equivalent, depending on the mix/genre) and peaks not exceeding -1 dBFS.
It is not rocket science, but it is a form of engineering. That is why it is called audio engineering.
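For anyone who wants to sanity-check their own takes against those numbers, here's a crude peak/RMS meter in Python - a sketch, not a calibrated VU or LUFS meter, and the test signal is invented:

```python
import numpy as np

def meter(x):
    """Crude peak and RMS readings in dBFS for a float signal in [-1, 1]."""
    peak = 20 * np.log10(np.max(np.abs(x)) + 1e-12)
    rms = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    return peak, rms

# A tone recorded with sensible gain staging:
t = np.linspace(0, 1, 48000)
take = 0.18 * np.sin(2 * np.pi * 220 * t)
peak_db, rms_db = meter(take)
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")
# ~-14.9 dBFS peak, ~-17.9 dBFS RMS - inside the healthy -21..-18 dBFS zone.
```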
Loved the talk and comparisons. TY
8:00 The trash that YouTube compression introduces makes 24-bit not beneficial in most practical situations. Have you ever listened to quiet sources with headphones on YouTube? Yuck.
Without having viewed the video, I can say there is a difference between 16-bit and 24-bit, but it took a long time for me to be able to hear it. There's more dynamic range with 24-bit, and depending on the recording that can be very apparent, or not so much.
Confucius, or Jean-Claude Van Damme, or both, said, maybe: "The 'real world experience' depends on 'your own real world.'"
On the other hand Jeremy Clarkson said "Powa is everything".
So, maybe it depends on the difference which can be elsewhere.
I hope I help on this subject. 😀
Well-structured and informative video.
Zoom just brought out the new F3 and F6: entry-level 32-bit, 96 kHz, USB-capable devices that have no gain controls. VERY INTERESTING INDEED. Having musicians use dynamics as part of being a musician - WHAT A CONCEPT - musicians who actually play and sing at different levels instead of having to turn up the gain controls. Sounds like a great option for hall recording, with close mic placement as well as distance mics to include the hall dynamics in a recording. As usual it all comes down to the quality of the mic's noise floor, the connections and the preamps, and especially the musician's ability to take advantage of dynamics and use great mic technique - especially if all of a sudden recording is done without worrying about clipping or setting the pre levels for the mics. Would be nice if you could do a review of where the noise floor sits with different mics through the new 32-bit Zoom recorders, and how to avoid clipping when dithering the recording down to broadcast 16- or 24-bit in a DAW.
I have been doing some work with impulse responses lately, and it's gotten me thinking about all sorts of nerdy things like transient speed and decay, and the detail in an IR's settling time. I can't say that I've tested it rigorously, but after looking at things zoomed in to the sample level, I'm inclined to believe that bit depth and sample rate do make a bit of difference for IR processing. I was waffling between 48 kHz/96 kHz and 24-bit/32-bit floating point, and I was finding some value in the higher resolution, at least for the initial capture and processing. I haven't decided if it needs to stay at the higher resolution, as I need to be mindful of the convolution's CPU usage too.
Perhaps there is an argument for higher sampling rates in technical cases where transient (HF) response really does matter a bit more. I also remember a digital control systems design class where we calculated that system stability could hinge on the sample rate, in terms of the lag in the feedback control mechanisms. The general recommendation there was to sample at 10 times the operating frequency to reduce instability, which is pretty crazy in audio terms, where we are content to hug Nyquist.
I'm also curious whether bit depth might make a difference in masking, specifically in denser mixes. From what I understand, masking is largely a psychoacoustic phenomenon and perception can vary considerably from person to person. I'm not sure if any research has been done on bit depth's effect here, and whether it affects people's ability to discern quieter sources in a dense mix.
Great demo, very informative! Thanks!
7:36 That's noise you can get rid of by raising the signal above the noise floor - plus, you can tell it's coming from the analog hardware, not the digital side.
Really interesting video. Great Job!❤
Fun, interesting and helpful. Thank you Bandrew!
2:04 I'm getting "ZX Spectrum game is about to start loading" vibes 🕹️😊
Great video, Bandrew. Perfect explanation
So this video was about recording in 16-bit or 24-bit, but how about exporting 16-bit vs. 24-bit?
If you want the highest dynamic range, export at 24-bit. 16-bit will still sound good, but if you want to do processing on that exported audio (such as compression, overdrive, etc.) then there is a chance the noise will seep through.
If you export at a higher bit depth and sample rate, most plug-ins can take advantage of it, even if you recorded at 16-bit/44.1 kHz. But doing this doesn't "upgrade" the original recording - you are processing it and rendering at the higher rate with the processing applied. This really only matters for music, in my opinion; most people can't tell and won't know the difference.
When I'm exporting a mix I'm going to master, I always export at 24-bit/88.2 kHz to let the plugins do more detailed processing, hitting around -12 dB peaks. Then once I've done my mastering, I export at 16-bit/44.1 kHz, hitting -0.5 to -1 dB peaks. I leave that -0.5 to -1 dB to prevent any clipping that may happen with conversion afterward, say to an MP3. Again, that's theoretical - if it clips, probably no one will notice and it won't matter - but that's just the extra care you put in sometimes. And losing 0.5 or 1 dB of peak level won't make or break your sound, so I don't mind the trade-off.
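A sketch of that render chain in Python/scipy (the one-second test tone is a stand-in for a real mix; the point is the clean 2:1 decimation from 88.2 to 44.1 kHz and the ~1 dB of final headroom):

```python
import numpy as np
from scipy.signal import resample_poly

# Pretend mix rendered at 88.2 kHz with moderate peaks (illustrative).
sr_hi, sr_lo = 88200, 44100
t = np.arange(sr_hi) / sr_hi
mix = 0.25 * np.sin(2 * np.pi * 1000 * t)

# 88.2k -> 44.1k is a clean 2:1 decimation - one reason to prefer
# 88.2 kHz over 96 kHz when the delivery target is 44.1 kHz.
master = resample_poly(mix, up=1, down=2)

# Leave ~1 dB of headroom before the 16-bit / lossy-codec hand-off.
master *= 10 ** (-1.0 / 20) / np.max(np.abs(master))
print(len(mix), "->", len(master), "| peak:",
      round(20 * np.log10(np.max(np.abs(master))), 2), "dBFS")
```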
Thanks for the explanation! How about 32-bit float vs 24-bit? Can you do a similar video comparing the two?
Nice demo, and the best mic channel.
I have listened to MP3s for years; we all know most are made from 16-bit sources. When someone gave me a copy of his Linkin Park collection in 24-bit FLAC and I listened on studio headphones, I was blown away. There were details I had never heard before.
That's not a 16-bit/24-bit comparison; it's a linear PCM vs MP3 comparison. It's well known that MP3 compression introduces artifacts. Compare the MP3 with the same 16-bit recording it was made from and you'll hear pretty much the same differences.
@@anahatamelodeon hmmmm..makes sense. Thanks. I stand corrected. 😁
Hard drive space is getting cheap, but that doesn't mean you don't have to pay attention to optimization.
24-bit 96 kHz (or 192) might be good for editing. But in the end, for video games and streaming services, you want 44.1 or 48 kHz 16-bit, or AAC/Opus at a 256 kbps bitrate, to save a lot of bandwidth with only a slight loss in quality.
If you do the math on the dynamic range requirements for podcast audio, you'll find that you may not even need 80 dB, particularly on a dynamic mic. This means that given a decent preamp setting, analog noise will be substantially above the converter noise floor and provide tons of dithering for a 16-bit ADC. It thus made sense to develop these entry-level interfaces built around antiquated but cheap 16-bit single-chip USB codecs with a decent mic preamp.
Scanning the ranks of cheapie audio interfaces at Thomann leads me to believe that 24-bit USB audio codecs have actually arrived in this space already. If so, you can only expect a modest improvement as the kind of chips you find in microphones tend to deliver no more than about 90 dB worth of dynamic range. Things could get exciting once the likes of the ALC4080 (ALC1220 counterpart in USB) start hitting this market.
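For anyone who wants to "do the math" mentioned above: the ideal-converter rule of thumb is DR ≈ 6.02·N + 1.76 dB. A quick sketch (the 80 dB requirement is the example figure from the comment, not a standard):

```python
import math

def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal N-bit converter."""
    return 6.02 * bits + 1.76

def bits_needed(dr_db):
    """Smallest bit depth whose ideal dynamic range covers dr_db."""
    return math.ceil((dr_db - 1.76) / 6.02)

print(ideal_dynamic_range_db(16))  # ~98.1 dB
print(bits_needed(80))             # 13 - so 16-bit already exceeds podcast needs
```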
Interesting - thanks for sharing your thoughts; that all makes sense. It does assume that everyone recording is setting their gain appropriately, though, and if I've learned anything, that is often not the case.
Can you do a test of 24-bit D/A conversion at low levels versus 16-bit D/A conversion at low levels? Throw some reverb tails into the testing as well.
Please remember: every time you run your mix through a compressor, you reduce your dynamic range, lifting what was at -96 dB right up into the audible range.
Just use 24bit and output 16bit if needed.
now what about 64 bit? my Nintendo says 64 bit on it
LOL "capturing mouse farts" your delivery there was good...
Haven't watched your channel in over a year maybe two, i just wanna tell you that you look great dude, no homo lol. Now if you haven't lost weight I'm gonna feel like an idiot but you def look like you have been taking care of yourself. Good stuff.
I record at 24/88 . It sounds great and files are not mega huge.
Oh, thanks for doing this video - maybe I won't have to explain it as often now! I'll just send them here... Love the title interstitials btw - couldn't make them any bigger? LOL
Sir, your reviews with explanations are top notch. But sir, can you please make a detailed review of the latest FIFINE SCI AMPLITANK INTERFACE, as it is a budget-friendly 16-bit interface?
Very helpful indeed, thanks Bandrew.
In almost all the examples, on my Bose soundbar through YouTube's 1080p compression, I could 100% tell the difference each time - though with the digital signals only on the middle- and high-dB sources. So for my case - recording my voice for uncut commentary, game audio, and my own synth music/SFX - except in the rare case where I use just a tone and it won't make a difference, I should still use 24-bit or higher to match the rest of the video/game/music/SFX.
But...what about 32 bit float?
6:52 Around there you said 29 dB, right? I could follow that based on the upper pitches of your voice. I'm also autistic, so I have really good pattern recognition and sensitive hearing. But after that, I think I was hearing sounds that could just have been me lip reading...
The 24-bit files take up 50% more storage space: a 100 MB 16-bit file will be 150 MB in 24-bit. That means your iCloud bill: $10/month in 16-bit, $15/month in 24-bit. Apple recommends 24-bit.
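The 50% figure falls straight out of bytes-per-sample. Here's the arithmetic as a tiny Python function (the rate, channel count and duration are just example values):

```python
def wav_size_mb(bit_depth, sample_rate=48000, channels=1, minutes=60):
    """Uncompressed PCM size: bit_depth/8 bytes per sample per channel."""
    bytes_total = bit_depth / 8 * sample_rate * channels * minutes * 60
    return bytes_total / 1e6

print(wav_size_mb(16))  # ~345.6 MB per mono hour at 48 kHz
print(wav_size_mb(24))  # ~518.4 MB - exactly 1.5x, i.e. the 50% bump above
```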
Very informative.
What level (in dB?) should foley, music and voice (voiceover) aim for so they remain clearly separated and audible AND "standard"?
Sorry if the question and terms are naive - I'm discovering sound for a personal project.
Ok so a little bit of a random question:
I have a lot of unwanted microphone noise coming from my jack inputs on my PC (coil whine). Could an external USB audio device help eliminate these noises coming from inside my PC?
One thing: I'm using a headset mic with a 3.5mm jack (stuff like the Scarlett 2i2 is only for professional microphones, not headset mics).
PLEASE HELP, for the love of god I've been spending weeks on trying to fix this
At 2:03 the sine wave in 24-bit sounds a lot more three-dimensional; it has more girth in the upper mids and would cut through a mix better than the 16-bit sine wave.
Smooth highs, not too much lows - which mic fits description?
I'm curious how you have your setup for video. Do you have a DSLR/mirrorless camera hooked up as a "webcam" and then record with the microphone separately? I realize this isn't a bit question, but I'm curious how your video quality is that good. Thanks!
I've been using 16-bit/44.1 for like 20 years now. I keep trying to change to 24-bit/48, but then Windows always bounces between the two and it's annoying. I'd prefer to be on 24-bit/48, though.
My ears thank you for the loud countdowns😅
Dear Bandrew, how do you, as a passionate Sennheiser user, like the Røde NTH-100 headphones?
Rainer. They sound fine, but the main reason I’m using them is someone told me they were really comfortable. I had them sitting on the shelf so I threw them on and they are pretty comfortable. They will never replace my HD650’s, and once I’m done recording, I throw the 650’s back on. I just wanted to use closed backs while recording because of the metronome bleed in a few reviews and I was starting to get annoyed by it.
@Bandrew Scott I understand. I had a pair of the Røde NTH-100, and they hold my personal record for breaking the fastest. That might be an isolated case, though. If I want closed-back headphones for recording, I use either the Beyerdynamic DT 1770 or the Sennheiser HD 300 Pro.
@@techmed-rainer Oh yeah. I’ve heard people talk about how they break. I’ve only had them on the stand for 3 days so far, so I’ll talk about it on the podcast if and when they break. If they do break, I’ll go back to the M50x’s which is what I grabbed because they were there before these. How do you like the DT1770s? Do they have the standard Beyer headphone sound where it’s really bright? My baby ears can’t handle that much high end. Haha.
Oh, and as far as my record for broken headphones goes, that belongs to the Shure SRH 940s. I wore them 3 times, and on the 4th time, as I was putting them on, the ear cup arm snapped right off - exactly as the reviews on Amazon had pointed out, but I wasn't convinced by them.
@@BandrewScott Oh yes, I heard about the issues with those Shures. I never had them, though. So I don't have any personal experience.
Add an inline pad for loud sources and turn up the preamp?
I like how the camera is focused on the mic flag
Thanks for making this comparison 😊
Some 24-bit interfaces (typically older ones) only have a dynamic range of about 100 dB, so by setting one to record at 16-bit you'll be losing very little. It's still better than using a 16-bit interface, because most of the converters they use fall short of the theoretical 96 dB dynamic range.
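You can turn a spec-sheet dynamic range back into "effective bits" by inverting the same ideal-converter formula - a quick sketch, assuming the usual DR ≈ 6.02·N + 1.76 dB idealization:

```python
import math

def effective_bits(dynamic_range_db):
    """Invert DR = 6.02*N + 1.76 to get the effective number of bits."""
    return (dynamic_range_db - 1.76) / 6.02

print(round(effective_bits(100), 1))  # ~16.3 - a "24-bit" box with 100 dB DR
print(round(effective_bits(96), 1))   # ~15.7 - why real 16-bit units fall short
```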
Very good insight. Thank you for sharing.
Great explanation, thank you!
Studer A800 dinosaur here... have you guys thought about not needing that insanely big headroom?
great video!
How about doing a 24-bit vs. 32-bit one of these?
Kind of feel like this is a decade or two behind. Yes, you can use 16-bit for most things, but these days there's just no reason to. The real question should be: do you need 32-bit?
32-bit is all the rage this year, but it comes with a lot of asterisks and caveats. Most microphones aren't even rated to use the full potential of 24-bit, so it's questionable whether 32-bit would actually help. I personally think bit depth is rarely the weakest link, but further testing is needed.
@MF Nickster In theory that's how it should work, but it's not perfect - like I mentioned, there are plenty of other caveats. For example, if you clip a microphone, no amount of bits can recover that data. And it's not widely supported.
So while we can easily say "get 24-bit when it's available," 32 feels like more of a gimmick.
Maybe it is a decade late, but this is a question I had, and I wanted to test it out and share it in case anyone else had the same question. I've realized I'm too dumb for the theory stuff; I'm just an application kind of person. That means I can read all about the theoretical benefits of X, Y, Z and try to interpret and regurgitate it, but I will screw it up. So instead, I want to hear the benefit, and demo the benefit, in case anyone else struggles with the theory like I do.
So yes, maybe it's outdated, but I think hearing the difference can help in comprehending the benefits of higher bit depths.
I'm an audiobook producer, and I hate those 16-bit Blue recording microphones. People keep recording far from the mic, adding room acoustics, at low levels - top that with noisy dithering from Audacity's export, and then we need to amplify by something like 30 dB... and there you go. I prefer people recording at 24-bit, end of story xD
Thank you for this. Even though my conclusion was not the same as yours, this video was invaluable, as I now know why I should be using 24-bit - not just "because bigger number better."
"if you are doing design and you care capturing mouse farts" => very good point
The higher quality always pays off, IMO - when you need that extra bit to work with, 16 just doesn't give it to you.
It's all about finding the settings that we need to get the job done. I always record 24-bit, but I can honestly say that I don't think once I would have noticed if I recorded my podcast in 16-bit.
Can you review 32-bit interfaces? Or make a comparison between 16-, 24- and 32-bit/32-bit float? Please?
16-bit was good enough for professionals for decades, and for semi-pro use it's certainly good enough. If you have the choice, go 24- or 32-bit - but don't think 16 wouldn't suffice.
I learned something today!
Are 24-bit and 32-bit both better bit depths?
My Focusrite sound card is 24-bit/192 kHz and the MOTU sound card is 32-bit - which is the better sound card? Please reply.