This Video is So Boring, You Probably Won't Even Make it Through

  • Published: 27 Nov 2024

Comments • 183

  • @johnsingleton8570
    @johnsingleton8570 1 year ago +157

    @audiohaze The reason 44.1kHz was picked is historical, an artifact of the PCM adapters being developed at the time. The PCM adapters found in (tape-based) video recorders were somewhat crude but functional. Different manufacturers fought to establish the standard, but ultimately PCM adapters that functioned in the following way won out: the adapter took 3 samples per line for 245 lines at 60Hz, so 3 x 245 x 60 = 44,100 = 44.1 kHz. This is also handy, of course, because it would easily allow the same PCM adapter to be used in an application combining 44.1 audio and 60Hz video at the time.

    • @johnsingleton8570
      @johnsingleton8570 1 year ago +19

      So in essence there is no good technical reason, just an artifact of the engineering of the time, implemented that way to keep the circuitry small. And of course all engineering is a trade-off with complexity. The only requirement is that the rate is more than 2x 20kHz.

    • @AfferbeckBeats
      @AfferbeckBeats 1 year ago +1

      I wonder if 50Hz regions were doing... 36.75kHz?

    • @johnsingleton8570
      @johnsingleton8570 1 year ago +10

      @@AfferbeckBeats In 50Hz video there are 37 lines of blanking, leaving 588 active lines per frame, which is 294 per field. So in a similar calculation you get: 50 x 294 x 3 = 44,100 Hz = 44.1 kHz

    • @henrikpetersson3463
      @henrikpetersson3463 1 year ago +1

      Exactly! Started to write that answer but scrolled down and saw yours. ☺

    • @AudioHaze
      @AudioHaze 1 year ago +16

      Amazing, thank you so much for sharing!
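
The line-count arithmetic in this thread is easy to sanity-check (a quick sketch in Python, just for the multiplication):

```python
# PCM-adapter arithmetic from the thread above.
# 60 Hz video: 3 samples per line x 245 usable lines per field.
ntsc_rate = 3 * 245 * 60
# 50 Hz video: 3 samples per line x 294 active lines per field.
pal_rate = 3 * 294 * 50

print(ntsc_rate)  # -> 44100
print(pal_rate)   # -> 44100
```

Both video standards land on exactly 44,100 samples per second, which is why the same adapter design worked in either region.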

  • @aliceduser6347
    @aliceduser6347 1 year ago +60

    I will go back to watching my paint dry now.

  • @matthewchanmh
    @matthewchanmh 1 year ago +41

    That's sound recording 101, and it's such a needed video for everyone, professionals included, to revise the basics.

  • @henrikpetersson3463
    @henrikpetersson3463 1 year ago +60

    Higher sample rates are useful for processing. Many plugins upsample the signal before processing because it delivers a better result. This is especially used to combat the aliasing that occurs when harmonic distortion is introduced.
    So it can make sense to record at a higher sample rate than your delivery format.
    Also, higher bit depths don't really give you more headroom in that sense (well, 32-bit float somewhat does). They only give you more dynamic levels. But that allows you to lower the input signal without raising the noise floor, thus giving you more headroom before clipping. If you record at 16 bits it's wise to keep the input level as high as possible without clipping to get the cleanest signal. 24 bits allows you to back it off a bit and still get a clean signal.
    32-bit float is a bit special as it goes beyond 0. You can still in theory clip the signal, but in practice you won't. 32-bit float converters are actually multiple ADCs in parallel with individual gain circuits, pretty much the same way HDR photography works. They sample the audio at different gain levels to deliver a signal with an extremely high dynamic range that is practically impossible to clip.

    • @AudioHaze
      @AudioHaze 1 year ago +8

      Appreciate the info and distinction! Really good stuff thank you Henrik :)

    • @fakerainier5302
      @fakerainier5302 1 year ago +5

      Higher sample rates like 88.2kHz and 96kHz are also really common in film post-production and sound design. The higher resolution gives you more options as you process the audio: since there is more information in your recorded audio, you can really stretch and warp your clips with significantly less artifacting.

  • @wiloghby
    @wiloghby 1 year ago +8

    Your videos are amazing. I hope you continue to grow the audience you deserve

  • @EdEditz
    @EdEditz 1 year ago +1

    LOL @ 6:09 Cat is trying to open the door in the background. That's so cool ^____^

  • @AJOrpheo
    @AJOrpheo 1 year ago +13

    Random note on sample rates: higher sample rates have lower latency at the cost of higher CPU usage. Also, though we can't hear above 20kHz, I believe the myth that higher rates sound better is a consequence of less aliasing in analog plugin emulations. So it's not necessarily that higher sample rates make it sound better; they make your plugins operate better because there's less aliasing. If you were mixing mostly with analog outboard gear and linear plugins, like some FabFilter plugins, there would be no difference sound-wise between 44.1 and 96kHz.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      Love this! Thank you for expanding the discussion on sample rates my friend

    • @gr500music6
      @gr500music6 1 year ago +1

      Agree. I sometimes find 96 kHz recordings of the same source less satisfying than 48 in many situations, probably because the computer is just struggling more. Call it computational distortion. And this is unnecessary, because the Nyquist theorem doesn't suggest that twice the highest frequency you need to reproduce is "good enough"; it states that nothing above that can offer any improvement.
      The above is so because we are counting waves as they come "at us", head on, such as when we sit in a chair at the beach and count waves per minute. This count is frequency, pure and simple, and all that matters is peaks and troughs. So, in terms of counting frequency, very high sample rates don't translate to a smoother picture. This is counter-intuitive, so it takes some wrapping of the head around. Sample rate is not part of trying to reproduce a wave as we picture it from "the side", such as it appears on a scope. It's weird, but to reproduce a sound perfectly, all we need to measure is frequency of peaks over time and amplitude over time.
      BTW, Ricky, IMHO the RE 20 continues to kill on your voice. Still my fave.

    • @AJOrpheo
      @AJOrpheo 1 year ago +3

      @@gr500music6 Yeah, no, not what I was saying at all. Functionally there is no difference in sound quality. This is getting a bit too deep. A computer struggling (which, btw, doesn't cause distortion unless it starts to screech and tear, which typically means you are killing your CPU with plugins) doesn't create any type of distortion, audio- or information-wise.
      What I am saying is that the aliasing is what people perceive as sounding bad, not the sample rate. You can't hear above 20kHz, period. EQ curves above 20 can be heard, but that's only because parts of the curve are audible. Recording at 96kHz will literally sound no different than 44.1kHz. It'll capture the audible frequencies and the non-linearities of your analog front end without aliasing, because aliasing is produced when a plugin is emulating stuff above Nyquist.
      So no, lol, sample rate will not affect tracking and is much more a consideration in mixing and mastering because of analog emulations and maybe some linear EQ cramping. You could make a small argument that interactions above 20kHz are audible, but I've never found those to make any significant impact on the sound.

    • @gr500music6
      @gr500music6 1 year ago

      @@AJOrpheo Thanks for the reply!

    • @Strange-Songs
      @Strange-Songs 1 year ago

      Higher sample rates DO sound better with DSD (Direct Stream Digital) recording for sure. It is a different method than the "CD quality" PCM method. It uses a sampling rate of about 2.8 million samples per second (2.8224 MHz) vs. 44.1k (44,100 samples per second) or 48k (48,000 samples per second), but it uses a 1-bit depth, not 16-bit or 24-bit. The sound is incredibly detailed. Listen to a well-mastered SACD and compare it to a standard CD of the same music, or watch YouTube videos with a live recording that switches back and forth: even through compressed YouTube audio, the difference is incredible. Who cares about Nyquist and "people can't hear above 20k anyway"? It sounds better WITHIN the normal range of human hearing! (DSD naysayers have got to be functionally tone deaf... check it out for yourself!) "CD quality" is fine for the car and earbuds, but in a good listening environment it's DSD for the win!

  • @fuzzylogickben
    @fuzzylogickben 1 year ago +1

    44.1kHz was chosen because it fit well with the scan rates etc. used in NTSC television. They knew it had to be over 44kHz to get the filter design to work well and cheaply enough with existing methods. The extra 0.1 was about making it play nicely with television standards in the US.

  • @eliotball655
    @eliotball655 1 year ago +2

    Nice job Audio Haze! I made it through and my mixes don't suck anymore

  • @chrzanowychomik
    @chrzanowychomik 1 year ago +1

    I couldn't focus on the topic when I saw what a great solo the trumpet player at 9:13 is throwing at me. Nothing suspicious at all. 😂😂😂

  • @PercyPanleo
    @PercyPanleo 1 year ago +6

    Something about phase: if one channel of a stereo recording is polarity-inverted (one channel canceling the other), you end up with a very strange sound. Basically it'll sound like the noise is coming from inside your head, and it will sound completely different between speakers and headphones. Also, single speakers that mix both channels down into one, like smart speakers, won't handle it well.
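
A polarity-inverted stereo pair like this cancels completely when folded down to mono, which is easy to demonstrate; a minimal sketch in Python (the 440 Hz test tone is an arbitrary choice):

```python
import math

fs = 44100  # sample rate in Hz
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(1000)]

left = tone
right = [-s for s in tone]  # same signal, polarity inverted

# A smart speaker (or any mono fold-down) sums the two channels.
mono = [(l + r) / 2 for l, r in zip(left, right)]
print(max(abs(s) for s in mono))  # -> 0.0, total cancellation
```

In stereo the ear still hears both channels (the "inside your head" effect); only the mono sum goes to exactly zero.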

  • @steverok67
    @steverok67 1 year ago +1

    I'm an electrical engineer, so this was cool to listen to. Good job. The Nyquist theorem states that, by sampling a signal at a rate at least two times its maximum frequency or bandwidth B, the original analog signal can be re-created from the samples. However, to achieve this lower limit, an ideal, brick-wall lowpass filter would be required, which is not realizable. Therefore, by sampling at a rate a little higher than 2B, we relax the lowpass reconstruction filter requirement into something that is practically realizable. I once applied this theory to relationships and came up with "The Sampling Theorem for Chicks". Good times.

  • @guilon219
    @guilon219 1 year ago +1

    What a great video, really. Keep being one of the best music channels of yt.

  • @WalkinonSunshyne
    @WalkinonSunshyne 1 year ago +3

    Good video. I think you should look at aliasing when you talk about sample rate. It's really important with digital synths, digital saturation, digital clipping and digital compression. I guess the other thing is large pitch changes. 96kHz will do all four of these things better than a low sample rate. Internal oversampling isn't always an option. We can't afford to always work at 96kHz, but in most situations it will sound better.
    Bit depth: 32-bit is always desirable. Accidentally clip a performance? Well, not really in 32-bit; you can just turn it down. The noise floor is lower again.
    Playback, on the other hand, is a different story. For that, 44.1 and 24-bit is significantly greater quality than everything on Spotify, and more than adequate.
    But I think the phase and dB points were interesting and adequate. Good points for the majority of listeners.

  • @JamesLewis
    @JamesLewis 1 year ago +3

    @audiohaze Not boring at all, but I wanted to add that the reason for increasing the bit depth, and thereby the dynamic range, is not really, as you mentioned, to increase the high end, but to reduce distortion of low-level sounds. Since human perception of volume is on a logarithmic scale (the reason for using dB, of course), very quiet sounds are necessarily very low level. So while you have 16 bits at the top of the volume scale, you may only have 10 or 12 bits for quiet sounds, which increases distortion greatly. Using 24- or 32-bit samples thus leaves more resolution on the table for quiet sounds.
    Also, I believe increased sample rate does offer advantages for digital processing, leading to less distortion being created when pure digital effects and mixing are used.

  • @bolotskih675
    @bolotskih675 1 year ago +1

    When you reconstruct your signal from digital back to analog you get multiple spectral images of the original signal, with their center frequencies spaced apart by the sample rate. So if you sample your signal at only double its highest frequency, these copies end up back to back. You must then filter out all the copies and keep only the first one, your reconstructed signal. BUT you can't filter perfectly; you will get aliasing in the high frequencies. That's why you need some extra space and a somewhat higher sample rate.

  • @orfious
    @orfious 1 year ago

    Love the anti-clickbait title... I did make it through to the end, although I was hoping to find out some ways in which misunderstandings of these concepts relate to mixes sounding bad.

  • @EJohnDanton
    @EJohnDanton 1 year ago

    Making the imponderable ponderable! Really well done.
    Over the years (I'm 57) I've picked much of this up via trial and so much error. This will give people a head start if they want to delve deeper.
    I had to learn phase without the digital benefit of being able to move the waves with my bro's 2 track. Ugh!

  • @wesleybrehm9386
    @wesleybrehm9386 6 months ago

    I'm a musician, but my job is as a sound designer. Most film and TV shows in the US are recorded at 24 or 23.976 frames per second. This pairs very nicely with 48kHz. The standard spec I am asked to provide sound for film and TV is 48kHz/24-bit. I actually record most of my SFX at 96kHz/32-bit, because when using a lot of processing, the tools we use can more effectively manipulate the higher sample rate/bit depth and then convert later. Basically, an explosion recorded at 96kHz/32-bit FP will sound better when processed and played back at 48kHz/32-bit FP than a signal originally recorded at 48kHz/32-bit FP. Human voices usually sound horrible at 96kHz. I would argue most musical instruments sound worse at 96kHz as well.

  • @apolstudios
    @apolstudios 1 year ago

    LOVE IT! But still, I was waiting for more mixing tips :P (I studied audio engineering in Vancouver 3 years ago but I'm still fighting with my mixes... :( ) Great channel btw :)

  • @---pp7tq
    @---pp7tq 1 year ago +1

    I think it's good to keep in mind that if you experiment with phase on purpose, e.g. widening bass with stereo plugins that do such things, then when the signal is summed to mono, as music is played in clubs for example, it can cancel what you have done, similar to what you showed, and the bass suddenly disappears in mono.

  • @jprnn
    @jprnn 8 months ago

    Coming from a film sound design perspective: we sometimes record sound (for effects use) at 96kHz. I've found that material recorded that way responds better to time stretching and pitch shifting than 48kHz (the standard). This is because of the better resolution, like you mentioned. And I'm not an expert in this area, but I've heard that classical music tends to be recorded at 96kHz.

  • @jonashellborg8320
    @jonashellborg8320 1 year ago

    The phase thing I knew; it's happened to me way too many times. I actually prefer using a single microphone when I can. It's easier to get a full sound that way than with 5 microphones when you're not sure about phase cancellation. And this wasn't boring! The most boring music exercise I've done is listening to white noise in frequency ranges to train my ear to hear different zones of pitch. Very useful though, much like this video.

  • @RocknRollkat
    @RocknRollkat 1 year ago

    This presentation points out why I recommend that everyone learning recording start out with ONE microphone, LEARN how to record an entire group that way, then graduate to two mics.
    Most people these days start out with 50+ channels 'hot' on a DAW and wonder where the problems are.
    That's how I started learning in 1961, one mic, one mono tape machine, one basement and friends that could actually PLAY music.
    Best regards,
    Bill P.
    Studio 'A' nonlinear

  • @TaylorEnloeMusic
    @TaylorEnloeMusic 9 months ago

    Love your videos dude, always helpful, thought provoking, and inspiring

  • @c1ph3rpunk
    @c1ph3rpunk 1 year ago +2

    Meh, I do understand phase. I’m at the phase where I like watching boring videos. Hah. He thinks he could punk me. The part I didn’t get were the sign waves, yield? Stop? Speed limit? I’ve never seen them wave at me.
    Oh, I’m also at the phase where I just tell bad jokes to waste time between meetings.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      I like this phase. This is a good phase.

  • @TheGourmetRabbit
    @TheGourmetRabbit 1 year ago

    Thank you for posting mate 🤙

  • @ilyouslug
    @ilyouslug 1 year ago

    your videos are very calming to me for some reason i can’t put my finger on it tho

  • @Jakepf
    @Jakepf 1 year ago

    Oh oh! I also wish you talked about the buffer size. Even if it wasn't super relevant, touching on it woulda been interesting.

  • @greenman9123
    @greenman9123 1 year ago

    I like your channel man, really informative. Have a good day!

  • @RocknRollkat
    @RocknRollkat 1 year ago

    Hello,
    Nice presentation, let me clear up the dB haze for you.
    A decibel is the smallest CHANGE in sound amplitude that can be detected by the average human being in either direction (louder or softer).
    Example: a signal raised to 78 dB from 77 dB, or lowered to 76 dB from 77 dB, is the minimum detectable change in loudness.
    Which explains why I crack up when these youtube cowboys talk about increasing or decreasing a filter response curve by 1/10 dB.
    By definition, the change is INAUDIBLE!
    Best regards,
    Bill P.

  • @randomxiaomain1269
    @randomxiaomain1269 1 year ago

    Great videooo!!! Super interesting to hear about and i love learning about audio

  • @kevintracy9969
    @kevintracy9969 11 months ago

    96k was a hot debate when I was shopping for my first interface. In 2002.

  • @bevans224
    @bevans224 10 months ago

    I've been told that although you can't hear the content above 20kHz in a 96kHz recording, you can feel it, if you have the proper equipment and all the proper systems, etc. But at the moment that's still quite an investment. I think monitors that can reproduce that range still cost multiple thousands.
    I personally don't think 96kHz will catch on unless the hardware for it gets cheaper (to my understanding, that also includes wires, monitors, systems, etc.).

  • @topa1798
    @topa1798 1 year ago

    6:10 the cat wants to leave, I stay tuned

  • @DemilOfficial
    @DemilOfficial 1 year ago

    The term decibel has always been a bit confusing to me. Thanks for clarifying!

  • @zetdotpi
    @zetdotpi 1 year ago +1

    Really good video. However, I found the bit-depth explanation a bit confusing. One way to look at bit depth, as you say, is dynamic range; on that I agree. But I prefer to view bit depth more as precision: with higher bit depth you can measure smaller differences in signal amplitude. So, for example, when you have an extremely low-volume recording at 16-bit depth, after amplification (normalization) you will get a lot of loud digital noise, while at 24-bit there will be much less of such noise. TL;DR: higher bit depth is good for processing audio, at the cost of bigger files.
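
The precision view can be made concrete by quantizing a very quiet signal at 16 and 24 bits and comparing signal-to-noise ratios; a rough sketch (the 440 Hz tone and the -60 dBFS level are made-up test values):

```python
import math

def rms(v):
    return math.sqrt(sum(s * s for s in v) / len(v))

def quantize(x, bits):
    # Snap each sample to the nearest step on a signed integer grid.
    scale = 2 ** (bits - 1)
    return [round(s * scale) / scale for s in x]

fs = 48000
amp = 0.001  # a very quiet recording, roughly -60 dBFS
signal = [amp * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]

snr = {}
for bits in (16, 24):
    noise = [a - b for a, b in zip(signal, quantize(signal, bits))]
    snr[bits] = 20 * math.log10(rms(signal) / rms(noise))
    print(f"{bits}-bit SNR of the quiet signal: {snr[bits]:.1f} dB")
```

The quiet 16-bit signal is left with only a few effective bits of resolution, while the 24-bit version keeps roughly 48 dB more SNR, which is the comment's point about normalizing quiet recordings.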

  • @Jakepf
    @Jakepf 1 year ago

    Oh shit 2:00 This is good info for making ASMR audio mixes as well.

  • @michaelartistyoutube
    @michaelartistyoutube 1 year ago +1

    This is something I needed to watch. Boring? Eh, I don't know, I quite enjoyed it. Explained systematically and clearly. Thank you.

  • @brandonharkins6900
    @brandonharkins6900 1 year ago

    I never noticed that MS-20 back there, I'd love to see a video with that!

  • @michaeljam4818
    @michaeljam4818 1 year ago

    One of the best vids on youtube ❤️(subbed)

  • @RecordingStudioLoser
    @RecordingStudioLoser 1 year ago

    Jeez dude. Your videos are so good.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      Ey thanks dude!! Love your vids as well actually :) I’m a sub!

  • @siggiarabi
    @siggiarabi 1 year ago +1

    0 dB SPL isn't really no sound. 0 dB is the limit of our hearing at 1kHz; some people can even hear a good bit into the negatives at around 3-5kHz.

  • @Shawnee845
    @Shawnee845 1 year ago

    When you record to vinyl you can't cut further than ten thousand kilohertz. So that should be the golden number

  • @isakadzemusicc
    @isakadzemusicc 1 year ago

    To expand on the topic of kilohertz: if you are writing music for CD or for film, it is better to use 48 kHz, because when you need to convert to 44.1 kHz the loss will be small (divide 48 by 44.1 and you will see the ratio). But if you are writing music at 44.1 kHz, it is better to stay at that frequency and not change it.

  • @ethanparr
    @ethanparr 1 year ago

    This was super interesting. Very well explained as well. Thanks!

    • @AudioHaze
      @AudioHaze 1 year ago

      Glad I could help thanks Ethan!

  • @valley-artifact
    @valley-artifact 1 year ago

    Higher bit depth actually has no real impact on dynamic range in practice; it just determines how loud your noise floor is, and 16 bits or more will get you a noise floor that's entirely inaudible unless you're playing it RIDICULOUSLY loud or are dithering multiple times (at 24-bit even that doesn't matter). Technically, higher bit depths do increase your headroom relative to the noise floor, but that metric is basically useless.

  • @JohnPaquette
    @JohnPaquette 1 year ago

    10 decibels (or 1 bel, actually) means a ratio of 10:1 in a value. When you double a value, that's the same as increasing it by 3.0103 dB. Doubling the amplitude of a sine wave *quadruples* its power, so its *power* goes up by 6.0206dB (it's doubled twice). So the dynamic range (in power) of 16-bit audio is 6.0206*16, or 96.3296dB.
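
The arithmetic in this comment checks out; a quick sketch:

```python
import math

# Doubling amplitude quadruples power: 10*log10(4) == 20*log10(2).
db_per_bit = 20 * math.log10(2)

print(f"per bit: {db_per_bit:.4f} dB")       # -> 6.0206
print(f"16-bit : {16 * db_per_bit:.4f} dB")  # -> 96.3296
print(f"24-bit : {24 * db_per_bit:.4f} dB")  # -> 144.4944
```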

  • @nskeip
    @nskeip 1 year ago

    Coming into mixing from math, I pressed like just for the mention of the Shannon-Nyquist-Kotelnikov theorem :))

  • @jezmez68
    @jezmez68 1 year ago +1

    I would be interested in seeing a video with this phase cancellation on a real track. I mean, I'm not sure I'm having those issues with GarageBand, but it would be cool to see if I'm messing something up, and of course, an example of how to fix it. I'm a noob when it comes to mixing, and most of the time I don't even know where to start.

  • @mareklesniak8768
    @mareklesniak8768 1 year ago

    What I've heard is that 88.2/96kHz and above benefit from moving any quantization noise above the hearing threshold. My practical take is that higher frequencies (think percussion/hi-hat) sound worse at 44.1 vs 96kHz. Not sure about 48kHz though (maybe it's good enough?).

  • @galefraney
    @galefraney 10 months ago

    Fantastic video!!

  • @wildhorsemusic1111
    @wildhorsemusic1111 1 year ago

    Makes me really happy I record one channel directly from my keyboard for all my music lol

  • @Stephen-zx4uf
    @Stephen-zx4uf 1 year ago

    How many times does it take to remember the Phase lesson?
    Phase CANNIBALIZATION!
    Perfect! That phase phrase might stick. Thanks!

  • @GruffBillyGoat
    @GruffBillyGoat 1 year ago +9

    One note about bit depth: It’s true that bit depth impacts dynamic range and that dynamic range is a product of bit depth, but the “24 bit” in “24 bit, 48 kHz” isn’t talking about dynamic range as such. What bit depth actually means is how much data we have available to describe each of those 48,000 samples. I.e. each and every one of those samples contains 24 bits of information.
    Then to further confuse things, “8 bit music” doesn’t really have anything to do with bit depth. It’s just an aesthetic inspired by the sounds from 8-bit game consoles and computers like the Nintendo Entertainment System or Commodore 64. And the “8 bit” here refers to the capabilities of the CPU, so we’re getting into computer science rather than audio engineering 😅 Those machines couldn’t really do sampled or recorded audio at all; the sounds they made were generated by synthesis in real time.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      Thanks for the distinction! And yes regarding 8bit I was referring to the true 8bit compositions from retro consoles rather than the modern digital recreations :) absolutely 8bit music these days is just a style, but back in the day it wasn't!

    • @Slurkz
      @Slurkz 1 year ago +1

      Thanks, Billy. I was about to add a comment like this, but you did a better job!

  • @Andwooooo
    @Andwooooo 1 year ago

    It's amazing how he used reverse psychology to make us actually want to watch the whole video despite how boring it is

  • @TUKMAK
    @TUKMAK 1 year ago

    It all makes sense now

  • @NemouseJurado
    @NemouseJurado 1 year ago

    SUPERB MATERIAL! Thank you!

  • @AurinOgen
    @AurinOgen 1 year ago

    You're kinda the Rhett Shull of audio and I like it

  • @morten1
    @morten1 1 year ago

    I left the kHz crazy talk a long time ago and settled on 44.1 forever.
    Bit depth seems far more important. I leave it at 24.

  • @ChrisSchaffer
    @ChrisSchaffer 1 year ago

    I mean yes, these are super boring reasons.... but the Nyquist Theorem excites me deep down in my basal ganglia

    • @AudioHaze
      @AudioHaze 1 year ago

      It is a good theorem :))

  • @stevenewtube
    @stevenewtube 10 months ago

    Pretty cool. One question: do you use VU meters?

  • @user-lc5jl6fg5b
    @user-lc5jl6fg5b 1 year ago

    You tricked us all into watching this 😅 loving the vids

  • @JohnPaquette
    @JohnPaquette 1 year ago

    Why 44.1kHz sampling rather than 40kHz? Because proper recording of digital audio requires a low-pass filter to prevent *aliasing*. Without the filter, a (supersonic) 43.1kHz tone would end up sounding (audibly!) like a 1kHz tone. The supersonic content needs to be filtered out of the audio before it is sampled.
    Filters cause phase shift. The higher the frequency of the low-pass filter, the less it will affect the phase of the audible frequencies. So 44.1 is better than 40.
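
The 43.1 kHz example can be verified numerically: sampled at 44.1 kHz, that tone yields exactly the same sample values as a polarity-flipped 1 kHz tone. A sketch:

```python
import math

fs = 44100             # sample rate
f_high = 43100         # supersonic tone, above the 22050 Hz Nyquist limit
f_alias = fs - f_high  # 1000 Hz, where the tone folds down to

high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(441)]
low = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(441)]

# sin(2*pi*(fs - f)*n/fs) == -sin(2*pi*f*n/fs) at integer n, so the
# sampled 43.1 kHz tone is indistinguishable from an inverted 1 kHz tone.
print(max(abs(h + l) for h, l in zip(high, low)))  # tiny, float rounding only
```

Once the samples are identical, no downstream processing can tell the two tones apart, which is why the filter must come before the converter.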

  • @Jakepf
    @Jakepf 1 year ago

    Wait bro, I don't recall but do you have a video on how to record vocals?

  • @paulmccabe2966
    @paulmccabe2966 1 year ago

    Not boring, very interesting

  • @ridingdriving
    @ridingdriving 1 year ago

    Not an audiophile, nor can I afford to be one, but I enjoyed that somehow... maybe the cat contributed a bit. Thanks, thumbs up

  • @RegebroRepairs
    @RegebroRepairs 1 year ago

    Not boring, and also, I didn't make it through because I already knew it. 😀 And no, that's not why my mixes suck at all. I think I don't really understand how to use compressors on a mix, and also, my room sucks so anything acoustic sounds like shit. Or perhaps it's my mics.

  • @vacation_generation
    @vacation_generation 1 year ago

    Brilliant video - love the humour (apologies humor....probably 🙂)

  • @MrRzk600
    @MrRzk600 1 year ago +1

    I play electric guitar. When I use the 44.1 rate I hear some dirt in the high end; it goes away only using 96.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      I would challenge that this may be in your head. That said it seems as though sample rate may have an effect on latency? Maybe this is what you're experiencing?

  • @necroticpoison
    @necroticpoison 1 year ago

    I think depth doesn't matter outside of mixing/mastering and SR is just for oversampling or whatever it is. Going from 320 to 1,411 does seem like quite a jump though.

  • @bmoklsc
    @bmoklsc 1 year ago

    Dumb question about phase issues: how do you actually check it? Nudge one recording a smidge and listen to hear if it gets louder or softer?

    • @shahiiran
      @shahiiran 1 year ago

      There is a free plugin, SPAN by Voxengo, that you can use to check whether it's in phase or not; the meter is on the bottom right. If you do nudge it a little, it sounds like you're "double tracking" (the proper term is duplicating the track), but it will be an issue when it is summed to mono. Also it'll be +3dB louder.
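
Under the hood, a correlation meter like the one described is a zero-lag correlation measurement; a toy sketch in Python (the 200 Hz test signal is made up for illustration):

```python
import math

def correlation(a, b):
    # Normalized correlation at zero lag: near +1 = in phase,
    # near -1 = polarity flipped, near 0 = unrelated signals.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

fs = 44100
mic_a = [math.sin(2 * math.pi * 200 * n / fs) for n in range(fs // 10)]
mic_b = [-s for s in mic_a]  # same take captured with inverted polarity

print(correlation(mic_a, mic_a))  # ~ +1.0
print(correlation(mic_a, mic_b))  # ~ -1.0
```

A reading that sits near -1 means the pair will largely cancel when summed to mono.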

  • @nicholasbessette2001
    @nicholasbessette2001 1 year ago

    I like to record voice-over at 96kHz so that if I need to use a pitch shifter for voice effects I don't degrade the source too much when I slow it down, but that's just for voice-effect post-production.

  • @zwsh89
    @zwsh89 1 year ago

    The title made me think “You’re so bored, you probably think this videos about you”

  • @germanboza
    @germanboza 1 year ago

    I watched until the end hoping to find out why my mixes suck. I am an electronic engineer and I know all these concepts really well, but still, my mixes suck.

    • @AudioHaze
      @AudioHaze 1 year ago +2

      I’m sorry man :( probably not as bad as you think though! We’re our own greatest critic

  • @BulletproofGroove
    @BulletproofGroove 1 year ago

    AUDIOHAZE, Would you buy a Shure MV7X or a Shure SM57 as a "1 mic for everything" type of deal?

  • @suk1
    @suk1 4 months ago

    thanks for this

  • @subtonic24
    @subtonic24 1 year ago +1

    I've been using 88.2kHz sample rate lately for the higher resolution and because it is mathematically double 44.1. I hear very subtle differences between 88.2 and 44.1 but I have no idea if I am actually hearing it or if I am just telling myself there is a difference lol.

    • @AudioHaze
      @AudioHaze 1 year ago +1

      I would challenge that you're hearing distinct differences but who knows!

    • @Stephen-zx4uf
      @Stephen-zx4uf 1 year ago

      Thinking about the question in reverse: could you detect 22k? 11k? What about the choices made for an MP3 of a quick recording? When would 320 kbps be used, and when might 128 kbps be just fine?
      Factors for considering the use of higher sampling:
      Quality of the source
      Quality of gear.. mic, cables, pres, etc
      Hearing ability (which changes over time and frequency)
      Disk space
      And with a bit of humor, it may depend on whether you wish to also “taste” the ice cream slurp… gotta say the new Raspberry 192k interface sounds delicious ;)

  • @Lillalolmc
    @Lillalolmc 1 year ago +1

    Wait, why wouldn't moving the other mic farther away fix the phase?

    • @AudioHaze
      @AudioHaze 1 year ago

      It won't necessarily! Check phase beforehand; if the mics are out of phase, adjust from there, listen to the signals again, and repeat :)

  • @ahanna8984
    @ahanna8984 1 year ago

    not me sampling at 96 kHz for the birds and nature to listen to my music loollll

  • @tomlevitt4133
    @tomlevitt4133 1 year ago

    What happens if I record a vocal take, copy and paste it onto a new track so there are two of the same take, pan one left and one right, and then slightly nudge one take? It won't phase, right? And it saves me doing two takes?

  • @jimmichaels5058
    @jimmichaels5058 1 year ago

    Zero dB SPL is NOT no sound! Similarly, zero degrees Fahrenheit (or Celsius) is not no temperature. Zero dB SPL is very quiet, near the threshold of human hearing (Your Mileage May Vary). There is an anechoic room with a background noise level of -20.6 dBA. Actual silence would be minus infinity dB SPL or dBA.

  • @LordTheeProducer
    @LordTheeProducer 1 year ago

    I’m fw your channel

  • @alexlechanteur3606
    @alexlechanteur3606 1 year ago

    Nice explanation. Next time someone asks me to explain sample rate to them I'll just send this XD, it explains it better than me XD. That dB stuff is still confusing for me even after all this time though 😵‍💫🤣

    • @AudioHaze
      @AudioHaze 1 year ago

      Hahaha well thanks for sending it to them! Yeah decibels will never not be confusing

  • @saikousocial
    @saikousocial 1 year ago

    You underestimate my power!

  • @kevinacres1699
    @kevinacres1699 1 year ago +1

    Bingo! Second video I've watched of your channel. I'm returning, often. Oh, I subbed.

  • @ZndBornMixedIt
    @ZndBornMixedIt 1 year ago

    How do we get this video to as many people as possible?!

    • @AudioHaze
      @AudioHaze  1 year ago

      Liking and subscribing I guess?? lol

  • @resound7
    @resound7 1 year ago

    Not boring at all!

  • @azzydog
    @azzydog 1 year ago

    Nice video and thanks for that explanation! On the question of which sample rate to choose, I learned the following: most video productions use a sample rate of 48 kHz, and most audio productions, like a CD, use 44.1 kHz. So it's not a question of quality but a question of which format you're producing for. Does anyone know if this statement is still true or relevant?

    • @AudioHaze
      @AudioHaze  1 year ago

      Never heard this one! I do know CDs are 44.1kHz but I've never really known the reason?

    • @azzydog
      @azzydog 1 year ago

      @@AudioHaze I heard these were arbitrary choices for standards like you said in your video.

  • @fluphybunny930
    @fluphybunny930 1 year ago

    Fell asleep. Still didn't learn anything.
    Could you do this with a cat or other cute furry animal?

    • @AudioHaze
      @AudioHaze  1 year ago

      Hahahaha will make a mental note for next time

  • @ShinilPayamal
    @ShinilPayamal 1 year ago

    Thanks for the video. Really interesting, wasn't boring at all.😊
    P.S.: I use a Rode VideoMicro hooked into my Zoom H1n to record podcast episodes. The audio sounds 'loud' enough when I play it on my H1n but not loud enough when I open it in Audacity. Not sure why this happens. Sorry for being a bit off topic, but I would be grateful if you could suggest a solution.
    P.P.S.: I do 'Amplify' the sound but am still not very satisfied with the final result.

    • @AudioHaze
      @AudioHaze  1 year ago +1

      It may be a product of the volume levels on your Zoom? If you're cranking the monitor volume to hear it louder on the Zoom and not actually adjusting the input signal, then it may appear louder when it's not :)

    • @ShinilPayamal
      @ShinilPayamal 1 year ago

      @@AudioHaze Thanks for the reply. Yes, I thought so too. The volume level on my Zoom H1n is 80.

    • @---pp7tq
      @---pp7tq 1 year ago +1

      That's generally not recommended, but you can capture the sound from the headphone jack of the Zoom if it has one, using an external interface or even the line-in on your computer if it has one

    • @ShinilPayamal
      @ShinilPayamal 1 year ago

      @@---pp7tq Thanks for the reply.

  • @orlandonieves8723
    @orlandonieves8723 1 year ago

    Very helpful

  • @latestcoder
    @latestcoder 1 year ago

    thanks

  • @Bedroomstew
    @Bedroomstew 1 year ago

    What about flipping the phase? Like with drum recording: when a snare is recorded with a top and bottom mic, the bottom mic sometimes needs its phase flipped, or low end and/or body is lost. Any notes on this compared to other instrument applications?

    • @AudioHaze
      @AudioHaze  1 year ago +1

      HEY BUDDY. Well in this scenario, the bottom and top snare mics are still technically two different distances from the sound source, and the top mic is likely much closer. So the reason the bottom gets flipped is to better approximate those "peaks and troughs" as they appear on the top mic. You could achieve the same effect by simply nudging the snare bottom mic a millisecond or two on the timeline in the DAW to align them better, rather than flipping the phase in an EQ or something :)

    • @Bedroomstew
      @Bedroomstew 1 year ago

      @@AudioHaze ah! This was super helpful, thanks!
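The flip-versus-nudge point in this thread can be illustrated numerically. This sketch uses assumed toy values (a pure 1 kHz tone at a 48 kHz sample rate, delayed by 24 samples, which is exactly half a cycle), not real snare mics: the delayed copy cancels the original, while flipping its polarity restores the summed level.

```python
import math

SR = 48000    # assumed sample rate
FREQ = 1000   # toy "snare" tone: 1 kHz, so one cycle is 48 samples
N = 4800      # 100 ms of audio

def tone(delay_samples=0, invert=False):
    """A 1 kHz sine, optionally delayed and/or polarity-flipped."""
    sign = -1.0 if invert else 1.0
    return [sign * math.sin(2 * math.pi * FREQ * (n - delay_samples) / SR)
            for n in range(N)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

top = tone()
bottom_late = tone(delay_samples=24)                  # arrives half a cycle late
bottom_flipped = tone(delay_samples=24, invert=True)  # same, polarity flipped

summed_out_of_phase = rms([a + b for a, b in zip(top, bottom_late)])
summed_after_flip = rms([a + b for a, b in zip(top, bottom_flipped)])

print(summed_out_of_phase)  # ~0.0: near-total cancellation at this frequency
print(summed_after_flip)    # ~1.414: full reinforcement (sqrt(2))
```

Nudging `bottom_late` back by those 24 samples (half a millisecond here) would give the same result as the flip, which is the DAW-timeline trick described above.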

  • @imlskr
    @imlskr 1 year ago

    DBW sounds kinda freaky ngl 😳

  • @foxhoundz07
    @foxhoundz07 8 months ago

    No, this was not boring!

  • @luciusblackmail8129
    @luciusblackmail8129 1 year ago

    What I did not understand is how you check phase and how you adjust it. Probably I am just too stupid to get it.

    • @AfferbeckBeats
      @AfferbeckBeats 1 year ago +2

      The most common time I'm checking and adjusting phase, and definitely the easiest to see and hear, is when I'm layering different kick drums in a DAW. When they're out of phase, they will be much quieter and sound much thinner, with far less low end. So you zoom in and drag one around until some of the bigger peaks and valleys match. Playing them again, they will be much louder and have much fuller low end. Sometimes that actually sounds worse, though, and you might prefer the thinner sound, but you usually don't.
      The quickest and easiest way to check phase, at least in Ableton (though I'm sure other DAWs are similar), is to throw the phase-invert Utility effect on one of the tracks in question and compare the master level with it on and off.

    • @AudioHaze
      @AudioHaze  1 year ago +1

      It's definitely a skill you learn! You'll learn to hear when something lacks body and low end; eventually you'll start to recognize what phase issues sound like. It's a pretty characteristic sound. Maybe try taking a project you have and doing what I showed in this video? Take a signal recorded by two microphones, nudge one slightly over, and analyze the results. You'll start to pick up on that distinct tone.
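The polarity-invert null test mentioned earlier in this thread can be simulated the same way. This sketch uses two identical 100 Hz tones as stand-in "tracks" (assumed toy values): summed normally they reinforce, and with a polarity invert on one of them they null completely, which is exactly the level drop the trick listens for.

```python
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Two identical stand-in "tracks": a 100 Hz tone at 48 kHz, 100 ms long.
track_a = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(4800)]
track_b = list(track_a)

# Summed as-is, the in-phase tracks reinforce each other:
level_normal = rms([a + b for a, b in zip(track_a, track_b)])

# With a polarity invert on track_b, in-phase tracks null out.
# A big drop in master level with invert engaged means the tracks were in phase:
level_inverted = rms([a - b for a, b in zip(track_a, track_b)])

print(level_normal)    # ~1.414 (double the single-track RMS of ~0.707)
print(level_inverted)  # 0.0 -- a perfect null
```

Real recordings never null this perfectly, but the principle is the same: the closer to silence the inverted sum gets, the more in phase the two tracks are.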

  • @NoCoverCharge
    @NoCoverCharge 1 year ago

    Nice synth

  • @bayleecannon9499
    @bayleecannon9499 1 year ago

    you were right i’m sorry

    • @AudioHaze
      @AudioHaze  1 year ago

      HAHA well hey thanks for trying lol

  • @bolotskih675
    @bolotskih675 1 year ago

    Bit depth is NOT about clipping.