What SAMPLE RATE Should You Record At? | Why HIGHER Can Be WORSE!

  • Published: 1 Oct 2024
  • Why recording at 96/192kHz can sound WORSE than recording at 44.1/48kHz, and why audio sample rate bears no relationship to frame rate in video. We bust some myths in an easy-to-digest video that explains the science behind sample rates, aliasing, and bit depth - as well as Mark playing with his new 55" iPad Pro!
    Many thanks to The Goodbye Look for permission to use their track - check em out!
    www.thegoodbye...
    If you want a deeper dive into the science, check out this awesome video from plugin developers FabFilter:
    • Samplerates: the highe...
    Download the test tones at:
    www.presentday...
    Check out our FREE test master service at:
    www.presentday...
    Join our new discord server at:
    www.presentday...
    To receive a 7% discount on a DistroKid subscription (and earn us a tiny commission), increase your chances of a higher ranking on Apple Music and Spotify playlists (assuming Mark has mastered your tracks, that is!), and help the channel, use this link:
    distrokid.com/...
    If you would like to send us new and interesting products for review, or are interested in a collaboration, please email us at:
    info@presentdayproduction.com
    If you enjoy our content and would like to make a small PayPal donation to the channel (it gets pretty expensive pretty quickly to make these videos!), then we would be eternally grateful, and give you a shout-out in the next video!
    www.cosmic-aud...
    If you'd like to use Epidemic Sound's extensive library of well-recorded music as a fantastic learning tool, as well as being able to use it in your own content for YouTube, Facebook or Instagram, follow this link (We earn a small commission which supports the channel):
    share.epidemics...
    Alternatively, if you'd like to try out a personal Epidemic Sound subscription at a reduced price, follow this link:
    share.epidemics...
    We are also now Waves Ambassadors and we will be bringing some epic content alongside Waves!
    Please follow this link for our PresentDayProduction partnership with Waves plugins (We earn a small commission which supports the channel):
    waves.alzt.net/JrqZ47

Comments • 1K

  • @Producelikeapro
    @Producelikeapro 4 года назад +236

    Another fantastic video Mark! Marvellous work!

    • @PresentDayProduction
      @PresentDayProduction  4 года назад +13

      Thank you my friend, much respect!

    • @iainparamor1482
      @iainparamor1482 3 года назад +7

      Warren are you planning to also get a robot in your studio that calls you names? :)

    • @Producelikeapro
      @Producelikeapro 3 года назад +22

      @@iainparamor1482 Eric does that for free!

    • @ximre2
      @ximre2 3 года назад

      You can have these Audio Files if you want to see what we're talking about...THERE IS A DIFFERENCE THAT ANYONE CAN HEAR!!!

    • @Electricowlworks
      @Electricowlworks 3 года назад

      @@Producelikeapro RoboEric, haha.

  • @Pericles777
    @Pericles777 3 года назад +47

    The ONLY reason I use higher sample rate is for audio stretching. And it’s a rare occasion. Usually some sort of flown in sample that I want to stretch.

    • @BlakeMlungisi
      @BlakeMlungisi 3 года назад +6

      Same here. Time stretching at 48kHz is okay if you have very good plugins like X-Form for Pro Tools, but 96kHz with standard plugins works much better for me. I do audio stretching all the time because I work with vocal groups that aren't necessarily professional.

  • @officialyourdad2342
    @officialyourdad2342 3 года назад +35

    Learned more from you in half an hour than my university audio engineering courses managed to explain in a week. Well explained, and thanks for showing examples. People like me learn from seeing and hearing examples, not just having definitions thrown at them 🤘🏼

  • @Whiteseastudio
    @Whiteseastudio 3 года назад +33

    Excellent video! I agree fully that AD/DA is always most comfortable at 48kHz... However, I would like to see your take on processing sample rates. I've experimented a lot with this (oversampling in the plugins etc.), and from my perspective it does make a lot of sense to have those parts of your chain running at higher sample rates.

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +7

      We may do a dedicated video on this in particular. Yes, oversampling in plugins is definitely useful - we found it sounded great on the FabFilter plugins we tested: ruclips.net/video/bgqWwJ3kip4/видео.html . We'll look into plugin oversampling in a future video. Thanks for your comment!

    • @eliaszerano3510
      @eliaszerano3510 3 года назад +1

      whats wrong with 44.1 ?

    • @caseykittel
      @caseykittel 3 года назад +2

      @@eliaszerano3510 Sure, you pass the Nyquist test for human hearing, but you get fewer artifacts from the plugins at the higher 48k 24-bit.

    • @kyron42
      @kyron42 3 года назад

      @@eliaszerano3510 there's nothing wrong with 44.1 if you're recording distorted guitars and drums.

    • @eliaszerano3510
      @eliaszerano3510 3 года назад

      @@caseykittel how about 88.2 then ?

  • @FlockofAngels
    @FlockofAngels Год назад +4

    Great video, lots of interesting info. I record vocals and other instruments at 32-bit so I don't have to set my levels at all and I never distort a take. I can sing very loud and very soft and never have to worry about my input level and where it is. The files are bigger, but I have yet to have a project take up 1GB of audio data - and that is with lots of vocal tracks and takes. My 18TB drives accommodate these levels of data easily. Also, the 32-bit sounds better to my ears.😊

    • @m9shamalan
      @m9shamalan 9 месяцев назад

      But your converters are still 24-bit, so you can still definitely clip them...
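A minimal numpy sketch of the headroom point in this exchange (my own illustration with made-up levels; as the reply above notes, the analogue converter itself is still 24-bit, so this only helps once the signal is inside the converter's range):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# A 1 kHz sine "recorded" 6 dB too hot (peak at +6 dBFS).
hot_take = 2.0 * np.sin(2 * np.pi * 1000 * t)

# 32-bit float storage: values above 1.0 are kept as-is,
# so pulling the fader down later recovers the waveform intact.
as_float32 = hot_take.astype(np.float32)
recovered = as_float32 * 0.5                      # -6 dB trim after the fact

# 24-bit fixed point: anything beyond full scale is hard-clipped at capture.
full_scale = 2**23 - 1
as_int24 = np.clip(np.round(hot_take * full_scale), -full_scale, full_scale)
clipped = (as_int24 / full_scale) * 0.5           # the clipped tops are gone for good

print("float32 peak after trim:", recovered.max())   # ~1.0, undistorted sine
print("int24   peak after trim:", clipped.max())     # ~0.5, flat-topped
```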

  • @MarcMcElroy
    @MarcMcElroy 3 года назад +2

    "...and that!, ladies and gentlemen, is why ADATs sound better than 192k"

  • @ryanmaroney4793
    @ryanmaroney4793 4 года назад +73

    Best explanation I've ever seen on this topic.

    • @PresentDayProduction
      @PresentDayProduction  4 года назад +4

      Thanks, we’re glad you found it useful, it’s a tough topic to cover!

  • @coisasnatv
    @coisasnatv Год назад +1

    Using the same Nyquist principle, isn't it better to record at 24-bit 96kHz and export the final project at 24-bit 48kHz?

  • @BigHugeYES
    @BigHugeYES 3 года назад +2

    Could be placebo, but I hear 96k responding better to elastic audio, pitch correction, and some plugins.

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +1

      Higher sample rates can help with some time-stretching software, and non-linear plugins that don't oversample (or do but don't have it turned on!)

  • @robdm9838
    @robdm9838 2 года назад +2

    I would literally pay for content like this

  • @duncanmcneill7088
    @duncanmcneill7088 3 года назад +11

    When doing the 48k vs 96k null test, it would be useful to have a spectrum analyser (Voxengo SPAN is my goto) to show where in the frequency domain the additional harmonics are occurring.
    Testing plugins for how well their oversampling algorithms work can be quite revealing - some are awful and the aliasing becomes readily apparent on loud high frequency content (particularly sustained tonal sources like Glockenspiel, Celeste and Triangle etc).
    Non-linear processing in the digital domain has definitely improved a lot with higher processor speeds allowing for higher oversampling rates.
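For anyone who wants to try the null test described above themselves, here is a minimal sketch (numpy/soundfile and the file names are assumptions, not the channel's actual tooling):

```python
import numpy as np
import soundfile as sf   # pip install soundfile

# Hypothetical file names: the same mix bounced at two sample rates,
# with the 48k bounce already upsampled to 96k so lengths and rates match.
a, fs = sf.read("mix_96k.wav")
b, _  = sf.read("mix_48k_upsampled_to_96k.wav")

n = min(len(a), len(b))
diff = a[:n] - b[:n]          # polarity-flip-and-sum, i.e. the "null"

rms = np.sqrt(np.mean(diff**2))
peak = np.max(np.abs(diff))
print(f"residual RMS:  {20*np.log10(rms + 1e-12):6.1f} dBFS")
print(f"residual peak: {20*np.log10(peak + 1e-12):6.1f} dBFS")
# Feed `diff` into a spectrum analyser (e.g. SPAN) to see *where* the
# leftover energy lives - in the video it sat above the audible band.
```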

  • @jye.acoustic
    @jye.acoustic Год назад +2

    I use a focusrite 212 audio interface via a mixer to my phone via a usb adaptor. One thing i have noticed is the focusrite 212 operates at 192 24bit.. fine for recording.. but unfortunately can't adjust the sample rate of the focusrite.. although it is deemed class compliant, i have noticed the differences in social platforms when live streaming.. for example - live streaming to youtube the sound is as you say distorted somewhat.. yet live streaming on tik tok its much better.. live streaming to facebook i have found changes dependent upon their latest updates.. which is frequently. This video is very useful in helping me understand whats going on.. and i have to agree that lower would be better for audio. Trying to figure out now how i could improve my audio between online platforms without using plug ins or Daw etc... as trying to keep my set up minimalist. Hope you could do a video with regards the above for best live streaming audio set ups.. in relation to different social platform requirements & using audio interfaces would be very helpful. Thankyou for the post. Very helpful. 🤗👍

  • @txtrader512
    @txtrader512 3 года назад +11

    You should render out the difference between the 192k and 44k versions and pitch-shift it down. Would be interesting to hear what that sounds like.

    • @nexusobserve
      @nexusobserve 3 года назад

      Pitch shifting is interesting for monitoring, it gives you time to think. We've all pressed half-time in Pro Tools.

  • @RecordingStudio9
    @RecordingStudio9 3 года назад +13

    I also record and mix at 48/24. The only reason I would record at a higher rate of 96khz is that my audio interface latency drops down to less than 3ms round trip, allowing me to use VST effects during recording with virtually no latency. Maybe that is a topic you can take on next?

    • @g.m.6417
      @g.m.6417 3 года назад

      Depending on what DAW you're using, you can improve the latency further for both recording and playback, if you have an option to set the I/O buffer to a higher value for both. This is not to be confused with the buffer latency slider etc.

    • @RecordingStudio9
      @RecordingStudio9 3 года назад

      @@g.m.6417 Not sure I get what you are referencing, or how a higher buffer size, in the driver or within your DAW, can reduce latency.
      Think of it this way. If you have boxes lined up, someone at one end fills them with papers and pushes them to the other side. More boxes (a higher buffer size) means it will take longer before the first full box reaches the other end for the papers to be taken out, hence introducing latency. It also depends on how quickly a box is filled and emptied at the other end (CPU power). With only a few boxes, the CPU may not have enough speed to empty all the papers in time for the next full box arriving (crackles and pops).
      Hope this simple analogy helps you out.

  • @letsallbe-friends1120
    @letsallbe-friends1120 4 года назад +27

    *I feel like I'm watching the audio engineer equivalent of "Red Dwarf" (I love RD BTW!😏)* 😄👌

    • @PresentDayProduction
      @PresentDayProduction  4 года назад +14

      Letsall Be-Friends Funny you should say that just as I’m putting the finishing touches on mastering the new Craig Charles (Dave Lister) Trunk Of Funk album! 😎

    • @letsallbe-friends1120
      @letsallbe-friends1120 4 года назад +4

      @@PresentDayProduction Wow! That's amazing! His character was the mentor for all my adult slovenliness! 😅😅
      I'll definitely keep an ear out for the release. 🔉🎶👂🕺✨
      Fantastic channel BTW! 🙌🙌🙌

    • @TerryMaplePoco
      @TerryMaplePoco 3 года назад +1

      I also got the RD vibe and I'm thrilled about it

    • @TerryMaplePoco
      @TerryMaplePoco 3 года назад +1

      @@PresentDayProduction amazing!

  • @I_Can_Relate
    @I_Can_Relate 2 года назад +2

    I think my eyes need a higher frame rate😅

  • @ibleasse
    @ibleasse 3 года назад +15

    I mostly use 48k at 24-bit. There are instances where I need either higher rates or higher bit depths. I use higher rates for restoration work when digitizing: it allows me to see all the problems clearly when I have to zoom right in, but it has no effect on sound. Bit depths higher than 24, on the other hand, I only use to record sound in the field (sound effects), and even then only rarely, when recording extremely loud sounds (gunshots, cannons, explosives, jet engines, rockets, Formula 1). Higher rates and bit depths are, I assume, also useful for scientists who use sound and acoustics in their studies (bioacoustics, stellar acoustics). But for musical purposes, 48k@24 is perfect.

    • @IsaacJDean
      @IsaacJDean 3 года назад

      Can the AD/DA converters you use actually handle the extra dynamic range though (for the examples you gave of reasons to use higher bit depth)?

    • @ibleasse
      @ibleasse 3 года назад

      @@IsaacJDean Steinberg AXR4U can handle it

  • @5fiveyearmission5
    @5fiveyearmission5 3 года назад +15

    I have a music technology degree but have forgotten so much as the years have passed. Your channel is top notch!

  • @Pipelyd
    @Pipelyd 3 года назад +11

    Ultra sonics are definitely generating some extreme Steely Dan sounds in this track :) Thanks for the well explained tech. stuff.

  • @caseykittel
    @caseykittel 3 года назад +1

    thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you!
    before I watched this I was a little sad that my computer and my digital mixer top out at 48K, 24-bit. now I'm very happy and am very much more knowledgeable. I do want to download some of those test audio tracks to test my converters. thanks again.

  • @chrisfeatherstone9691
    @chrisfeatherstone9691 3 года назад +11

    In Electrical Engineering school we learn all about this. Everything you said was spot on. Nice work!

  • @valoelios40
    @valoelios40 2 года назад +2

    According to Ian Shepherd, dithering also matters when bouncing audio to 24 bits. He points out that audio in a DAW is processed at a higher bit depth (32-bit floating point) than 24 bits, even though the material was recorded at 24 bits. Hence dithering would also be a necessity when bouncing audio to 24 bits. Please correct me if this is not correct.
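A minimal sketch of the idea (my own illustration, not Ian Shepherd's method): reducing the 32-bit float mix bus to 24 bits without dither leaves a quantisation error that is correlated with the signal, while adding TPDF dither first turns that error into benign, signal-independent noise.

```python
import numpy as np

def quantize_24bit(x, dither=True):
    """Reduce a float signal (range -1..1) to 24-bit, optionally with TPDF dither."""
    q = 2**23 - 1                        # 24-bit full scale
    if dither:
        # Triangular (TPDF) dither spanning +/-1 LSB: sum of two uniform randoms.
        x = x + (np.random.uniform(-0.5, 0.5, x.shape) +
                 np.random.uniform(-0.5, 0.5, x.shape)) / q
    return np.round(np.clip(x, -1, 1) * q) / q

fs = 48000
t = np.arange(fs) / fs
quiet_tone = 1e-5 * np.sin(2 * np.pi * 1000 * t)    # a very low-level signal

for flag in (False, True):
    err = quantize_24bit(quiet_tone, dither=flag) - quiet_tone
    label = "with dither" if flag else "no dither  "
    print(label, "error RMS:",
          round(20 * np.log10(np.sqrt(np.mean(err**2))), 1), "dBFS")
# The dithered error is slightly louder, but an FFT of `err` shows the
# difference that matters: harmonic distortion without dither, a flat
# noise floor with it.
```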

  • @Deluxeta
    @Deluxeta 3 года назад +7

    I am using 96KHz for editing purposes when I'm capturing audio or working on a project within a DAW. It's much easier to varispeed a 96K file at extreme percentages without it sounding off. As for capturing audio, it's much easier to edit out crackles and pops on a needledrop or cassette recording than it would be at 48KHz. When it comes to getting stuff out of the box, I always use 48K.
    Home at Last sounds fantastic regardless of the sample rate.

  • @myyt4382
    @myyt4382 2 года назад +2

    Hi there, I am sorry if I am a mental snail here, but could you please elaborate a bit on how you tested the converters? I understand you have a test tone there, but was there some loop established, or... what exactly did you do there, please? I am really, really curious to find out and test my grandpa converter here :) thanks a bunch in advance for your time!

    • @PresentDayProduction
      @PresentDayProduction  2 года назад +2

      Use the test tones on our website - link is in the description. That will tell you if your converters are working well at higher sample rates. If you can’t hear anything but can see it on the meters then you’re good 👍

  • @mrnelsonius5631
    @mrnelsonius5631 3 года назад +11

    There’s so much misinformation about this subject! Thank you for a great video. Also: just because a plugin offers oversampling options doesn’t mean higher is better. The quality of oversampling can vary and often sound worse than defaults. Always trust your ears, not the numbers :)

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +4

      Yes, that’s a really good point! And ‘always trust your ears, not the numbers’ is so very true. Thanks for your comment!

  • @Mrspkey
    @Mrspkey 3 года назад +7

    Thank you, thank you, thank you. Despite having a degree in audio tech and have been recording my own music for the last 25 years, I became almost fixated with the idea of more is better. As a result, I always struggled to optimise my setup to match the continuously evolving marketing requirements and pressure from peers. I think you have just demonstrated in layman's terms the reason why going as high as 96K, 24bit is not only unnecessary but potentially, even harmful. I am now hoping that going back to 48K, 24bit will give my computer another couple of productive years whilst I'll also perhaps enjoy the process more now I won't need to keep optimising my setup for unreasonable specs.

  • @dmc5747
    @dmc5747 3 года назад +20

    Thanks for the video, and thank you for the invitation to disagree in the comments (as long as we include some supporting evidence). I totally agree that 16 bits at 44.1k is perfectly good for music reproduction. However, I have twice recently NEEDED to record at 88.2k. Both times it was because Fiverr session violinists had sent me recordings that were unusable due to their recording hardware not coping with the surprisingly high amount of ultrasonic energy from their violins.
    The first session player was using their computer's onboard audio input, and after I heard it I viewed its spectrogram to see what was causing the harsh sound. The aliasing was clearly visible because the violinist's vibrato made the harmonics wiggle quite a lot, and the aliased wiggles were all upside-down. I asked for a revision recorded at 88.2k and then I down-sampled the file to 44.1k on Pro Tools so I could import it into my session. This removed the ultrasonic frequencies digitally and left me with a surprisingly good recording. The second session musician claimed to be using a Focusrite Scarlett 2i2 (I have no way of knowing for sure) and their recording displayed the same upside-down vibrato when recording at 44.1k.
    So, the moral of the story is that if the audio interface hasn't got what it takes to filter out ultrasonic energy, then it is better to record at 88.2k or 96k so that the Nyquist frequency is raised above that energy. Afterwards, you can rely on digital filtering to remove it while converting to a sensible sample rate.
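A rough sketch of the fold-back arithmetic behind the "upside-down vibrato" described above (the violin partial frequencies are made up for illustration):

```python
def aliased_frequency(f, fs):
    """Where a tone at f (Hz) lands after sampling at fs (Hz) with no
    anti-aliasing filter: simple fold-back around the Nyquist frequency."""
    nyquist = fs / 2
    f = f % fs                        # wrap into one sampling period
    return fs - f if f > nyquist else f

fs = 44100
# A violin partial around 25 kHz, wobbling +/-200 Hz with vibrato:
for f in (24800, 25000, 25200):
    print(f, "Hz ->", aliased_frequency(f, fs), "Hz")
# As the real partial rises, the alias falls - the "upside-down vibrato"
# visible on the spectrogram.
```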

  • @o5try708
    @o5try708 Год назад +1

    Great channel. This video is the best I've seen on this subject. Far ahead. I'm wondering... why 48k instead of 44.1k then? Even if the difference in distortion would be marginal wouldn't it come with only more benefits? Assuming my listener can't hear above 22kHz

  • @nickager2008
    @nickager2008 4 года назад +78

    I record audio for film and tv and have been asked to record sound effects such as explosions at higher sample rates as post production want the option for time stretching with no artefacts. But still record 99% of our work at 24/48

    • @dwindeyer
      @dwindeyer 3 года назад +4

      As in time stretching while retaining the pitch? How does a higher sample rate result in less artefacts in that situation?

    • @mwdiers
      @mwdiers 3 года назад +7

      @@dwindeyer Less interpolation.

    • @ovonisamja8024
      @ovonisamja8024 3 года назад +1

      @@dwindeyer Not retaining the pitch.

    • @codyrap95
      @codyrap95 3 года назад +6

      That's what I expected the video to explain. Pretty disappointing.

    • @AnnaVannieuwenhuyse
      @AnnaVannieuwenhuyse 3 года назад +2

      @@codyrap95 The distortion you get from exceeding Nyquist itself is less important than the fact that most recording converters just really suck at high sampling rates. I prefer 1080p 60fps over 4K 15fps. There is a sacrifice somewhere.

  • @harvesteroftone5473
    @harvesteroftone5473 3 года назад +1

    I record at 96kHz because the latency is so much better on my interface. I can get 3.4ms RT at 128 samples.
    I don't bother down-sampling until I bounce my mix for mastering.

  • @nayaleezy
    @nayaleezy 3 года назад +30

    6:40 that 96kHz beep had so much more warmth & presence, was it recorded on a Neve 1073 preamp?

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +19

      Come on Lee, it was an API! You should have heard that! 🤪 The transients!!!

    • @gareth432
      @gareth432 3 года назад +6

      @@PresentDayProduction even though you should have compressed the beep to glue it together ;-)

    • @d.lafollette
      @d.lafollette 3 года назад +7

      @@gareth432 It's all about the beep glue.

    • @ruidanin_rocker
      @ruidanin_rocker 3 года назад +1

      Ahahahah! Very good!

    • @mthomas1091
      @mthomas1091 3 года назад +1

      Omg like butter!

  • @sanidhyachauhan5578
    @sanidhyachauhan5578 2 года назад +1

    iPhone is actually 60 fps and now it's 120!!!

  • @phiprion
    @phiprion 3 года назад +10

    24/96 is great when you need to pitch/tune vocals or samples; analog emulation plugins give a much better result, especially at higher frequencies (anything that brings saturation). It's very clear with synths or cymbals or overdriven sounds.

    • @antigen4
      @antigen4 3 года назад

      Actually it’s in the low frequencies that we separate the wheat from the chaff with converter quality and where the higher sample rate converters really shine

    • @Wizardofgosz
      @Wizardofgosz 3 года назад

      @@antigen4 Explain.

    • @antigen4
      @antigen4 3 года назад

      @@Wizardofgosz better converters tend to have the BIG payoff in the LOW frequencies at least as much as in the highs ... what more is there to explain?

    • @Wizardofgosz
      @Wizardofgosz 3 года назад +1

      @@antigen4 let's see some science on this please. I call total BS. Since lower frequencies are not the hard frequencies to reproduce.
      Or is this just religion?

    • @antigen4
      @antigen4 3 года назад

      ​@@Wizardofgosz - gotta get up PRETTY EARLY in the morning to fool YOU huh?? haha ... ok don't take my word for it - you wanted the explanation. go listen to some proper converters sometime and let's see what you think. go demo a BURL of Forsell or something in the 20-30K range and compare to an inexpensive soundcard or what have you. this is exactly what i was talking about when i mentioned (in another comment) people dwelling on the superficial ...

  • @gordongurley3982
    @gordongurley3982 3 года назад +1

    Hmm, latency is lower at 96k, any pitch shifting or tuning sounds better. That’s enough for me to use 96k.

  • @rajeshnair4399
    @rajeshnair4399 3 года назад +8

    Hi. I’ve been wondering how you played back files of different sample rates from the same logic project. Am I missing something here?

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +2

      With GREAT difficulty... in the end we upsampled the 44.1 and 48 to 96

    • @EpithetMusicTV
      @EpithetMusicTV 3 года назад

      @@PresentDayProduction what do you use for SRC?

  • @___David___Savian
    @___David___Savian 3 года назад +1

    At 24:07, 96kHz was clearly thicker and sounded better than the 48kHz, LOL. So much for this supposed deep desire to brainwash us. I do agree that 192kHz is too much, but 96 is the sweet spot. Also, I think the majority of songs this guy works on are not today's top urban music, which goes really deep bass-wise and synth-wise. Remember that this guy's ears are used to music sounding more analog and lower in bit rate. But today's music and tomorrow's music will require higher sampling at around 96kHz. Especially when current technology improves, 44kHz will sound dull.

  • @jayddd4946
    @jayddd4946 3 года назад +6

    Interesting, but something I don't understand. You said this song was recorded at 24/96. What exactly do you mean? All individual instruments and vocal sources were recorded at 24/96? Then you re-recorded each track identically in a new project at 24/48 and then at 16/44? Obviously not, that's impossible. So could you verify what the original source was recorded at, how exactly the variations were created, and whether the analog track sources contained information above 20k? Also, I think recording 30 tracks of stuff at 24/96 and recording 30 tracks at 16/44 would yield different results when combined, compared to a 24/96 stereo mix being re-recorded or downsampled. And you don't hear digital information in a Logic file - you hear what is output through the D/A converters, amplifiers and speakers, through the air, into your ears, and then your brain processes it. I think there are quite a few more variables in this than it might seem. Is recording at 24/48 good enough? Yep, for sure. But insinuating that it's pointless to use anything above that, as a general statement, is not accurate. (With respect, love your channel, and correct me if I'm misunderstanding anything.)

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +6

      Hi Jay, the song was originally recorded and mixed at 96K. We then took all the plugins off, converted the individual mix channel files to the lower sample rates and mixed it again at the lower rates using the same plugins on the same settings. And yes, you hear the output through your converters and amplifiers and speakers, which are generally designed to reproduce 20Hz-20kHz or thereabouts. Now, I didn't hear any difference on my system between the lower and higher rates, but I've got £30K invested in a playback system - a lot of people have commented that on cheaper monitoring systems they can hear a difference. And that's my main issue with higher sample rates. If your electronics are making it sound 'better' or more 'analogue' then that's not good - because that isn't happening in Logic, it's happening in your monitoring system, and only you are listening back on that! That's why we included the test tones. And that's why a lot of people try 96 or 192 because they assume 'higher is better' and hear a difference when, in actual fact, for them, on their system, higher is worse.

    • @fuglbird
      @fuglbird 3 года назад

      @@PresentDayProduction So you are actually not comparing three recordings of an analogue signal made at different sample rates. You are just comparing correctly down-sampled versions of the same digitized signal, which in my opinion makes the entire comparison invalid. Try sending the sum of two sine waves to a speaker - one at 9.5 kHz and one at 19 kHz. Make sure that the sound pressure is the same for both frequencies. You may need to adjust the signal to the speaker a couple of times. The purpose is to test the phase at frequencies close to half the Nyquist frequency. Now add a third sine wave at 28.5 kHz. The amplitude of this frequency is of no importance; the purpose is only to check the anti-aliasing filter. Finally, take the recordings at your three sample rates and compare the phase of the recordings - or just do the subtraction again. This is a very crude and simple test, but it will focus on the A/D part of the process.

    • @sandernightingale
      @sandernightingale 3 года назад

      @@fuglbird If you record at 44.1kHz, no "third sine wave at 28.5kHz" is coming through. It's not in the signal, it doesn't get captured. What are you trying to prove? It's very, very simple. A 44.1kHz recording will be absolutely accurate up to 20kHz. You can prove this mathematically. The converters are also easy to make, unlike 96kHz or higher ones. You cannot hear anything more. There is simply no reason, as far as audio quality goes, to go beyond 44.1kHz or 48kHz.

    • @fuglbird
      @fuglbird 3 года назад

      @@sandernightingale Then prove it mathematically - here! In analog-to-digital conversion you make a lot of compromises. You need to avoid aliasing and you need to preserve your amplitude and phase properties up to 20kHz. You cannot do both 100% when you are sampling at 44.1kHz. Please do two things: 1. Show your mathematical proof here. 2. Try testing it yourself with a simple tone generator. There is one reason to go above 44.1kHz, and that is to get better anti-aliasing filter response below 20kHz. Of course you will down-sample to 44.1kHz or 48kHz, and of course we cannot hear the tones above 20kHz; but we can reduce the distortion of the signal below 20kHz caused by the anti-aliasing filter.

  • @KeenanCrow
    @KeenanCrow 9 месяцев назад +1

    If you're modeling any kind of non-linear hardware, though, there are benefits to going higher, or at least oversampling those models.

  • @AWidgetIHaveNot
    @AWidgetIHaveNot 3 года назад +6

    Five thumbs up from me. I live in London. Can I be your tea boy? I'm not going to tell you my hearing is shot from too many Zappa plays Zappa gigs because who needs hearing when you have a DAW? And thank you Alan Parsons for my tinnitus.

    • @iainparamor1482
      @iainparamor1482 3 года назад

      I genuinely can't hear a thing above 16.5k haha

  • @teashea1
    @teashea1 Год назад +1

    excellent content and style and production values

  • @berndkiltz
    @berndkiltz 3 года назад +15

    Wow. 30 Years recording digital and now I understand it. Kudos to you!

  • @launchpadmcquack98
    @launchpadmcquack98 2 года назад +1

    I use 44.1 because it's the default in my DAW, and all of my previous recordings are at that rate.

  • @aristosxanthus514
    @aristosxanthus514 4 года назад +12

    Underrated Channel. Great editing and explanation of concepts with the graphics. Keep it up!

  • @IconicPhotonic
    @IconicPhotonic 3 года назад +1

    I'm not sure if this is fact or fiction, but I was under the impression that if you were to do any extreme time stretching, having a higher sample rate would give you "more to work with". I know this is an edge case, and likely should not factor into standard advice for most people. I'm wondering now if this would also factor into recording IR's, as increasing the decay on a convolution plugin might also be time stretching the IR.

  • @firstnamesecondname5341
    @firstnamesecondname5341 3 года назад +7

    🤔 20:25 but could the ultrasonics be issuing secret ‘instructions’ 😉 🤦🏻‍♂️

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +4

      Damn, you’ve sussed it!! 😱 If people would stop recording at 192 it would be the end of the current pandemic too!

    • @mrnelsonius5631
      @mrnelsonius5631 3 года назад +3

      Well, we know dogs can hear them.... and dogs eat poop. So maybe it’s a good thing we aren’t getting those instructions? ;)

    • @2112jonr
      @2112jonr 3 года назад

      Only the cats can hear it. Messages from the home planet.
      Why do you think they're always hanging around in synth studios ;-)

  • @rockstarjazzcat
    @rockstarjazzcat Год назад +1

    Nice use of the video analogs to illustrate the bad assumptions! Kind regards, Daniel

  • @RS-pp7ng
    @RS-pp7ng 3 года назад +7

    This is easily THE greatest explanation on this subject. Ever - and with a great dose of hilarious British humor! Thank you so much guys, we're forever grateful for this. Bless you all.

  • @Nobody-NoOne
    @Nobody-NoOne 2 года назад +2

    Finally I found someone who speaks reality, follows physics, and agrees with me, instead of peddling snake-oil formulas.

  • @reverendcarter
    @reverendcarter 4 года назад +5

    Pitch shifting is much smoother at higher sample rates. Halving the pitch of a 96k file still means it's at 48k resolution, so it's not as grainy and artificial sounding. Also, the latency at the same buffer size is lower. But I do agree about everything else.

    • @peetiegonzalez1845
      @peetiegonzalez1845 3 года назад +1

      Yes. As an "audiophile" I'm perfectly happy with 16/44, but as a DJ I would much rather have a much higher resolution so I can pitch-shift and beat match without it sounding weird or introducing artifacts.

    • @mwdiers
      @mwdiers 3 года назад +1

      Yes, in mastering and mixing there is often an advantage to working at 96kHz when you are doing things like pitch shifting, etc., particularly when using plugins that do not oversample (Soundtoys, I'm looking at you). But upsampling a 48 or 44.1 file to 96 will give you the same advantage without the risk of distortion, because of the intrinsic bandwidth limit. Live DJing with such effects is essentially equivalent to mixing ITB. In a final master, though, it makes no sense to distribute anything over 48.

  • @damon_aaron
    @damon_aaron 16 дней назад +1

    I just watched a "professional" video about sound on film. They got sample rate wrong, conflating it with frame rate. My inner Mark was yelling at my screen.

  • @timnordberg7204
    @timnordberg7204 3 года назад +4

    Finally, a definitive answer--that merits a sub. Thanks for the great video. I do have one use-case-specific question in mind: if I were to record a sound (let's suppose the mooing of a cow) with the intention of playback at 0.25 speed (think sound design, not music) would this be a case where recording at 192k would be the preferred method--given that the sound will still have 48,000 samples in each second after it's been stretched to quadruple length?

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +6

      Is the cow mooing at 96kHz? Are there bats in the vicinity that need to be captured alongside the cow? If not, then you don't need a higher sample rate. If the cow is mooing at, say, 2kHz and you slow that down to 0.25x, it's now mooing at 500Hz. So a higher sample rate is only useful if there is information higher up in the frequency range that you wish to capture. Will you get audible artefacts if you record it at 48kHz and slow it down to 0.25x? No, because you only need a sample rate of 0.25 × 48kHz - 12kHz - to play that back. Hope that makes sense!
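The arithmetic from that reply as a quick sketch (the 2 kHz moo is the hypothetical example from the reply above):

```python
def after_slowdown(f_hz, speed):
    """Frequency a recorded tone ends up at after varispeed playback."""
    return f_hz * speed

fs_original = 48000
speed = 0.25

moo = 2000                                     # the hypothetical 2 kHz cow
print("moo after slowdown:", after_slowdown(moo, speed), "Hz")           # 500.0 Hz

# The highest frequency a 48 kHz recording can contain is 24 kHz;
# at quarter speed that becomes 6 kHz, so a playback sample rate of
# 2 * 6 kHz = 12 kHz would already be enough to carry everything left.
top = fs_original / 2
print("highest possible content after slowdown:", after_slowdown(top, speed), "Hz")
print("minimum playback rate needed:", 2 * after_slowdown(top, speed), "Hz")
```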

  • @justlooking813
    @justlooking813 2 года назад +2

    I use the higher sample rates with my Sanken CO-100k and hydrophones for capturing sound effects to be played back at 48K. Pretty niche use case, but absolutely necessary for high frequency capture.

  • @Schwermetall
    @Schwermetall 3 года назад +5

    Lol, please use a different kind of signal next time for the sampling explanation.
    Try it with a square wave instead of a sine wave, and do it at a higher frequency (more than 10kHz).
    At a 44.1kHz sample rate, any kind of 14kHz wave will come back as a sine wave.
    That's part of Fourier analysis!
    But don't get me wrong - hearing it is a different thing.
    Best regards, Alex

    • @benjaminjoeBF3
      @benjaminjoeBF3 3 года назад +1

      Yeah, too few points at 12k to reproduce anything other than a sine. This can reduce high-frequency detail, definitely in the hearing range. Oh well, digital sucks anyway ;)

    • @benjaminjoeBF3
      @benjaminjoeBF3 3 года назад

      @MorbidManMusic Remains Can't stand people having different opinions? What's there that makes your bum burn lol, and best regards

    • @TheJonHolstein
      @TheJonHolstein 3 года назад

      A square wave is the same as a sine wave plus sine-wave overtones, so when you hit the frequency limit the shape will change, as the overtones are removed. Our ears do the same thing, so as long as the frequencies removed are beyond our hearing, there should be no difference. But without anti-aliasing filters, some issues can occur.

    • @Schwermetall
      @Schwermetall 3 года назад

      @@TheJonHolstein "there should be no difference"... well, that's the question

    • @TheJonHolstein
      @TheJonHolstein 3 года назад +1

      @@Schwermetall If all you hear is the pure fundamental frequency of a square wave, because it sits at the upper limit of your hearing, then that is a sine, no matter what it would look like on an oscilloscope capable of showing frequencies above your hearing. With filtering that perfectly matched your hearing, the waveform on the oscilloscope would be identical to what you hear. There is no difference if it is correctly handled, but that requires filtering down (an anti-aliasing filter) to avoid the wave foldback that occurs when passing the Nyquist limit. Our hearing isn't like a brick-wall filter, though, and losses of frequency hearing don't necessarily happen in a precise order, so it would take a lot of experimentation and loud playback to find the absolute top frequency of one's hearing.
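A small sketch of the point in this thread (illustration only): a square wave is a sine plus odd-harmonic sines, so once every harmonic above the band limit is gone, only the fundamental remains and what comes back really is a sine.

```python
import numpy as np

def bandlimited_square(f0, fs, duration=0.01):
    """Build a square wave from odd harmonics, keeping only those below Nyquist.
    Returns the waveform and the number of harmonics that survived."""
    t = np.arange(int(fs * duration)) / fs
    y = np.zeros_like(t)
    k, count = 1, 0
    while k * f0 < fs / 2:            # stop at the Nyquist limit
        y += np.sin(2 * np.pi * k * f0 * t) / k
        k += 2                        # odd harmonics only: 1, 3, 5, ...
        count += 1
    return 4 / np.pi * y, count

fs = 44100
for f0 in (1000, 14000):
    wave, harmonics = bandlimited_square(f0, fs)
    print(f"{f0} Hz square: {harmonics} harmonic(s) fit below Nyquist")
# 1 kHz keeps 11 harmonics and still looks square-ish;
# 14 kHz keeps only its fundamental, so it comes back as a pure sine.
```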

  • @teabreakbeats
    @teabreakbeats 3 года назад +1

    I make music for dogs. They prefer 192

  • @KariKauree
    @KariKauree 3 года назад +3

    In all explanations of audio sampling theory/the Nyquist Theorem that I've seen, a simple sine wave is always used. Would be nice to see it illustrated using a highly complex waveform for a change. I think it would put skeptics in a more accepting state of mind before going into all the other explanations and demonstrations.

  • @Hugoknots
    @Hugoknots 3 года назад +1

    This shall be the last video I watch on the subject! Thank you

  • @juap
    @juap 3 года назад +3

    192khz is better you don’t know nothing, more is always better! ...
    I was joking, I’m currently recording at 44khz because that’s default on the Apollo Twin, and I read that if I record at 48 then export my mix to, for example Spotify, they downgrade to 44 and that process can change a little the sound... don’t know, that’s true? Thanks a lot!

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +1

      Thanks for your comment! Yes, I generally send masters back at whatever the 'destination' frequency is, so I know what it sounds like and I'm not reliant on any conversion at their end.

  • @a.r.stephen2835
    @a.r.stephen2835 3 года назад +1

    Thank you outstanding video

  • @AnonymSuperhero
    @AnonymSuperhero 3 года назад +3

    Wow so much useful information! Especially the AES Test Tone part to put it into perspective.
    I did this test myself and I'm shocked that I can clearly hear the unwanted noise in my monitors at 96+kHz for both test tones. Considering I use a typically well regarded audio interface around 250€ this alone definitely convinced me to stay with 48kHz for my recording, mixing and mastering. Thank you for this!
    I'll dive deeper into this topic and I'm looking forward to more videos :)
    Best regards from Germany

  • @joeMW284
    @joeMW284 3 года назад +2

    You can't hear past 20khz therefore 24 bit/48k is high enough for me to be comfortable that I'm getting the best perceivable quality. No reason to waste drive space and CPU power.

    • @martifingers
      @martifingers 3 года назад

      It's much worse than that! Sadly, for most people over 20 or so, 15kHz is the limit.

  • @davewestner
    @davewestner 3 года назад +4

    5:30 everyone watching this video is waving their hand in front of their face

  • @sunglint
    @sunglint 3 года назад +1

    Why does the Goodbye Look cover sound like they started playing Home at Last and just changed the words?

  • @Limbiclesion
    @Limbiclesion 3 года назад +3

    Really good explanation of the quality issues related to various bit depth and sampling rates ...24 48 is certainly my way forward. 👍🦄🙏🎩

  • @TheSakuraGumiLTD
    @TheSakuraGumiLTD 3 года назад +1

    great video about this subject

  • @freakkyt
    @freakkyt 3 года назад +16

    One factor that benefits from a higher sampling rate is I/O latency, especially for those who rely on software monitoring while recording. A common example is guitar players recording through IR speaker simulators in the DAW. As long as CPU and hardware resources allow it, 96kHz usually guarantees the 5ms or less of latency most players need to get a realistic response during a recording session, without having to lower the buffer too much and incur stuttering or artefacts.
    Apart from this potential reason for a higher sampling rate (which is specific to recording rather than distribution), all the considerations in the video are agreeable, especially since you pointed out the internal oversampling often done by plugins dealing with high harmonic content, such as compression and saturation DSPs. Thanks for the vid, keep up the good work!

    • @minimalmayhem
      @minimalmayhem 3 года назад

      Yes... I've found that working at the higher sample rate makes my studio latency much snappier. Should I then be individually re-sampling these files recorded with higher sample rates to the lower rate, or will they sort themselves out when I change the whole project down to 48K for the final mix down?

    • @ezrashanti
      @ezrashanti 3 года назад

      @@minimalmayhem Not worth it. 96k is good. The fact that it can record frequencies up to 48 K is actually an advantage. There will not be aliasing of the frequencies between 24K and 48k as they are recorded accurately. Aliasing occurs when harmonics above the Nyquist frequency bounce back down below it.

    • @freakkyt
      @freakkyt 3 года назад +1

      ​@@minimalmayhem That depends a lot on HW resources. Personally, I do simple stuff for home use, nothing pro or commercial, so working at 96KHz and exporting the mixdown at 48Khz works fine. Some DAWs (Cubase for sure, don't remember Logic) prompt you asking if you want to change the sample rate of all recorded samples when changing the project sample rate after recording, even asking if you want to move or keep samples at their location, so if you want to record at 96+ for low latency, but prefer to mix at 48Khz togo light on CPU/Mem/Storage, that's totally doable. The main limitation is that if want to record new parts after the samplerate switch, you'll have to do it on the new "mixing" setting, with 48Khz and probably lots of plugins loaded, so the I/O latency will be higher.

    • @freakkyt
      @freakkyt 3 года назад +3

      @@Indrid-Cold Interesting video, but we need to be careful when looking at research data, as it's very easy to be deceived into deducing incorrect absolutes. The capitalisation in "FASTEST transit rate and ANY nerve impulse" appears to suggest that it's humanly impossible to perceive audio latency under 80ms, and we all know that's not the case. Stimuli perception has been studied in depth and there's plenty of interesting literature on it. Perception has been found to be different for visual stimuli (the 80ms Vsauce is talking about, some studies estimate it more around 50ms), touch (approx 50ms with that interesting nose vs toe delta explained in the video) and audio stimuli (approx 10ms), with tests showing substantial differences between casual listeners and musicians. Those times are always averages, not physical constants. Back to audio latency, let's also keep in mind that the I/O latency indicated in the DAW is pure processing time (bufferSize/sampleRate=512/48k=10.67ms), which is only the portion of total perceived latency contributed by DSP. In our machines there's additional buffering and delay happening at drivers layer (interrupts in audio and USB drivers) and can contribute an additional ~10ms, or as low 3ms with latest USB C drivers at 96KHz (again, latency improves with higher sample rate).

    • @mikemckernan1076
      @mikemckernan1076 3 года назад

      @@Indrid-Cold I think you may be mixing up m/s (a speed) and ms (a unit of time).

  • @LozaAlexander
    @LozaAlexander Год назад

    I'm sorry, but I don't know how people can't hear the difference between 96 and 48... 96 is way clearer, period, and it has a better top end, it's clear as day??? I think some people just can't hear that high and some can... The only issue is 96 doesn't work well on the computer for large projects, but it definitely sounds better, it's really noticeable, come on y'all.

  • @TarekTawakol
    @TarekTawakol 3 года назад +4

    I love the video, it's been a dilemma for me for so many years. I've settled on working at 48kHz/24-bit for almost 5 years now. However, there is an aspect you are disregarding, and as a mixing engineer I have experienced it first hand: plugins. Plugins run algorithms. Algorithms processing higher-resolution input yield higher-resolution results (or at least different ones). So running a vocal file recorded at 48kHz through a Seventh Heaven reverb plugin ITB sounds very different from running a 96kHz one. Processed reverb might sound smoother or more "lush" simply because the data coming out of it is higher resolution. In photo editing, for example, photos are finally viewed on a screen with a maximum size of around 30x25cm, probably JPEG compressed at 1280x720. But in order to get a perfectly edited photo, the original content was MUCH bigger so editors can zoom in and craft details like eyelashes or do pixel-level editing. The higher the resolution, the more control they get over altering the content. In higher sample rate sessions, especially with plugin processing, the story seems to be similar. Try it with the simple Waves RBass plugin on a kick, you will have more control with the higher sample rate. With multiple tracks, now you see a difference. Once you finally mix all that down to 44.1 it just sounds so sharp and tight, just like that big-res photo you edited down to JPEG... Try it out and let me know - also love from Egypt y'all!!

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +1

      Thanks for your comment! Most plugins upsample for that reason - check out the excellent video from FabFilter, which explains that part much better than me! 😉 Love right back at ya! ❤️

    • @Fox_is_Fox
      @Fox_is_Fox 3 года назад

      As an "award winning" music producer I agree with you. There's a reason why high profile engineers work at high resolution. That aside, in my line of work music must be the center of attention, a good song overcomes any small technical fail so, to relieve my cpu I work at 24/44.1 ;-) Thanks for the input PDD!

  • @AudioReplica2023
    @AudioReplica2023 3 года назад +2

    I've always been fine with 24-bit/48kHz. I don't care about anything else but putting my music out there. My gf, my uncle, or whoever listens to it isn't worried about sample rate at all... So why should I pay too much attention to it? Why worry about something they can't hear and not about something they actually can? It's dumb.

  • @bigmacmillerlite8775
    @bigmacmillerlite8775 4 года назад +4

    If you guys keep this up then you will be Kingz of teh internetz. Excellent channel Bong Friends.

  • @dnldr4386
    @dnldr4386 3 года назад +1

    But why do people use higher rates than necessary?
    Just like the question why does a dog lick his own balls? ....
    BECAUSE HE CAN!!!!!

  • @MrMeluckycharms
    @MrMeluckycharms 3 года назад +3

    Magically delicious fun facts found in this video!

  • @sebve9399
    @sebve9399 3 года назад +1

    The reason why sampling a 1kHz sine wave at 2, 4, 10, 48 or 96kHz sounds the same is because you have a low-pass filter in every DAC to smooth it out. Otherwise you would need a much greater sampling rate than twice the frequency.
    In reality, if you sample a sine wave at double its frequency, you get a square wave... Thanks to the LPFs in the DAC, you'll hear a sine wave out of the speakers (hopefully).
    This visual at 7:12 makes my eyes bleed; this is a sine wave with some dots on it, not a sine wave being sampled. A sampled wave looks like a flight of stairs because of the sample-and-hold nature of the ADC process.
    For general use I agree, you don't need a sampling rate higher than 40kHz, though for heavy frequency-domain processing like pitch shifting and such, you might enjoy your 96kHz a lot. Think of an ultrasound (say at around 40kHz): now you can bring it down to the audible range by pitching it down two octaves to 10kHz. It would be impossible to do that at a 48kHz sample rate.

    • @sebve9399
      @sebve9399 3 года назад

      @ReaktorLeak dots
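To make the point about the reconstruction filter concrete, here is a minimal numpy sketch (my own illustration): band-limited (sinc) interpolation of a sparsely sampled sine, which is what the DAC's low-pass filter approximates, gives back the smooth wave rather than stairs.

```python
import numpy as np

fs = 8000                  # deliberately coarse sample rate
f0 = 1000                  # a 1 kHz tone: only 8 samples per cycle
n = np.arange(64)
samples = np.sin(2 * np.pi * f0 * n / fs)        # the "dots"

# Ideal reconstruction: a sum of shifted sinc pulses, which is what the
# DAC's low-pass (reconstruction) filter approximates in hardware.
t_fine = np.arange(0, 64, 1 / 16) / fs           # 16x finer time grid
recon = np.array([np.sum(samples * np.sinc(t * fs - n)) for t in t_fine])

ideal = np.sin(2 * np.pi * f0 * t_fine)
mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)   # ignore window edges
print("max error away from the edges:", np.max(np.abs(recon - ideal)[mid]))
# Close to the ideal sine (and closer still with a longer block): the
# smooth curve comes back from the dots alone, no stair-steps involved.
```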

  • @beatweezl
    @beatweezl 3 года назад +4

    I busted out laughing at least three times in this video. You Brits are hilarious. Love ya.

  • @czdot
    @czdot 3 года назад +1

    I got a iFi Zen DAC (plays files even above 192 kHz and also DSD) with a pair of used Sennheiser HD600. I have listened to some HiRes files, and I COULD NOT TELL A DIFFERENCE. The only difference was getting a balanced cable for the HD600, which Zen DAC supports. I think it sounds more spacious, and that's pretty much it. That was the one upgrade that made sense to me. I don't regret getting that DAC. It sounds better than any other sound source available to me.
    At least I discovered some new (meaning old) music I wouldn't normally listen to, like David Bowie, and I actually sat down and listened to a whole Beatles album for once. HiRes is not usually available for metal. :-)

    • @PresentDayProduction
      @PresentDayProduction  3 года назад

      Thanks for your comment, Martin! It makes total sense - and often ‘hi-res’ versions are mastered very differently, so punters think they’re hearing (and paying more for) higher resolution when they’re actually paying for an EQ tweak! And yes, I don’t know why so many hi-fi guys and girls are spending £1800 on high end phono cables when spending £30 on balanced cables in a balanced system would yield far better results!

  • @gammakeraulophon
    @gammakeraulophon 3 года назад +4

    Great in-depth video, thank you.
    There are only a couple of areas you neglected to address, such as the use of higher sampling rates where the recorded audio is intended to be repitched in music composition. Once such audio is pitched down by 2 octaves, the stuff we don't want floating outside the upper limits of human hearing can surely be brought down into the audible spectrum, so higher sampling rates are more effective at keeping the audible band clear if a sound is destined to be mapped across a keyboard of several octaves' range.
    Also, as I understand it, high sample rates are used in pursuits such as oceanographic audio research, for the same reason: sounds are captured which are of interest to the research but which lie outside the human audible band, and which are again destined to be repitched in order to bring them into audibility. By oversampling at 96 or 192kHz, one can again repitch downward by up to 2 octaves without bringing the sampling frequency itself down into the audible spectrum.

    • @weschilton
      @weschilton 2 года назад

      People keep saying this nonsense and I have yet to see one single example of it being done anywhere. How can this method even be practical? How exactly do you use these mythical ultrasonics? I mean, you can't HEAR them, so how do you even know what you're getting, or even IF you're getting anything, when you pitch them down?

    • @gammakeraulophon
      @gammakeraulophon 2 года назад

      @@weschilton
      It's not nonsense... it's marine science.
      If marine science is using it, then it is to some intelligent and practical purpose.
      Just because you yourself do not understand something, or it is outside of your sphere, does not make that something 'nonsense'. You only present yourself as ignorant.
      As for everything I said about recording samples intended for repitching in musical composition - such is patently true. It is important to keep the clock signal out of the audio band.
      Either you understand a subject or you do not. It takes no effort at all to call something nonsense merely because you cannot be bothered to make some effort towards understanding.
      Casual and lazy.

  • @mikeoak54
    @mikeoak54 3 года назад +1

    thanks for the video!

  • @gabriel_kyne
    @gabriel_kyne 3 года назад +3

    great! Can't believe you only have 14k followers, it's like a professionally produced television show!

    • @PresentDayProduction
      @PresentDayProduction  3 года назад

      Thanks Gabriel! Thanks for your support, we are working hard on some great new content for next year!

  • @chrislysiak9561
    @chrislysiak9561 3 года назад +1

    Well... I may be the only TRULY DEAF person here but I DEFINITELY hear the difference between 96k played at 24:13 and 48k at 24:20. Just listen to the snare drum. It lost all of its openness at 48. On top of that, the snare went more "back" in the mix. I'm not sure if that's the youtube sound issue but I hear it pretty clearly. For the very same reason, all productions I do are at 96k. I hear a dramatic difference, especially with Acustica Audio plugins as well as virtual synths like Omnisphere. To add the cherry to the top, my Universal Audio 2912 audio converter which I use for my recording just shines at 96k compared to 48k. I just might be cursed hearing those differences, but unfortunately, I do and no amount of "cancellation" evidence will change that. BTW, I always do blind tests as those are the only true tests.

  • @sekritskworl-sekrit_studios
    @sekritskworl-sekrit_studios 3 года назад +3

    My FIRST video with you... and you ALREADY SOLVE one of my biggest curiosities as I am a NOOB to Audio.
    Thank you. And, Happy Holidays!

  • @roberthedin909
    @roberthedin909 2 года назад +2

    I seem to be settling on 24/48, before watching this video, for reasons mentioned in the video, and because I like having creative options, sometimes I like the sound at 48k for virtual instruments and plugins, and sometimes I like the sound at 96K when I use Metaplugin to x2 the sample rate. So, make things easier on the computer, and have more sound options.

  • @seattlegroovescene
    @seattlegroovescene 3 года назад +3

    Excellent thanks for this video!

  • @DaveBessell
    @DaveBessell 3 года назад +1

    I'm not an audio professional or expert of any sort. Just an experienced musician. Subjectively to me higher bit depths make more difference to the audio fidelity than higher sample rates. I've no idea why that is - subjectively higher bit depths sound more solid and closer to the acoustic source. I understand from a theoretical point of view then bit depth should just affect noise floor but I hear something else going on as well. Any ideas as to what I am hearing?

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +1

      Higher bit depths also give you greater dynamic range, and while you could argue that isn’t audible, you are getting more measurements per sample, and that makes a far greater difference than higher sample rates.

  • @javytorres94
    @javytorres94 3 года назад +10

    This video is free?! My God the information! Thank you guys! Great content! New Sub!

  • @off1off1
    @off1off1 3 года назад +1

    This is just a remake of the FabFilter video about sample rates, and it says the opposite of what you say - namely that sample rate matters if your dynamic plugins are not handling anti-aliasing carefully... (It all depends on what plugins you are using; many Waves plugins are/were not oversampling, depending on what versions you have.)
    www.gearslutz.com/board/music-computers/1330228-testing-aliasing-plugins-measurements.html
    "Thanks for your analysis.
    Like you, many times I have commented what you say:
    - big software developers (Waves, Plugin Alliance, etc) continue to develop plugins that do not oversample and that show an indecent amount of aliasing. That aliasing was acceptable years ago, but at present it is not understandable due to the great power of current processors"
    You also forget the fact that experimental down-pitching (-2/3 octaves) of samples is better with high sample rate files, since you have something in the very high frequencies with high sample rate files ;)
    And yes, of course you need a strong CPU to work at high sample rates, otherwise the sound is going to be worse than at a lower sample rate :)
    I think you should not conclude that high sample rates are useless, because they're not - and now that's what people watching this video are going to believe...
    Aliasing creates low frequencies where there were none with many plugins at low sample rates, causing phase issues with your bass/kick stems etc.
    However, it's only with dynamic treatments such as compressors and drive FXs.
    Last thing: loud signals drive digital plugins into distortion if there's no anti-aliasing system. If you are not gain-staging your plugins carefully, you're going to have more aliasing with any plugin you use, even those with an anti-aliasing system!

  • @Wawisupreme
    @Wawisupreme 3 года назад +4

    As an audio engineer myself, I should point out that most of what you said is true; nevertheless, there is some utility in high sample rate recording, especially in sound design and post-production.
    Being so far above Nyquist gives pitch headroom to manipulate audio (lowering it by several octaves, depending on the sample rate) without losing detail. In that regard, ultrasonic information is useful.

    • @Wizardofgosz
      @Wizardofgosz 3 года назад

      Ummm, WHAT?

    • @Wawisupreme
      @Wawisupreme 3 года назад +2

      @@Wizardofgosz It is a bit tricky to explain, but if you record something at 48kHz and then pitch that audio file down an octave (in the traditional way, not with a process that adds interpolation like some plugins do), you are effectively halving the rate at which the original samples play back and doubling the duration of the file. The effective sample rate of the result is equivalent to 24kHz, which means you can no longer reproduce the whole audible spectrum: per the Nyquist theorem, half the sample rate is the highest frequency you can accurately represent (in this case 12kHz). If you record at higher sample rates, even the crazy 384kHz, you can pitch down several octaves before running out of pitch headroom.
      Some audio applications can benefit from this flexibility.
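      As a quick sketch of that arithmetic (assuming a plain varispeed-style shift that simply halves the effective rate per octave):

          def top_freq_after_pitch_down(sample_rate_hz, octaves_down):
              """Highest frequency left in the material after a varispeed-style pitch-down, in Hz."""
              return (sample_rate_hz / 2) / 2 ** octaves_down

          for fs in (48_000, 96_000, 192_000):
              print(fs, [top_freq_after_pitch_down(fs, n) / 1000 for n in (1, 2, 3)])
          # 48000  -> [12.0, 6.0, 3.0]    kHz left after 1 / 2 / 3 octaves down
          # 96000  -> [24.0, 12.0, 6.0]
          # 192000 -> [48.0, 24.0, 12.0]

      So at 48kHz a single octave down already pulls the top of the file to 12kHz, while 96 or 192kHz leaves real headroom for more extreme shifts.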

    • @weschilton
      @weschilton 2 года назад

      @@Wawisupreme Did you seriously just say "pitch headroom"??? hahaha!

    • @natdenchfield8061
      @natdenchfield8061 Год назад

      What's not to understand?
      It's a good analogy that simply describes what they are talking about.

  • @Tumanic1996
    @Tumanic1996 Год назад +1

    cheers for this video

  • @JeremyHalterman
    @JeremyHalterman 3 года назад +3

    Is a higher sample rate / bit depth beneficial for tempo stretching? The shorter sample period and greater dynamic range could theoretically maintain the smoothness of the audio as it’s stretched, but if the DAW oversamples prior to the stretch, the outcome should be the same? I might be putting my ignorance on display here 🤪

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +1

      It depends on what you’re doing - theoretically yes, and theoretically no! The best way is to experiment with different sample rates in your own projects and see what works/sounds best to you. I’ve time-stretched vocals 10-15% for remixes at 44.1 or 48kHz, and there have been artefacts - but not because of the sample rate, rather because I should have re-recorded the vocal at the new tempo!

  • @peterbrandt7911
    @peterbrandt7911 2 года назад +1

    How did I miss this? Great video! It'll give some audiophiles a heart issue, but you can't fight religion, so what the heck.

  • @danield2000
    @danield2000 4 года назад +3

    It all comes down to one thing: what sample rate is the final master going to be? Any conversion in digital audio is less than ideal, and you should record at the same rate at which you are going to play back.

    • @rdoursenaud
      @rdoursenaud 3 года назад

      While this was true in the early days of digital audio, it is not really the case anymore. Sample rate converters have come a long way and now produce results that are practically indistinguishable from the original.
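      As a rough sketch of how you could check that yourself (assuming NumPy/SciPy, with resample_poly and a long Kaiser-windowed filter standing in for a modern sample rate converter):

          import numpy as np
          from scipy.signal import resample_poly

          fs = 44_100
          t = np.arange(4 * fs) / fs
          x = np.sin(2 * np.pi * 1_000 * t)                   # 1 kHz tone, 4 seconds

          kaiser = ('kaiser', 14.0)                           # steep anti-alias/anti-image filter
          up = resample_poly(x, 160, 147, window=kaiser)      # 44.1kHz -> 48kHz (ratio 160:147)
          back = resample_poly(up, 147, 160, window=kaiser)   # ...and back to 44.1kHz

          body = slice(fs // 10, -(fs // 10))                 # skip the filter's edge effects
          err = back[body] - x[body]
          rel = np.sqrt(np.mean(err ** 2) / np.mean(x[body] ** 2))
          print("round-trip error: about %.0f dB below the signal" % (-20 * np.log10(rel)))

      The exact figure depends on the resampler, but with any decent modern converter the leftover error sits far below the noise floor of the original recording.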

  • @frp_freddy
    @frp_freddy 2 года назад +1

    When I up my sample rate to 96k, my round-trip loopback latency is lower than when I'm at 48k... I'm not sure how more information would take less time? 😕

    • @PresentDayProduction
      @PresentDayProduction  2 года назад

      Sampling 48,000 samples takes exactly one second at a 48kHz sample rate.
      At 96kHz, it takes only half a second to pass 48,000 samples, as 48,000 is half of 96,000.
      So the time taken to process a fixed number of samples - i.e. the buffer size you’re using in your DAW, such as 32, 64 or 128 samples - is halved, because the number of samples per second has doubled.
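      In other words (a quick sketch of the arithmetic, with typical buffer sizes):

          def buffer_latency_ms(buffer_samples, sample_rate_hz):
              """Time needed to fill one buffer, in milliseconds."""
              return 1000 * buffer_samples / sample_rate_hz

          for buf in (32, 64, 128):
              print(buf, "samples:",
                    round(buffer_latency_ms(buf, 48_000), 2), "ms at 48k vs",
                    round(buffer_latency_ms(buf, 96_000), 2), "ms at 96k")
          # 32 samples: 0.67 ms at 48k vs 0.33 ms at 96k
          # 64 samples: 1.33 ms at 48k vs 0.67 ms at 96k
          # 128 samples: 2.67 ms at 48k vs 1.33 ms at 96k

      (Assuming, of course, that your computer can still run the session comfortably at the higher rate.)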

  • @griffin8062
    @griffin8062 3 года назад +3

    There is an advantage to higher sample rates in live sound, because a mixing console at 96k has half the input-output latency of a mixing console at 48k.

    • @AnnaVannieuwenhuyse
      @AnnaVannieuwenhuyse 3 года назад

      Can you explain how this is true? I use mostly analog consoles, so I have no idea why. I'd like to know. :)

    • @griffin8062
      @griffin8062 3 года назад

      @@AnnaVannieuwenhuyse Yeah, so digital mixers need to convert a signal from analog to digital, process it, and convert it back to analog. When processing a signal, a buffer is required for each specific operation. Let's say the EQ requires a buffer of 8 samples, so there is an 8-sample delay between input and output. Each sample is 1/96,000th of a second at 96kHz but 1/48,000th of a second at 48k. This means the EQ would take about 0.083 milliseconds at 96k but 0.166 milliseconds at 48k.

  • @seybsnilksz
    @seybsnilksz 3 года назад +2

    Hey guys, Mr. Know-It-All here. I've really been enjoying your videos since finding them today. Very informative, very well explained, and the British accent is superior :)
    I did disagree with one thing here, though, where you said that no dither is required when exporting at 24-bit. Since most (all?) DAWs operate internally at 32- or 64-bit floating point, the bit depth is indeed lowered when exporting to even 24-bit. Of course it's debatable whether it's ever audible, but truncation distortion is in fact introduced, and in my opinion a bit of inaudible dither is better than potential unwanted errors in the audio.
    Again, love your videos, and it is one of my dreams to work with a team of such nice people some day! :)
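    A minimal sketch of what that word-length reduction looks like (assuming NumPy, plain rounding for the undithered path and 1 LSB of TPDF noise for the dithered one):

        import numpy as np

        def to_24bit(x, dither=True):
            """Reduce float samples in [-1.0, 1.0) to 24-bit integer values."""
            full_scale = 2 ** 23                      # 24-bit signed full scale
            if dither:
                # TPDF dither: two uniform noises summed, +/- 1 LSB peak. It turns the
                # signal-correlated quantisation error into benign, constant noise.
                x = x + (np.random.uniform(-0.5, 0.5, x.shape)
                         + np.random.uniform(-0.5, 0.5, x.shape)) / full_scale
            q = np.round(x * full_scale)
            return np.clip(q, -full_scale, full_scale - 1).astype(np.int32)

    Either way the error sits well over 140 dB below full scale, which is why it's debatable whether you can ever hear it - but the dithered version never lets that error correlate with the music.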

    • @PresentDayProduction
      @PresentDayProduction  3 года назад +2

      Thanks for your great comment! You’re right, but as you said, whether it’s audible is another question in itself!
      Glad you enjoy our videos - this one is one of our favourites!

    • @seybsnilksz
      @seybsnilksz 3 года назад +1

      @ReaktorLeak You mean that the noise from any component during recording will be acting as dither? That wouldn't work, since the noise is printed and "frozen" into the file when recording and is then no longer random - which is why dither would be needed when rendering/exporting a new file with the material.

    • @seybsnilksz
      @seybsnilksz 3 года назад

      @ReaktorLeak When it's recorded, it's printed into the file and then it's the same every time.

  • @DaskaiserreichNet78
    @DaskaiserreichNet78 3 года назад +3

    Would you make an exception in the case of recording for sound design, where the plan is to time-stretch the recorded sound in order to pitch it down by one or two octaves?

    • @PresentDayProduction
      @PresentDayProduction  3 года назад

      No: if you pitch 10kHz down by two octaves it becomes 2.5kHz, and you only need a lower sample rate to reproduce that. The only reason to go higher would be if you actually want to pick up ultrasonics, or if it just ‘sounds better’ to you - that’s all that matters at the end of the day!

  • @olegoleg1838
    @olegoleg1838 3 года назад +2

    96 does sound richer. I don't know how, but I watched a blind-test video here on YouTube, and when you start hearing the ones at a higher sample rate the effect is like hearing the real thing more and more, as opposed to a "reproduction". It is actually a very satisfying feeling.

    • @gordongurley3982
      @gordongurley3982 3 года назад +1

      @ReaktorLeak that demo is only testing playback. I’d argue that when recording music, higher sample rates are beneficial, the most basic reason being that there is less latency through the system.

    • @gordongurley3982
      @gordongurley3982 3 года назад

      @ReaktorLeak Maybe not you, but the PDP video is arguing that there is no reason to record at anything higher than 44.1 or 48k.

    • @myproductionadvice
      @myproductionadvice Год назад

      96 and 192 do sound much better on audio devices with a poor clock, as the faster sample rates decrease the effect of jitter.

    • @petrub27
      @petrub27 Год назад

      It doesn't. The sound wave was already reproduced correctly.

  • @tommibjork
    @tommibjork 4 года назад +4

    AMEN!!! Thanks for this, this is what I've known and promoted for ages. 24/48 since 2004. Still holds true today.

    • @PresentDayProduction
      @PresentDayProduction  4 года назад

      Thanks Tommi, and thanks for watching 👍

    • @tommibjork
      @tommibjork 4 года назад

      @@PresentDayProduction It's odd that there are many well-known producers telling people to "back up your material in highest possible frequency"; in this context that makes no sense...

    • @PresentDayProduction
      @PresentDayProduction  4 года назад +2

      Tommi Björk Everyone knows 192khz stores better! Only if you keep your hard drives in the fridge though 😂

    • @dweezz
      @dweezz 4 года назад

      @@PresentDayProduction and use the correct cables

  • @sebastianobertola9014
    @sebastianobertola9014 2 года назад +1

    Thank you soo much man!

  • @pgstudio
    @pgstudio 3 года назад +4

    One thing to keep in mind is that for sound design there are some mics that can record up to 100kHz, and that is a reason to record at 192kHz: you can pitch a source down A LOT and still have some crispy high end.

    • @tomislavsekerija1957TN
      @tomislavsekerija1957TN 3 года назад

      And you still only hear up to 20kHz. The point is?

    • @barthe7606
      @barthe7606 3 года назад +1

      @@tomislavsekerija1957TN If you pitch a sound down a lot, the ultrasonic information comes down into the audible range. If that information doesn't exist (typically if you recorded at 44.1kHz), the pitched-down sound may be missing it and won't sound right across the spectrum, and it can even produce a sort of ugly distortion that has to be filtered out (the same sort you get when using a bitcrusher effect to artificially reduce the sample rate). In sound design for film or video games, pitching sounds down is a very, very common effect. That being said, for most production scenarios I agree with what's being said in the video.

    • @barthe7606
      @barthe7606 3 года назад +1

      ​@@Indrid-Cold You already found 2 examples that contradict your first statement. Some crickets also make sounds that go beyond the audible range that can be beautiful when pitched down, and transients from everyday sounds and instruments can go well beyond 20kHz. The fact that you don't find a use in them doesn't mean they're of no interest to anyone (typically audio-naturalists, field recordists, sound designers, sound artists, etc., for study as well as for sound creation). One example of such a microphone would be the Sanken CO-100K, but there are others going up to 50kHz (Sennheiser, Primo).

    • @barthe7606
      @barthe7606 3 года назад +1

      @@Indrid-Cold Hi Indrid, I'm sorry you took those comments (Paulo's and mine) this way. Maybe it's a problem of RUclips comments not conveying tone. What I saw in Paulo's original comment wasn't that it contradicted the video's main point (it didn't, and didn't claim to). It just answered the video's invitation to "comment" and brought further (and interesting) information. I don't get why you need to feel vexed about it to the point of calling it "silly" and "know-it-all" and using sarcasm: it wasn't false, it wasn't insulting, it wasn't aggressive, it wasn't contemptuous, and it wasn't written to embarrass anyone. From my point of view it doesn't undermine the great quality of this video or this channel, which I just discovered and which I think is awesome.
      As for context, I will gladly agree with you on your point, you're absolutely right: in 99% of pop and traditional music genres, these sounds won't be used. However, I stand by what I said earlier. The fact that you don't use them doesn't mean no one does (even across the entire music industry). Think of the ambient scene, sound art, movie soundtracks; loads of creative ideas come from niches and other fields.
      By the way, this comment is not an attempt to antagonize you, I'm just sharing thoughts about music genres I like. I've never yet tried making music with ultrasonic material; this kinda makes me want to. If I ever make something good out of it I'll post it here.
      Peace!

    • @TheJonHolstein
      @TheJonHolstein 3 года назад

      @@barthe7606 Using really high frequencies for sound design seems cumbersome, unless it's down-pitching crickets and similar sounds for an audience that can no longer hear them, so the use for typical listeners would be quite limited. If that was the kind of sound design the original commenter was thinking of, it was a bit unclear, so I get why people would feel skeptical seeing such a comment on a video that points out that higher sample rates aren't useful for music.

  • @nachtaktiv
    @nachtaktiv 3 года назад +2

    I'm using 44.1/24. To my knowledge, Spotify also uses 44.1, and almost every sample that I have is 44.1. I don't want to convert every sample that I import. In some DAWs (like Ableton) you don't notice it, but in Cubase there's a conversion that takes a few seconds and generates a new file at the new sample rate. And I think upsampling doesn't result in better quality.

  • @austinbuttenob4849
    @austinbuttenob4849 4 года назад +3

    As a video guy, the comparison between higher sample rates and frame rates was great!

  • @jonlieberman997
    @jonlieberman997 Год назад +1

    Great study