Logic Pro #67 - Digital Audio Fundamentals

  • Published: 28 Aug 2024

Comments • 52

  • @MusicTechHelpGuy
    @MusicTechHelpGuy  8 months ago +3

    In this video I give an overview of digital audio fundamentals, including analog vs. digital audio, digital audio conversions, sampling, sample rate, bit depth, dynamic range, floating point bit depths, roundtrip latency and I/O buffer. Enjoy!
    Support the sponsor of this video, SKAA Wireless Audio ➛ www.skaa.com
    Check out the available SKAA products here ➛ www.skaastore.com
    Download my 35-part Logic Course here! ➛ www.logicproguide.com
    For mixing/mastering work, contact me at my website ➛ carneymediagroup.com
    Follow MusicTechHelpGuy on Instagram ➛ instagram.com/musictechhelpguy
    Support the channel on Patreon ➛ patreon.com/musictechhelpguy
    Chapters:
    0:00 Overview
    1:44 SKAA Sponsor Segment
    2:59 Analog vs. Digital
    6:23 Digital Audio Conversion
    8:27 Sample Rate
    14:04 Bit Depth & Dynamic Range
    20:11 Floating Point Bit Depths
    22:40 Latency & I/O Buffer

  • @tomlewis4748
    @tomlewis4748 7 months ago +3

    When I first got into recording in Logic, your videos were the most helpful of anything (other vids, courses, books, etc.). This video is a great place for newbies to start, because yes, a lot of this can be confusing at first blush. So, kudos. This is really well thought out, and I'm sure helpful.
    The most mysterious part of digital audio for me is 32-bit float. Somehow you can magically track above zero (!) in Logic, at least when the source is not external (which explains the orange rather than red indicators), with no clipping penalty, noise-floor penalty, or change to the sound. I've heard it said that the theoretical dynamic range is around 1500 dB, which blows my mind. I can't begin to wrap my brain around how that can be possible (but then I usually slept through math class). Logic since about v10.3 processes at 32-bit float by default, yet I was not aware until right now that Logic could record analog inputs using 32-bit float.
    Two other things about digital audio that folks transitioning from the analog world should understand: 1) in digital, wherever you set a level only changes how much dynamic range you have left, i.e., how far down the noise floor is. Since that floor starts at -144 dB in 24-bit, setting a level at, say, -18 is really not going to be any different from -2. The difference will be inaudible. (I still track just under zero when possible.)
    In analog, the goal is either a) to stay within the transfer curve (which is not flat at the extremes) or b) to peak into the curvy part of the transfer curve to change the sound by creating saturation. That, plus the noise floor being more like -60 dB in analog, makes level-setting very critical. The technique changes considerably when moving to digital, because unlike in digital, where you peak the signal in analog changes how it sounds beyond just the loudness.
    Then there is this: 2) the noise floor in analog is thermal-noise hiss, which is benign and even pleasing to the ear in limited amounts, while the noise floor in digital is more like a buzz saw, because it is quantization error. And peaking above 0 dBFS in digital can sound really nasty, while pushing into -3 VU to +3 VU (depending on how that is calibrated) in analog can add harmonic distortion (saturation) that is also pleasing to the ear, as it pushes into that curvy part of the transfer curve.
    So the extremes in analog are very different from the extremes in digital, and this fundamentally changes how we set levels. In digital, saturation can only be added after the fact.
    One last point: a single instrument recorded at 24-bit will be inaudibly different from the same instrument recorded at 16-bit. But when you have 80-100 tracks, the combined noise floor keeps rising as the tracks sum, so the effective dynamic range becomes a factor (the floor sits 48 dB lower at 24-bit), and that can make an audible difference with lots of tracks (rough numbers sketched below).
    16-bit is nothing more than a relic of CD tech in 1982 (although in compressed form and on streaming services it lives on and isn't a problem, because that is dealing with just one stereo track). But it can be a problem in a mix: this exact problem. Go 24-bit always. Or 32.
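    A rough back-of-envelope sketch in Python of the numbers mentioned above (standard textbook formulas only, assuming ideal converters and uncorrelated noise between tracks; real-world figures come out lower):

    ```python
    # Idealized dynamic-range figures; real converters and noise behavior fall short of these.
    import math

    def fixed_point_dr_db(bits: int) -> float:
        """Theoretical dynamic range of an N-bit fixed-point format (~6.02 dB per bit)."""
        return 20 * math.log10(2 ** bits)

    def float32_dr_db() -> float:
        """Rough dynamic range of IEEE 754 single precision (largest / smallest normal value)."""
        largest = (2 - 2 ** -23) * 2 ** 127
        smallest_normal = 2 ** -126
        return 20 * math.log10(largest / smallest_normal)

    def noise_floor_rise_db(num_tracks: int) -> float:
        """How much the combined floor rises when summing uncorrelated noise from N tracks."""
        return 10 * math.log10(num_tracks)

    print(f"16-bit: {fixed_point_dr_db(16):.0f} dB")            # ~96 dB
    print(f"24-bit: {fixed_point_dr_db(24):.0f} dB")            # ~144 dB
    print(f"32-bit float: {float32_dr_db():.0f} dB")            # ~1529 dB (the '1500 dB' figure)
    print(f"100 tracks: floor up ~{noise_floor_rise_db(100):.0f} dB")  # ~20 dB
    ```

    By this model, summing 100 tracks raises an uncorrelated noise floor by roughly 20 dB, which is enough to start eating into 16-bit's ~96 dB while leaving 24-bit's ~144 dB with plenty of margin.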

  • @RichMoyers
    @RichMoyers 8 months ago +2

    Bravo Josh. I spent more than 20 years in analogue studios, from manual 4-track to automated 48-track, as a pro recording artist / session musician / record producer from '61-'82... I truly appreciate your articulate, concise, and readily understood presentation of often arcane concepts and technical nomenclature in this video... an invaluable service to both "newbies" and those of us old dogs still struggling to adapt our chops/execution and technical understanding to the whole "digital audio in the box" world where most music recording occurs today. Learned so much, thank you!

  • @bassManDavis1953
    @bassManDavis1953 7 months ago +2

    Wow Josh! Defo one of the most interesting concepts I have ever watched. Absolutely brilliantly delivered and so worth watching, thank you so much, I learnt so much. Gary

  • @DanMia-cd1bm
    @DanMia-cd1bm 1 month ago

    We are all watching a Great Teacher...

  • @5kMagic
    @5kMagic 8 months ago

    I’m 6 mins in and already such a brilliant presentation of pro audio basics - better than anyone who’s ever done it! Bravo sir.

  • @steensvanholm
    @steensvanholm 8 months ago

    Great start to the long-awaited audio series. Concise and useful explanations, as always. Looking forward to the coming episodes.😊

  • @A.JayWeber
    @A.JayWeber 1 month ago

    Looking forward to that genre mix course!

  • @cameronpatrickscott
    @cameronpatrickscott 8 months ago +1

    A welcome return, thanks Josh

  • @garygimmestad4272
    @garygimmestad4272 8 months ago

    I came to this video with some general knowledge of these concepts but very limited knowledge regarding how they interact. It's beginning to make sense thanks to your excellent teaching. I'll be re-watching this one and taking notes! Thanks!

  • @BenKrisfield
    @BenKrisfield 8 months ago +2

    Great work, very generous with your knowledge, cheers. 🔥🔥

  • @jamiesmith7046
    @jamiesmith7046 8 months ago

    Thanks mate, love learning new things..🙂

  • @tubegee2833
    @tubegee2833 8 months ago

    Absolutely brilliant!

  • @johanlundkvist1388
    @johanlundkvist1388 8 months ago

    Thank you for making these videos man! 🙏🏻👍🏻

  • @thormusique
    @thormusique 8 months ago

    This is brilliant, thank you!

  • @enisilhansmellsgood
    @enisilhansmellsgood 8 months ago

    Thank you so much for your effort! It’s huge!

  • @maplestrat
    @maplestrat 8 months ago

    Thank you, I am learning a lot about digital recording from your presentation.

  • @budgetkeyboardist
    @budgetkeyboardist 8 months ago +1

    Good stuff. Regarding latency, I think it is super dependent on the musician and context. If I'm singing a vocal with headphones, I can definitely tell 5 ms or more, so I record with an Apollo Solo and monitor (with effects) through that. That gives me around 2 ms. If I'm playing guitar, I'm fine with 5 ms, but I'd definitely be annoyed by 20 ms. I think singing in headphones is the worst-case scenario for latency. Something about the way your head is vibrating as you sing. I think it makes even the smallest amount (say 5 ms) "feel" a little weird. Or I might be weird myself.
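    For context, the DAW-side part of that latency scales directly with the I/O buffer size and sample rate. A minimal sketch of the arithmetic (it ignores converter and driver overhead, which add a bit more, and hardware/DSP monitoring like the Apollo bypasses this path entirely):

    ```python
    # Rough estimate of the I/O-buffer portion of monitoring latency.
    # Converter and driver overhead (a few ms round trip) are not included.
    def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
        return 1000.0 * buffer_samples / sample_rate_hz

    for buf in (32, 64, 128, 256, 1024):
        one_way = buffer_latency_ms(buf, 48_000)
        print(f"{buf:>4} samples @ 48 kHz: {one_way:5.2f} ms one-way, ~{2 * one_way:5.2f} ms round trip")
    # 64 samples @ 48 kHz is about 1.3 ms one-way; 1024 samples is about 21.3 ms.
    ```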

  • @carlgrainger2669
    @carlgrainger2669 8 months ago

    Excellent

  • @easyvelvet77
    @easyvelvet77 8 months ago

    Amazingly clear for this amount of information! Thxxx!

  • @SaschaTayefeh
    @SaschaTayefeh 8 months ago +1

    There's another very good reason to make use of higher sample rates, and you also mentioned it: aliasing, per the Nyquist sampling theorem. This matters in particular when using saturation; even lower-frequency material becomes important once it's saturated, because saturation often produces high frequencies. So I tend to say: the best quality/storage/performance trade-off for me is 96 kHz, just because of that ;-)

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +2

      Exactly. Aliasing foldover gets "baked" into the recording at 44.1 and 48. If you track at 88.2 or higher you can typically avoid the foldover, and then when you mix and bounce the final master at 44.1 kHz, the aliasing filter in the DAW takes care of the foldover from the conversion. Whereas if you recorded at 44.1, the foldover is permanently there. I really only hear it in cymbals, synthesizers, sometimes in higher-pitched guitar solos, or orchestral or chamber recordings.
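      To put a number on that foldover: anything above the Nyquist frequency (half the sample rate) reflects back into the band below it. A minimal sketch of the arithmetic, ignoring the anti-aliasing filters a real converter or plugin would apply:

      ```python
      # Where a frequency above Nyquist lands after sampling (idealized, no filtering).
      def alias_frequency(f_hz: float, sample_rate_hz: float) -> float:
          nyquist = sample_rate_hz / 2
          f = f_hz % sample_rate_hz
          return f if f <= nyquist else sample_rate_hz - f

      # A distortion/synth harmonic at 30 kHz:
      print(alias_frequency(30_000, 44_100))  # 14100.0 Hz: folds into the audible top end
      print(alias_frequency(30_000, 88_200))  # 30000.0 Hz: stays above hearing, easy to filter later
      ```

      The folded 14.1 kHz component has no harmonic relationship to the source, which is why it reads as dissonance or lost clarity rather than extra brightness.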

    • @SaschaTayefeh
      @SaschaTayefeh 8 months ago

      @@MusicTechHelpGuy It depends on your genre. Some rely heavily on (over)saturation of virtually each and every track. In such a case, a mixing engineer's ears may hurt from badly written saturation plugins at

  • @mathias9517
    @mathias9517 8 months ago +1

    Such a great video and everything was so well explained. If you had chosen to include aliasing, Nyquist and so forth I would have gladly watched that as well. 😀 Thank you so much!

    • @leefoster4171
      @leefoster4171 8 months ago

      Informative as ever Josh - great content. Have a great new year 👍🏻

  • @wrzkace1
    @wrzkace1 8 months ago

    Great video, thank you. I've learned quite a few things.

  • @rolandgerard6064
    @rolandgerard6064 8 months ago

    Many thanks, great tutorial.

  • @jac37040
    @jac37040 7 months ago

    Hi Josh, love your work, have for years! Have you considered making a lesson or side series to Logic Pro for a topic that I don't think you have specifically focused on (unless I just can't find it) between this series and your original Logic Pro series from 7 years ago? This would be working with analog multitrack tapes transferred to digital, and how to get the best out of what you have left and get past the anomalies involved. For me, when I was in high school in the early 90s, my father had a Soundcraft Spirit 16-channel mixer and a Tascam MSR-16 tape machine, and I learned how to use them and recorded lots of music between high school, college, and all the way up until 2016. I sold the equipment and had all my 1/2-inch analog tapes transferred to WAV files, as tape shed had started to set in badly. One thing I never realized is that the old Soundcraft mixer's power supply always gave off a little noise, and somehow that noise made its way onto the tape; I never noticed it until years later, playing the tracks back in Logic and not hearing the normal hum of the mixer's supply (as it was always on anyway). So as an example, I'm trying to learn how to filter out some of that noise (trying things like the iZotope RX 8 de-hum features to remove or reduce it, and they do help some), but so far, looking around YouTube, tutorials on how to properly work with transferred analog tape are few and far between. I'm usually just finding people teaching tape restoration and baking techniques (which I actually had to do on my old tapes), but nothing on post-transfer production techniques in Logic. If you have this out here already, or have any other links, or anyone here knows of any, please share, as this is something I really have fun playing around with: seeing if I can improve a mix of a song I recorded in 1993 and mix it better in Logic in 2024, or trying to remove a piano bench squeak, using plug-ins we didn't have in 1995, from a song recorded in a professional studio on 2" 24-track tape. Thanks as always Josh!

  • @vewilli
    @vewilli 8 months ago

    Now that I have watched #67 (I watched #68 first, then #67 :-) ), the answer to my question is given: the reason I can't see a 32-sample buffer size option is the audio interface (in my case a Mackie Big Knob Studio+). I've also got an Antelope ZenQ interface which might offer 32 (haven't checked that yet).
    But this video opened my eyes to so many things. Thanks so much to the workaholic and Logic expert MTHG …

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      Yes, there are some interfaces that don't support 32 for some reason. I've seen this before as well. It's no big deal; just use 64 when tracking and you'll be just fine. You'll sometimes see buffer sizes higher than 1024 as well, especially when using higher sample rates.
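      One way to see why larger buffer options appear at higher sample rates: the buffer is measured in samples, so the same sample count costs fewer milliseconds as the rate goes up (a small sketch, with the same caveat as above about extra converter/driver latency):

      ```python
      # The same buffer size corresponds to less time at higher sample rates.
      def ms(buffer_samples: int, sample_rate_hz: int) -> float:
          return 1000.0 * buffer_samples / sample_rate_hz

      for rate in (44_100, 48_000, 96_000):
          print(f"1024 samples @ {rate:>6} Hz ≈ {ms(1024, rate):.1f} ms one-way")
      # About 23.2 ms at 44.1 kHz, 21.3 ms at 48 kHz, 10.7 ms at 96 kHz; so a
      # 2048-sample buffer at 96 kHz costs roughly the same time as 1024 at 48 kHz.
      ```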

    • @vewilli
      @vewilli 8 months ago

      @@MusicTechHelpGuy Thank you for your answer, and congratulations on your super videos! I've seen so many of them already, and I'll keep watching more of them in the future. Kind regards from Austria.

  • @strangeattractor4959
    @strangeattractor4959 8 months ago +1

    Ready with my Supreme coffee for another tutorial! I've got a question: I've been using the scenes in Logic Pro for techno, and it's very useful to put a DJ mixer filter VST on the stereo output to manipulate all the channels in a live performance. My question is: should I use a bus channel instead of the stereo output channel? What's the difference, pros and cons, of using the stereo output vs. a bus channel? Thank you!

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      You can totally do it with the stereo output; you're not going to hear any difference in quality or anything by putting it on a bus. I usually put the filter on a track stack, and feed all of the tracks that I want the filter applied to into that stack. In many cases that ends up being all of the instrumental tracks. But for example, if you wanted to leave the vocals or some of the instruments unaffected by the filter, a track stack or bus might be a better option. The upside is that with Logic 10.7 they made summing stacks inside of summing stacks possible, so it makes this process a little easier.

    • @strangeattractor4959
      @strangeattractor4959 8 months ago

      @@MusicTechHelpGuy I will do that, thank you for the quick response!

  • @pianomaniac14
    @pianomaniac14 8 months ago +1

    Hey Josh, thanks for this excellent tutorial! It helps me know how to teach these concepts better as an instructor. My question is: how would I explain the difference between "performance" dynamics (what a musician thinks of when we think of musical expressiveness) and dynamic range in audio production like we're discussing here? I notice that when I talk about compression and reducing dynamic range with (particularly) a classical musician, they start to feel uncomfortable. What's a more accurate way to describe what I mean to a musician with limited technical understanding?

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      The way I like to think about it: dynamic range in a live music situation is limited only by the structural characteristics of the instrument itself, the acoustic space you're in, the microphone you use, and the mic preamp you use. Playing too loud can make the mic over-modulate, or can make the preamp clip. With both analog and digital recordings, you're taking that real-world dynamic range and fitting it into a "box". You have to; it's just a limitation of the recording media. In the "box", the ceiling is the loudest peak level the recording can be, while the floor is where noise will eventually overtake the softest elements in the recording. That said, in most digital recordings you cannot hear the noise if the mics and preamp have been gain-staged properly. Recording at too low a level can be a cause for hearing the noise floor. So you might have this huge dynamic range in a live orchestral or chamber recording, but you ultimately have to "size down" that live dynamic range to fit into the digital space (the "box"). When I did orchestral, band, and chamber recordings back in the day, I always liked to add just a little bit of compression to the main stage mics. Not a lot, but just enough to make the softest passages more tolerable when listening back to the recordings. Another consideration is how the music is going to be consumed. Noise can be introduced at the amplifier, speakers, or headphones as well. It all adds up, so it's best to get the recorded signal away from the noise, even if that means adding a touch of compression to live recordings. Orchestral and chamber music benefits quite a bit from recording at 32-bit float, because if the players play louder at the performance than they do at the sound check, you can typically get about 8 dBFS back if the recording clips.

    • @pianomaniac14
      @pianomaniac14 8 months ago +1

      @@MusicTechHelpGuy I really like that idea of fitting the sound into a "package". When you go to a live performance, the sound is all around you. In a recording scenario, you take the sound and fit it into a "package" so you can distribute it. I did recording and post-production for a former classical piano teacher of mine who has his DMA in piano performance. He was doing some solo piano recordings, and he is one of those artists who thinks that the more "acoustic" things are, the more expressive and "natural" they will be, so it was hard to explain the reason and need for post-production. He believed that if you simply had a good acoustic space and a good mic, you shouldn't need to touch anything, or even master for that matter. I think the one thing he didn't understand was that post-production isn't just about enhancing the sound; it is RESTORATIVE, because mics do not hear sounds the way our ears do! Have you ever had this problem working with classical musicians?

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      @@pianomaniac14 Exactly, yes. The reason post-processing is needed is that microphones only give you a small fraction of the live environment, and they too have limitations in terms of their frequency response, dynamic range, and color. The more microphones you add to the space, the better you capture the space, but this also raises the noise floor and has the potential for phase issues. I've found that a well-placed spaced stereo pair above the stage works really well. For larger groups, you can add some far-left and far-right mics. For piano it helps to add a mic or two inside the piano and blend them together with the stage mics.

    • @pianomaniac14
      @pianomaniac14 8 months ago

      @@MusicTechHelpGuy For studio acoustic piano recordings, I like using a spaced pair of SDCs. Sometimes I'll also do a NOS array and then blend close mics and room mics together (phase-aligned, of course).

  • @joshhanson9321
    @joshhanson9321 8 months ago +1

    Logic Pro claims that you need an interface that supports 32-bit float for it to be of any benefit. Do you use an interface that supports it?

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      It is dependent on whether your interface is actually capable of recording at 32-bit (or float). For example, my Symphony I/O is 24-bit, so even if I choose 32-bit float and try to clip the mic, the max clip level is just going to be 0.0 dBFS. It's just getting truncated. Most converters these days still max out at 24-bit, but you'll see a few really high-end ones that support 32-bit, and a lot of field recorders for work in film and TV support 32-bit, obviously so that dialog tracks and set audio can be normalized back down below clipping if needed.
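      A toy illustration of that truncation (hypothetical sample values, not a model of any specific interface): a fixed-point front end clamps everything at full scale, while a genuine float capture keeps the overs so they can simply be gained back down afterwards.

      ```python
      # Toy comparison of a clipped take captured two ways (1.0 = 0 dBFS full scale).
      def capture_fixed_point(samples):
          """Clamp at full scale, like a conventional 24-bit converter path."""
          return [max(-1.0, min(1.0, s)) for s in samples]

      def capture_float(samples):
          """Keep values beyond full scale, like a true 32-bit-float recording chain."""
          return list(samples)

      hot_take = [0.5, 1.6, -1.3, 0.9]                        # peaks about 4 dB over full scale
      clipped = capture_fixed_point(hot_take)                 # [0.5, 1.0, -1.0, 0.9]: tops are flattened
      recovered = [s / 1.6 for s in capture_float(hot_take)]  # pull the gain down later; shape intact
      print(clipped)
      print(recovered)
      ```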

  • @giabgr
    @giabgr 8 months ago

    Higher sample rates don't provide more accurate high frequency recording. It's just that higher frequencies can be recorded with higher sample rates. You know about Nyquist. You know about this, you just forgot.

    • @MusicTechHelpGuy
      @MusicTechHelpGuy  8 months ago +1

      I didn't forget anything. If you are dealing with very high-frequency content (particularly software synthesizers and instruments, and other processing in the box), it's not just that you can capture higher frequencies; you are also preventing foldover, not just in the initial recording but in the post-processing as well. People forget that, and people try to argue this topic with me on occasion. You're not the first, and you probably won't be the last, and that's totally cool. This is why I very carefully chose my words in this video and omitted the Nyquist and aliasing parts, because frankly, I'm not that concerned about sample rate. Like I say in the video, no one ever ruined a recording by using 44.1 kHz, because the benefit of using higher sample rates is like a 1% audible improvement, and most people don't even hear it in the context of a full mix, especially with microphone recordings run through high-end converters with good anti-aliasing filters. However, that said, if you record or process some very high-frequency content at 44.1 kHz, you can not only hear but also visually see (in any EQ with a visualizer) the foldover harmonics coming back down onto the recording at non-harmonic intervals, causing a noticeable loss of clarity in the top end of the recording. I have seen it and heard it myself on many occasions. It's about the foldover harmonics causing slight dissonances with the upper audible harmonics. The clarity is different at higher sample rates. Does it mean you should record at higher sample rates? No, just like I say in the video. Does it mean there is a measurable difference? Yes, albeit not worth the trade-off of eating up your hard drive space.

    • @giabgr
      @giabgr 8 months ago

      @@MusicTechHelpGuy OK, processing is a different story, and I agree with you there. But you implied that higher rates record higher frequencies "more accurately" when talking about cymbal capture at the recording stage (with a graph). Higher sampling rates could help, but by pushing less-than-perfect anti-aliasing filters further away, not because more samples capture higher frequencies "more accurately". Any sample rate captures a particular frequency just as completely; it's just that higher rates can go beyond.

    • @ScottFuckinRitchie
      @ScottFuckinRitchie 7 months ago

      @@giabgr A higher sample rate has more samples per second. 48 kHz will take 48,000 samples per second. This allows for more accurate recording of frequencies above what the human ear can hear. There are harmonics and overtones produced by higher frequencies, though, that might be lost at a lower sample rate.

    • @giabgr
      @giabgr 7 months ago

      @@ScottFuckinRitchie You completely misunderstand digital audio. The extra 4,000 samples are capturing an extra 2 kHz. That is all.

    • @ScottFuckinRitchie
      @ScottFuckinRitchie 7 months ago

      @@giabgr Increasing the audio bit depth, along with increasing the audio sample rate, creates more total points to reconstruct the analog wave.