5 Mastering Settings That F*ck New Producers & Mixing Engineers

  • Published: 23 Jan 2024
  • ⭐ STEM MASTERING MASTERCLASS - www.panoramamastering.com.au/...
    📧 The BEST newsletter for working audio professionals who strive for more! Subscribe here - www.panoramamastering.com.au/...
    🎹 Proud of your mix? AWESOME! Get a fresh set of ears on board for mastering and let’s work together! - panoramamastering.com.au
    🛠️ Mastering Template for Pro Tools, Logic and Ableton - www.panoramamastering.com.au/...
    --
    Hello, I'm Nicholas Di Lorenzo, Studio Owner, Mixing and Mastering engineer at Panorama Studios.
    I'm an Italian-Australian born and raised in Melbourne. I've been a creative professional for 10 years managing some pretty awesome projects for artists, labels and producers all around the globe.
    What motivates and drives me?
    My family,
    Good food,
    Great coffee.
    You can find me on many platforms:
    Instagram: / panorama_mastering
    Facebook: / panoramamastering
    Twitter: / panoramamasters
    Kit: kit.co/Panorama_Mastering

Comments • 199

  • @bakerlefdaoui6801 · 6 months ago · +27

    You are such an audio geek. It is so refreshing! Some actual technical knowledge sharing instead of random tricks and tips. I love your channel!

  • @TheNickmeeks1 · 6 months ago · +7

    With regard to whether you should upsample and then downsample, I have some thoughts. You mention at the start of the video that you recommend a ceiling of -0.2 with 8x oversampling, so the audio has already been sampled up and back down once. Now, if you have multiple plugins with oversampling turned on, each one is scaling up and scaling down in turn, and they are probably not using the same kind of filters as each other. When you stack different filters they start doing really odd things. So I'd say it might be better to upsample once, do all your processing, and then do one downsample.
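
    A quick way to test this, assuming numpy and scipy are available: each resample_poly round trip below stands in for one plugin's own up/down oversampling stage, so comparing one round trip against many shows how the stacked filters compound. A sketch, not a rigorous SRC benchmark:

    import numpy as np
    from scipy.signal import resample_poly

    rng = np.random.default_rng(0)
    x = rng.standard_normal(48_000)  # 1 s of noise at 48 kHz: a worst-case signal

    def round_trips(sig, times):
        # one 48k -> 96k -> 48k cycle per simulated "oversampling plugin"
        for _ in range(times):
            sig = resample_poly(sig, 2, 1)
            sig = resample_poly(sig, 1, 2)
        return sig

    for n in (1, 10, 100):
        residual = round_trips(x, n)[: len(x)] - x
        db = 20 * np.log10(np.abs(residual).max() / np.abs(x).max())
        print(f"{n:>3} round trips: null residual peaks at {db:.1f} dBFS")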

  • @andreabrunori · 6 months ago · +8

    I'm for upsampling the mix to 96 and doing the mastering at 96, even if the mixed file is at 48. Downsampling introduces low-pass filtering, and if you use oversampling in your distortion or clippers or limiters, you are actually upsampling and then downsampling every time. So it's better to upsample once, then downsample at the end, rather than having the plugins do it several times (if I have several plugins in my chain).

  • @ryde2012 · 6 months ago · +14

    I think Dan Worrall did an in-depth video on upsampling, re: your last question.

  • @Mr_Audio · 1 month ago

    I love how deep you go into the nerdy stuff.
    The reason I always work in 48k is that some plugins don't work at higher sample rates, so they'll downsample your audio anyway.
    Some plugins also work better at higher sample rates because of aliasing filters. But like I said, some of them don't work above 48k.
    And at a sample rate of 96k your productions take up a lot more space.
    So 48k is where I sit.

  • @EG_John · 6 months ago · +14

    3:10 My friend, Natural Phase is a FIR mode in Pro-Q. Only Zero Latency is IIR and does not have any pre-ringing.

    • @dsp-bird · 4 months ago

      good comment (if only it were true) ((it's modified IIR))

    • @EG_John · 4 months ago

      @dsp-bird It's easy to test your hypothesis with just two simple steps. First: add Pro-Q to a track in Natural Phase mode and check ❓ whether there's a delay. The answer is ✅ yes, there will be a 320-sample delay. Second: process an impulse with strong cuts and boosts in Natural Phase mode and check ❓ whether ripples occur before the impulse. Again, ✅ yes, ripples will appear, and their duration will match that of the 320-sample delay. So the answer is clear: ☑ Natural Phase is a FIR mode. (A sketch of this impulse test follows the thread below.)

    • @dsp-bird · 4 months ago

      @EG_John The latency would be a lot longer if it were pure FIR. However, you are right in your observations, which are typical of FIR. Most likely it's a basic IIR modified with FIR-like characteristics, something like what's shown in the research paper "Accurate Discretization of Analog Audio Filters with Application to Parametric Equalizer Design".

    • @EG_John · 4 months ago

      @dsp-bird In fact, the length of a FIR filter can be arbitrary. The main thing is that it must be long enough to process the signal at the selected frequency resolution. For phase correction at the higher frequencies, a kernel length of 48 samples may be sufficient. FabFilter Pro-Q settles on 320 samples, which provides enough resolution to work with not only the high frequencies but also the upper mids. BTW, after further analysis, I believe there is no IIR mode in Pro-Q at all. Everything is implemented as FIR, including Zero Latency mode, which is a minimum-phase FIR. You can see that the ringing cuts off at the end of the impulse and does not fade out smoothly as it would in an IIR. Which is logical, as it simplifies the design of the equalizer by unifying its codebase across the different modes.

    • @dsp-bird · 4 months ago

      Well, to be fair, that is the other possibility, but I still believe modified IIR is the case for both, mainly because the performance and latency benefits would hint in that direction.
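
    For anyone who wants to reproduce the impulse test described in this thread, here is a rough numpy/scipy sketch: build a generic linear-phase FIR boost, filter a unit impulse, and look for energy before the main tap. The 321-tap length is an arbitrary choice for the demo; the 320-sample Pro-Q figure above is the commenter's own measurement, not something this sketch confirms.

    import numpy as np
    from scipy.signal import firwin2, lfilter

    fs = 48_000
    numtaps = 321                            # odd length: exact linear phase
    # narrow-ish +12 dB bump around 1 kHz, specified as linear gains
    h = firwin2(numtaps, [0, 900, 1000, 1100, fs / 2], [1, 1, 4, 1, 1], fs=fs)

    impulse = np.zeros(2048)
    impulse[0] = 1.0
    y = lfilter(h, [1.0], impulse)

    delay = numtaps // 2                     # group delay of a linear-phase FIR
    pre_ring = np.abs(y[:delay]).max()       # ringing before the delayed impulse
    print(f"delay: {delay} samples, max pre-ringing: "
          f"{20 * np.log10(pre_ring):.1f} dBFS")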

  • @junovue · 6 months ago

    All applicable right away. Thank you for the perspective!!

  • @dstagl · 6 months ago · +9

    Have you ever done any experimentation or testing with the ceiling and how streaming codecs might impact/create intersample peaks?

    • @panorama_mastering · 6 months ago · +5

      Almost every session; iZotope Ozone Codec Preview! 96kbps AAC

  • @billpodolak7754 · 6 months ago · +2

    You're on fire with these videos lately!! 🔥🔥🔥

    • @panorama_mastering · 6 months ago

      Thanks bud! I hope you've been doing super well! We need to catch each other on a call sometime soon!

  • @Tom-tv4ok · 6 months ago · +1

    Fantastic tips! Keep it up, brotha! Love your channel. The nerd level is what we all need.

  • @marcoshehata8968 · 6 months ago

    Great video! Thanks for sharing.

  • @admyabanwala2453 · 5 months ago · +3

    The first time I saw your page, I loved it and ended up subscribing. For a little while I forgot your page existed... rediscovered it today via a video recommendation. I played it and, again, loved it, only to realize later that I was already subscribed.

  • @charliebernardo2050 · 5 months ago · +1

    Thank you, this was the most informative video I have seen in a while. Subscribed.

  • @ErikSpott · 5 months ago

    You, my dude, are a teachers' teacher.
    Very well explained, and I now have some more things to think about!!
    But OK, listen to this, my guys and gals.
    My workflow is in Cockos REAPER. It automagically converts any sample rate to whatever.
    Set the project sample rate to 88.2 in project settings (but your interface's actual rate can be 44.1, whateva!).
    There is a setting somewhere to bounce audio within a project at the project sample rate.
    Mix in 44.1 to save CPU. And whenever you are happy with the sound of a bus or a track after processing (even upsampling, or whatever), freeze it or bounce to audio.
    Bam! You've got a track with a shitload of processing, no CPU usage and no aliasing.
    Think about that!

  • @ampersand64 · 6 months ago · +10

    I think there might be an easy explanation for the difference in the volume of the null when upsampling a song vs. a Dirac spike.
    The differences between the original signal and the upsampled version occur mostly at frequencies near Nyquist. They're the result of filter ringing and/or phase shift (depending on how it was downsampled).
    Normal music doesn't have much content above 20 kHz, so the differences are small. But a Dirac spike is essentially white noise, so it contains all frequencies equally, including unnaturally high ones.
    Additionally, not many people can hear 20 kHz that well, let alone PRE-RINGING on the TRANSIENTS up there.
    On top of that, if your track sounds like shit above 16k, chances are that no one cares. Everyone's used to data-compressed audio anyway. If there's any part of the audio that is acceptable to "sacrifice" to prevent aliasing, it's this range.
    For these reasons, I believe it's safe to upsample and downsample many more times than you'd think before anything becomes audible.
    But, as with all things, it should be decided by listening. Perhaps a blind listening test between samplerate-converted music and native-samplerate music is in order. I'd like to see if a mastering engineer can tell the difference!

    • @panorama_mastering · 6 months ago · +3

      Nice observation!

    • @MrPete0282 · 5 months ago · +1

      Beginners and audio... First of all, even if you can't hear a 17k sine wave, you can hear a square wave, which means that in music, when harmonics are present, your ears are much more sensitive; there's also intermodulation distortion from your speakers. There are no magic frequencies that can be discarded entirely, and you shouldn't even consider 16k that high in a full mix, but there are meaningful trade-offs, like AA filters.
      A typical noob mistake (I've done it too) is having a dark mix and boosting the hi-hat, thinking: oh, that's it, now my mix is bright...
      Another misconception is that digital LP filters only affect the highs. That's wrong: ringing and pre-ringing happen across the whole band, and in most cases SRC sounds like I've clipped my mix, so I avoid it. Except Saracon, which I've never listened to; I've only seen that it does well in that area!
      As for null tests, I've noticed that nulling down to ~-100 dBFS doesn't mean much. The old Sonalksis EQ already nulls against most EQs on the market, yet there are audible differences. Moreover, some things like crosstalk I can barely hear at -50 dBFS, but I can easily hear frequency delays at -120 dBFS (manufacturer spec)!

  • @Fubuki43 · 5 months ago · +1

    Incredible info, thank you!

  • @seansmall3401 · 6 months ago · +2

    The reason you lower the output from 0 to somewhere around -0.3 dB isn't because of intersample peaks... It's to give headroom for the peaks that get added during lossy (MP3 or whatever format) compression. If you have your limiter set to 0, then MP3 conversion will add a very slight amount of peaks over 0. The more compressed, the more peaks get added: a 64 kbps MP3 needs about 1 dB of headroom before conversion to not clip, while 192 kbps and higher are happy with -0.3.

  • @ripzzy7834 · 4 months ago

    You're the best at this, thank you for the knowledge.

  • @richyr777 · 6 months ago · +2

    I watched something on sample rates the other day saying that the only difference shows up when you start editing, for example vocal tuning. If something's recorded at a low sample rate, you start getting bad aliasing when editing. So you can, e.g., record at 96 but mix and master at 48.

  • @fattmusiek5452 · 6 months ago

    I liked the ending of this really interesting, helpful vid!

  • @sionnachs_workshop · 6 months ago · +1

    Thanks for the info!

  • @LCRLive687 · 5 months ago · +1

    This is the most useful DAW tutorial channel I have found so far.

  • @OnyinyechukwuBoma · 5 months ago · +1

    I love your in-depth understanding of audio tech. I have to watch again because some concepts flew past me.

  • @django3108 · 6 months ago · +1

    Solid video!!!

  • @sburton84 · 6 months ago · +6

    Would be interesting to see what the frequency content of that -50 dB information is after upsampling/downsampling. I'm guessing it's mostly going to be up near the Nyquist frequency, around 20k, which makes it even less likely to really be audible.
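
    This is checkable with a few lines of numpy/scipy (a sketch, using noise so every frequency is represented): round-trip 44.1k material through 96k, null it against the original, and look at where the residual's spectrum peaks.

    import numpy as np
    from scipy.signal import resample_poly

    fs = 44_100
    x = np.random.default_rng(1).standard_normal(fs * 2)

    y = resample_poly(resample_poly(x, 96_000, fs), fs, 96_000)[: len(x)]
    residual = y - x

    spec = np.abs(np.fft.rfft(residual * np.hanning(len(residual))))
    freqs = np.fft.rfftfreq(len(residual), d=1 / fs)
    print(f"residual energy peaks near {freqs[spec.argmax()] / 1000:.1f} kHz "
          f"(Nyquist = {fs / 2000:.2f} kHz)")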

  • @TheDragonDeacon · 6 months ago · +4

    Thanks for nerding everything out for us. 😂

  • @barbangabriele3460 · 6 months ago · +1

    Love your videos!

  • @jiteshmahato3421 · 6 months ago · +2

    This guy is a genius.

  • @EarthbornApparatus · 27 days ago

    Thanks! Love your work. Your advice brings science to mixing. Keep it going. Where are you located, by the way?

  • @Dosxxx · 6 months ago · +2

    I haven't tested this myself, but I have read that the reason for upsampling your session has more to do with the post processes. Running at 44.1/48 with oversampling can still cause phase distortion due to minimum-phase anti-aliasing filters, or linear-phase artefacts from some linear-phase oversampling filters. Running at 88.2/96 avoids this problem. Again, I haven't tested this; I've read it from some reputable sources and it made sense, so I adopted it.

    • @panorama_mastering · 6 months ago · +4

      Fair enough; for me personally, even though the difference is pretty negligible, until I can solidly prove/disprove the change in fidelity, and why and when it happens, I'm sticking with native sample rates;

  • @musicforthemind3421 · 6 months ago

    Good work 👏

  • @gastonjabaly · 6 months ago

    Very interesting, good job bud.

  • @JazzyFizzleDrummers · 6 months ago · +1

    Part of the reason converting sample rates up and down doesn't really matter that much is that the math for the wave only has one answer below Nyquist. Hell, that *is* the Nyquist theorem. It's also why oversampling helps distortion plugins: the harmonics exceed the Nyquist limit, where there become multiple solutions for fitting a wave to the sampled data points. That *is* what aliasing is. I think the reason you saw a bigger difference with the impulse is the same reason impulses are used to generate IRs for convolution: the wave is so short that frequency becomes undefined. Now, I could be getting some things wrong / not explaining things well, but an EE prof raised me, so I feel somewhat confident in this explanation based on our conversations over the years.
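
    The harmonics-past-Nyquist mechanism is easy to demonstrate, assuming numpy/scipy: hard-clip a 5 kHz sine at 48 kHz, then do the same clip at 8x and decimate. The 7th harmonic (35 kHz) folds back to 13 kHz in the native case and is simply filtered away in the oversampled one.

    import numpy as np
    from scipy.signal import resample_poly

    fs, n = 48_000, 48_000
    x = np.sin(2 * np.pi * 5_000 * np.arange(n) / fs)

    clip_native = np.clip(3 * x, -1, 1)                        # aliases
    up = resample_poly(x, 8, 1)                                # 8x oversample
    clip_os = resample_poly(np.clip(3 * up, -1, 1), 1, 8)[:n]  # clip, decimate

    def level(sig, f):  # magnitude at frequency f (1 Hz bins for this length)
        spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig)))) / len(sig)
        return 20 * np.log10(spec[f] + 1e-12)

    for f in (15_000, 13_000):  # 15 kHz: real 3rd harmonic; 13 kHz: alias of 35 kHz
        print(f"{f // 1000} kHz bin: native {level(clip_native, f):6.1f} dB, "
              f"oversampled {level(clip_os, f):6.1f} dB")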

  • @nashse7en · 6 months ago · +2

    Again, another good video really explaining why to do something, and not just showing what you do.

  • @nunnukanunnukalailailai1767 · 6 months ago · +1

    Interesting. I haven't oversampled my limiters or used TP limiting, because most consumer volume controls are digital these days and the overshoots are so short in duration; not doing it allows for less limiting. I do check with afclip, though, that there are no huge overshoots, and I leave the ceiling at < 0 just for codec clipping and possible filter overshoots. Would be cool to have a limiter that lets you set a specific upsampling factor for the actual internal sidechain instead of just a TP switch.
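
    The core of such a meter is small, assuming numpy/scipy. BS.1770-style true-peak measurement is just peak-reading after upsampling, so the factor can be a parameter; a sine at exactly fs/4 with a 45-degree phase offset is the classic signal whose crests always fall between samples:

    import numpy as np
    from scipy.signal import resample_poly

    def true_peak_dbtp(x, oversample=4):
        return 20 * np.log10(np.abs(resample_poly(x, oversample, 1)).max())

    fs = 48_000
    t = np.arange(4_800) / fs
    x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)  # samples always miss the crest
    x /= np.abs(x).max()                              # sample peak = exactly 0 dBFS

    for factor in (2, 4, 8):
        print(f"{factor}x oversampled true peak: {true_peak_dbtp(x, factor):+.2f} dBTP")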

  • @richertz · 6 months ago · +3

    You've pumped out some cool videos recently. I wanted to ask this on one of your last videos: would you ever use a clipper last? I always thought MEs clip last, but am I wrong?

    • @panorama_mastering · 6 months ago · +2

      In the analog domain, yes! When I drive the converters for loudness. However, with my current set-up I'm finding fewer and fewer use cases, because I'm getting better results doing it upfront;

  • @M.W.777 · 6 months ago · +2

    Bro...I f*****g love you! 🌌🥛

  • @AlvaroMRocha · 2 months ago

    I tend to render at a minimum of twice the native rate if using loads of non-linear effects (like saturation or modeled analog), to bring down audible-range aliasing; those effects often push their own oversampling even higher.
    From 48 to 96 there's frequently an audible difference (though 192 is often overkill): the odd harmonics make the render sound rougher at the native rate (sometimes it sounds fine, depends on the material), and the render at twice the rate sounds more "open" and "wide". It's subtle, but if you're going for fully clean, transparent sweetness in unbusy mixes with well-tracked sources, it might make a perceivable difference in a good way, though nothing near using true analog gear (and resampling it at the native rate).
    But if using only linear digital effects, used conventionally, which should add no color or THD, the difference from bumping the sample rate is likely inaudible and a waste of resources.

  • @juliana.2120 · 6 months ago · +3

    #5 sounds like a question for Dan Worrall :)

    • @panorama_mastering · 6 months ago

      It's pretty juicy and definitely needs more TLC to dive into;

  • @virusindigoindigovirus2870 · 5 months ago

    I always just enjoy making my music and let the studio engineers handle these things... However, these are good, thanks for making these videos 🙂

  • @danielkisel5661 · 6 months ago · +7

    I thought the general advice to set the final output to -1 dBFS is not just to prevent intersample peaks, but also to prevent side effects of going from a lossless WAV to lossy files like the ones used on Spotify, where compressing our WAV can create artificial peaks up to 1 dB above digital zero.
    Am I wrong about this?

    • @robbeservaes3625 · 6 months ago · +4

      I also usually use -0.2 and I'm concerned about this too. I see most mastering engineers use anywhere from -0.2 to -0.5; I'm not sure why they're not concerned about streaming services converting and compressing files and what that would do to peaks. Very interested in this.

    • @Addghjifeet · 6 months ago · +1

      But still, many engineers in studios prefer -0.2, because we don't only listen on YouTube or Spotify.

    • @robbeservaes3625 · 6 months ago · +1

      @Addghjifeet Sure, but I think the majority of people will be listening on streaming services, so either the distortion the conversion causes is not significant, or engineers have some other reason to prefer going this close to 0.

    • @danielkisel5661 · 6 months ago · +4

      @Addghjifeet Everyone prefers -0.2, but that's not the point, because if -0.2 causes distortion on Spotify then, I don't know about you, but I would definitely rather sacrifice 0.5 dB of loudness than hear obvious distortion or other sound degradation.
      That's why I'm asking whether this -1 dB recommendation still holds true, because if not, then sure, everyone would rather go to -0.2 than -1...
      Cheers!

    • @twitchyboneselectronica2375 · 6 months ago · +1

      It's easy to test how much this actually matters IRL (I've done it many times): just rip some reference tracks from Spotify, apply makeup gain to account for the usual Spotify LUFS value, and you'll find that the vast majority of modern tracks sit above 0 dB true peak for sustained periods. No one cares. Personally, because I enjoy detail, I'll take the time to micromanage levels... but in the big leagues it's often ignored.

  • @grgdfgdfgdfgdfgfdgdf8606 · 2 months ago

    That was really helpful! I thought I should use linear phase when high-passing the signal so as not to create phase issues!

  • @samphelps856 · 6 months ago · +1

    Thank you

  • @soundslikewater · 6 months ago · +1

    Maybe try the upsample/downsample with stems. I did some (not so scientific) testing and the difference was subtle. I preferred changing the sample rate of the source files, not doing it after mastering.
    This was before I used RX, so the resampling algorithm might be part of the equation as well (it may have been 10 years ago that I tried this).
    Would love to see you test this.

  • @georgesimpson1406 · 6 months ago · +2

    I personally use a lower peak ceiling because of MP3 conversion. Maybe it's less of a common practice, and maybe some conversion is better, but with site uploads that convert for you, you have no choice. So, basically: why does a file that doesn't read as peaking during export sometimes exceed the peak ceiling when you run the converted file through a meter?
    Do sites like SoundCloud matter? Not really, but I will do it anyway. Artists still use it as a handy demo-sharing tool if nothing else, and if that's a demo going to a label, it might matter.

  • @sebastopolband · 6 months ago · +3

    Hey Nicholas, your videos are the best on YouTube. I get why 8x (or 16x, 32x) oversampling on the Pro-L 2 means you can set the output at -0.2 dB, but is there an issue with Spotify's converters? Spotify states that you need to leave additional headroom so that its own converters don't introduce distortion when it does the downsampling. Do you think this is not true?

    • @panorama_mastering · 6 months ago · +2

      It doesn't necessarily downsample; it compresses the data into a lossy codec, 96 kbps AAC at normal quality.
      You can audition this with iZotope Ozone's codec preview and listen for yourself. In my experience, the data lost in the conversion to streaming bitrates is more degrading to the audio than any overshoots/peaks going over 0, which are hard to identify unless you've got ISPs > 0 dBTP.

    • @sebastopolband · 6 months ago · +3

      @panorama_mastering Right. But doesn't it also downsample a 24/48 (or above) audio file to a 16/44.1 lossless file for premium-subscriber playback? Same with Tidal, right?

  • @pad303 · 6 months ago · +3

    If your material comes in at mixed rates, do you work at the highest?

    • @panorama_mastering · 6 months ago · +1

      Good question!
      Want to know something funny? In 10 years, across 5,000+ projects, I've never encountered this problem...

  • @skipnick · 6 months ago · +1

    God damnit, lol. I just started using linear-phase EQ because it worked so well compared to the channel EQ.

  • @DINJE · 6 months ago · +5

    Nicholas consistently sharing his top-class knowledge 👏
    We appreciate you!!

    • @panorama_mastering · 6 months ago · +1

      Thanks mate!

    • @SuperFake777 · 6 months ago

      @panorama_mastering What about linear-phase EQ if you're EQing a parallel compression aux?

  • @DamnnnDarius · 6 months ago · +2

    I mix and master at 96k. Is that bad? Because for the most part it sounds great to me, and working at this sample rate I don't oversample.

    • @DamnnnDarius · 6 months ago

      And sometimes I record at 48k and afterwards mix the recording at 96k.

  • @c-rod9892 · 17 days ago

    Hey hey, love your stuff. So, your sample rate theory: you also have to take into account that plugins have more resolution at a higher sample rate, which is why the noise floor would be lower; the higher the rate, the more the quality goes up. I have mixed a 44.1 session on a console over to another computer at 96k, and the difference is very significant, especially if you are monitoring on the 96k side. You will find the overall frequency response is much tighter and less noisy. Just my 2 cents. Keep up the great work 🎉🎉

  • @ahimsastudio6763 · 6 months ago · +1

    Awesome video! What do you think about producers/mastering engineers who don't really care about clipping, whether because of intersample peaks or because the song's arrangement and mixing are designed to sound OK when clipping / produced with clipping in mind?

  • @KitKalvert · 3 months ago

    Love the comments on oversampling. My SSL 2 forces 48k on startup, but most of my samples are at 44.1. I just drop to 44.1 when making music now, so the samples come into the project and the music goes out native 44.1 / 24-bit.

  • @exposingyouthetruth6443 · 6 months ago · +3

    Instead of mastering/limiting to -0.2 dB at 8x oversampling, would it not be enough to turn on "True Peak" limiting in, for example, the Ozone 11 Maximizer plugin to get the same but optimized result?

    • @panorama_mastering · 6 months ago · +1

      YES, yes it would;

    • @exposingyouthetruth6443 · 6 months ago · +2

      @panorama_mastering Super, thanks for the reply. An easier solution; still nice to know the theory behind it when using other plugins :))

    • @Prodluud · 5 months ago

      @exposingyouthetruth6443 Thanks for highlighting this. I don't need to change anything, I guess.

  • @zlbdad · 6 months ago · +8

    Nice work. Upsampling is benign, as you have discovered. The downsample/filtering is a bigger concern... and clearly audible. So is there a net benefit? Not sure there's an objective measure to lean into.

  • @JakeyWakey · 6 months ago · +2

    The -1 ceiling was because Apple was limiting at -1. Anyone know if they are still doing that?

  • @MILENIUMDID · 6 months ago · +1

    Could you please make a video about the Clip-to-Zero strategy? I've been studying it through Pretolesi and Baphometrix (an amazing 31-video series on the topic). Would love to hear your thoughts.

    • @TeknaTronik · 6 months ago

      If everything is clipped to zero it sounds off. Some things being clipped is fine, though. It's a taste thing, imo. I personally do not like an entire song clipped.

  • @screendrem · 6 months ago · +2

    I'm using the exact same plugins, plus a few more. Useful things to consider. Thank you!

  • @Fubuki43 · 5 months ago

    12:25 What I've heard from my mastering teacher is that a one-time conversion (meaning you load the source material into a session with a higher sample rate) is always better than oversampling each and every plug-in, because the latter does the conversion for each plug-in instance and stacks those filters on top of each other. So the advice would be to always mix in a higher-sample-rate session, even when the recorded material was tracked at a lower sample rate. You could investigate this by putting, for instance, 10 plug-ins on oversampling and comparing it to the same 10 plug-ins in a higher-sample-rate project (but without oversampling).

    • @panorama_mastering · 5 months ago

      But how high a sample rate do you work at to avoid aliasing distortion? 8x 48 kHz is 384 kHz; that's not entirely practical.
      But the premise is correct!

    • @Fubuki43 · 5 months ago

      @panorama_mastering The main point is that each time you oversample, a filter has to be applied to turn the processed signal back into the session's sample rate. Those filters stack and add up into one steep filter, kind of like when you stack two 6 dB/octave low-passes, which together make a 12 dB/octave low-pass. Even though the filter is not in the audible range, the resonance and phase shift can extend into the audible range, especially if you're working in a 44.1 kHz or 48 kHz project. It would be better to just work at 96 kHz or 192 kHz and then oversample by 2x or 4x to get the same effect but with the "better" filter of those higher sample rates (which is less steep because there's more room above the audible range).
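
    The stacking claim above is straightforward to check with scipy (a sketch, using first-order filters for clarity): cascading two 6 dB/oct low-passes gives the response of one 12 dB/oct filter, since cascaded filter responses multiply.

    import numpy as np
    from scipy.signal import butter, freqz

    fs, fc = 48_000, 10_000
    b, a = butter(1, fc, fs=fs)                  # one 6 dB/oct low-pass

    _, h1 = freqz(b, a, worN=[2 * fc], fs=fs)    # response one octave above fc
    h2 = h1 * h1                                 # two identical stages in series

    print(f"one stage:  {20 * np.log10(abs(h1[0])):6.1f} dB at {2 * fc} Hz")
    print(f"two stages: {20 * np.log10(abs(h2[0])):6.1f} dB at {2 * fc} Hz")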

  • @danielwetzel7777 · 6 months ago

    Thanks for just showing that you were confused about the last one.

  • @naughtyducky6325 · 18 days ago

    More of a mixing question, but if I'm processing a signal in parallel and want control over the frequencies affected (e.g. saturating a signal), should I use a linear-phase EQ?

  • @joeferris5086 · 5 months ago

    Question for you regarding the first tip, the output-ceiling limiter settings.
    Spotify, YouTube and other streaming services recommend using a ceiling of -2.0. (They also recommend -14 LUFS, but we know this is not the best advice.) However, they say that backing the ceiling down to -2.0 allows the algorithm to convert the file to their system and minimizes artifacts and truncation. Do you know anything about that? Could you set up a test somehow to figure this out?
    I'm very intrigued by this. I used to use SoundCloud, and I thought I noticed more distortion in my uploads the higher the ceiling was.
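
    One possible harness for exactly this test, assuming ffmpeg is on the PATH and the soundfile, numpy and scipy packages are installed (the tanh tone is just a crude stand-in for a loud master): render the same material at several ceilings, round-trip it through 96 kbps AAC, and compare the decoded true peaks.

    import subprocess
    import numpy as np
    import soundfile as sf
    from scipy.signal import resample_poly

    def true_peak_db(x, oversample=4):
        return 20 * np.log10(np.abs(resample_poly(x, oversample, 1)).max())

    fs = 48_000
    t = np.arange(fs * 5) / fs
    x = np.tanh(3 * np.sin(2 * np.pi * 220 * t))       # crude "master" signal

    for ceiling_db in (-0.2, -1.0, -2.0):
        y = x / np.abs(x).max() * 10 ** (ceiling_db / 20)
        sf.write("in.wav", y, fs)
        subprocess.run(["ffmpeg", "-y", "-loglevel", "error", "-i", "in.wav",
                        "-c:a", "aac", "-b:a", "96k", "enc.m4a"], check=True)
        subprocess.run(["ffmpeg", "-y", "-loglevel", "error", "-i", "enc.m4a",
                        "dec.wav"], check=True)
        dec, _ = sf.read("dec.wav")
        print(f"ceiling {ceiling_db:+.1f} dB -> decoded true peak "
              f"{true_peak_db(dec):+.2f} dBTP")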

  • @alessandrogarofalo8699 · 6 months ago · +2

    So if you 8x oversample with the output at -0.2, you won't need to activate True Peak mode, right?
    Ciao ciao from Italy.

    • @panorama_mastering · 6 months ago · +2

      Yes; I forgot to mention this assumes you're working at 48 kHz!

    • @alessandrogarofalo8699 · 6 months ago · +1

      @panorama_mastering Lovely information, grazie!

  • @TokyoSpeirs · 6 months ago · +4

    I'm getting big Dan Worrall vibes, and that is a really good thing. The limiting/clipping ceiling just blew my mind.

  • @ralphverdult · 5 months ago

    Great video, with some good truths. One question: why would you prefer 8x oversampling in Pro-L 2 to enabling True Peak Limiting? The Pro-L 2 TPL engine has an 8x internal oversampling rate (that's why Pro-L 2 can give more accurate read-outs than true-peak meters using the ITU minimum of 4x oversampling). Do you find an advantage in disabling TPL?

  • @Walid.OnTheTrack6725 · 6 months ago

    Every time I watch your videos I'm like, oh my god.

  • @992ras · 6 months ago

    OK, dithering adds noise, so it's superficially similar to tape saturation: dithering is there to hide bad quantization of the waveform by adding noise to mask the high harmonics.
    The numbers mean a lot: music basically shouldn't go over 48 kHz and 32 bits at bounce-down during mastering. The higher rates are for high-resolution video work, so 96 and 192 are used for music videos and movies. Just doing music, you don't gain anything by bouncing at the higher resolution. Also, dithering doesn't get rid of phasing: if you have something phasing, you're going to hear it in your recording, mixdown, etc. That's why mic preamps have a phase-flip switch; whenever you record with a stereo matched pair of microphones you may need to flip the phase. Any two microphones with similar polar/frequency responses can phase, and two things can happen: either those frequencies cancel each other out, or there's no sound at all. It's normally the first; the sound just sounds quiet and has no life, and when you flip the phase it's perceived as louder and more alive.

  • @eaccin · 6 months ago · +2

    Hi, do you usually apply dither on your limiter when bouncing, or do you print your master to an audio track inside Pro Tools and apply the dither afterwards? Thanks.

    • @panorama_mastering · 6 months ago · +1

      Dither on the limiter, always; I'd rather have that low-level noise baked in, JUST IN CASE the full-res master is truncated down by another party at any stage;

    • @martzooo · 6 months ago · +1

      Just do it once, so the fastest is on the limiter... null testing is your answer :D

    • @eaccin · 5 months ago

      Hello again! So, you bounce to disk and do the fades afterwards in another session? What if you want to A/B old and new masters; do you then print internally? It would be cool if you made a video about that.

  • @TeknaTronik · 6 months ago · +1

    Nice vid

  • @DaftFader · 6 months ago · +2

    I think I've seen someone else do the sample rate test, but they did it quite differently. They upsampled and downsampled once, twice, three times, 10 times in a row, 25, 50 and 100 times (to compare the compounded effects), and doing this exaggerates the results so you can observe them better. The counts are arbitrary; the point is to learn how much it's actually changing the signal and whether it's worth worrying about. If it's negligible after 25 passes but not after 100, it's probably fine; but if after 10 passes or fewer it has already significantly degraded the signal, you might want to avoid it.
    You can use the same compounding of multiple stacked identical processes for a bunch of other testing too. I've seen the same test done with D/A-A/D conversion (for example, to highlight what happens in a hybrid setup with poor-quality conversion), run multiple times to see how much that degrades the signal (and to test converters against each other).

    • @danielkisel5661 · 6 months ago · +1

      The question is, are we going to upsample/downsample 20 times during a mastering session?
      Let's say we upsample from 44.1 kHz or 48 kHz to 192 kHz. Then we generally need less oversampling in plugins, because we have the same headroom as a 48 kHz session with 4x oversampling. So in my opinion the upsides seem greater than the downsides: we use fewer oversampled plugins, which themselves create phase shift around the high frequencies and smear transients a little.
      So while going up to 192 kHz has some side effects, just oversampling 10 plugins at 44.1 kHz or 48 kHz might have a worse effect...
      But then, depending on how strong the nonlinearities in the plugins are (for example a clipper, saturation, or analog-style compression), we need to manually clean everything above, say, 25 kHz after every non-linear plugin; otherwise going up to 192 kHz might not offer any benefits and could cause more harm and aliasing than staying at 44.1 kHz with oversampling...

    • @DaftFader · 6 months ago · +2

      @danielkisel5661 It's not about creating a realistic example. It's about exaggerating the effects to see whether they're even worth worrying about, by finding exactly where the threshold for the process's destructive effects begins.
      As I said in my OP, if they don't really come into play until many, many multiples of the same process, you know doing it once isn't going to be an issue in the slightest.
      Doing it once, like you actually would in a session, with inconclusive results, tells us nothing. The whole point is to learn more by holding a magnifying glass over the process and seeing exactly how destructive it is or isn't over multiple iterations.
      I know the results of these tests, but the point is Panorama wants a way to show it to his audience. And this is a way to demonstrate how destructive a process is or isn't over multiple stacked runs of the same process. It doesn't need to be realistic to a real-world use case; it's a demonstration and a test used to give greater perspective on how much something affects something else that wouldn't normally be very audible as a single process.

    • @danielkisel5661 · 6 months ago · +2

      @DaftFader Yeah, I get it. It was more a thought to give a counter-perspective: we shouldn't blindly believe someone who says "don't upsample", because we might actually be losing the potential benefits of running plugins without oversampling but with more headroom. If the plugins are coded properly and handle sample rates like 192 kHz, then to me they always sound better, more crisp, with more depth and punch, not to mention no high-frequency cramping when boosting with a simple digital EQ (which happens with a lot of non-oversampled digital EQs at 44.1 kHz and 48 kHz if you do a top-end shelf boost).
      Cheers!

    • @DaftFader · 6 months ago · +2

      @danielkisel5661 Oh yeah, for sure. The only rule I follow is: don't do anything as a rule. Only do it if you know and understand what you are about to do, why you are doing it, and what you are trying to achieve or mitigate. Then you'll find that almost everything has its place (well, almost everything), and it's almost never on every single track; sometimes it's only on one in 100 tracks.
      Even a limiter in mastering isn't going to be used 100% of the time (for example, a mix that can't be redone and already has too much limiting on it; then the mastering becomes more of a rescue mission). That's just an example to show that you can't blindly follow something just because. If you were given that over-compressed, over-limited track with no way to redo the mix, and you blindly put a limiter on it, you'd only be doing damage. The same can be applied to everything, really. ;)

  • @merajdeylami9354 · 6 months ago

    Does Ozone 11 have oversampling built in? I can't find a setting for it.

  • @JOYCEOFFICIALCHANNEL · 2 months ago

    Do you use linear phase on parallel processing?

  • @alexanderehamby813 · 6 months ago · +4

    Airwindows Ultrasonic after plugins that introduce harmonics, in an upsampled session, should work out similarly to oversampling each plugin in a native session. That ultrasonic filter is the secret most people miss: it effectively preserves "high-frequency headroom" between non-linear processes, acting as a buffer between the highest-frequency harmonics and the Nyquist frequency, and that prevents aliasing and IMD... oversampling in a plugin has that filter built in already.
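
    A rough numpy/scipy illustration of the idea (not the actual Airwindows code; the Butterworth here is just a stand-in "ultrasonic" filter): in a 96k session, low-pass around 20 kHz between two saturation stages and compare how much energy ends up above the audio band.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 96_000                                    # "upsampled session" rate
    x = 0.9 * np.sin(2 * np.pi * 9_000 * np.arange(fs) / fs)
    sos = butter(8, 20_000, fs=fs, output="sos")   # stand-in ultrasonic filter
    drive = lambda sig: np.tanh(4 * sig)           # a generic saturation stage

    raw = drive(drive(x))                          # two stages, nothing between
    clean = drive(sosfiltfilt(sos, drive(x)))      # low-pass between the stages

    def energy_above(sig, f):
        spec = np.abs(np.fft.rfft(sig)) ** 2       # 1 Hz bins for this length
        return 10 * np.log10(spec[f:].sum() / spec.sum())

    print(f"energy above 24 kHz, no filter:   {energy_above(raw, 24_000):6.1f} dB")
    print(f"energy above 24 kHz, with filter: {energy_above(clean, 24_000):6.1f} dB")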

  • @gnt132 · 5 months ago

    Does the final track I export to upload to the distributor have to be 24-bit or 16-bit?

  • @deadislander · 6 months ago

    I've been upsampling my mix sessions lately to 48 instead of 44.1, because I'm running everything through my mixer, and in theory capturing it back at a higher sample rate feels like it preserves more of my mixer's sound rather than the original 44.1 signal I'm sending through it. Small stuff, but I do it nonetheless, mostly to "future proof" my stuff a little bit.

    • @deadislander · 6 months ago

      I may be crazy

    • @georgesimpson1406 · 6 months ago · +1

      I know a few who do. The difference is more noticeable when summing many tracks; this is the anti-aliasing filter. Once it's beyond the human hearing range maybe it doesn't matter, and upsampling distortion-intensive processes will do that, but it only gets fully out of range at 48k; at 44.1 it's still encroaching on the top end. Of course any stretching or re-pitching or heavily distorted stuff may be better even higher, but generally not.

  • @BigHugeYES · 6 months ago · +1

    I always use Natural Phase on vocals

  • @user-fe9ws7dh4k · 6 months ago

    I have a question regarding dithering. There is a dithering option in some DAWs when you export the file. Is there interference, and should a plugin's dither be on even if the DAW applies its own dithering?

  • @DRasikhulFikri · 5 months ago

    Hey, nice presentation. Thank you so much for the scientific method.
    Have subscribed 👍

  • @Eichelprinzessin · 6 months ago

    Wouldn't a master at -0.2 TP be distortion-free at first, but distorted after uploading to Spotify because of the conversion? Is this distortion even audible (in your experience)?

    • @Eichelprinzessin · 6 months ago

      Also got one more thought about that. If Spotify converts to their mp3 quality, there won't be as much distortion from true peaks, since this mostly occurs in the high frequencies (the ~16 kHz 'cut'). Is that right?

  • @demodeiowa · 5 months ago

    Set one limiter at -0.02 and follow that with another limiter at -0.01, just in case anything slips past the first limiter.

  • @davelordy · 5 months ago · +1

    _"the tests don't approve my theory"_
    *prove

  • @mignax6888 · 6 months ago

    I have a question: if I didn't put that -0.2 dB on the limiter and instead gained my track down by the same amount, would it be the same? I will try a null test, maybe ✌️

  • @nickrony7307 · 6 months ago

    Let's say I wanted to balance a track that's louder on the left side because the faders weren't pushed evenly; would linear phase be better? Using Pro-Q 3, just +1 dB across the whole frequency spectrum with the Left option on the EQ.

    • @twitchyboneselectronica2375 · 6 months ago · +2

      I think, to keep it even simpler, I would just use a DAW utility/gain plugin, raise one channel 1 dB, and avoid the whole issue.

    • @panorama_mastering · 6 months ago

      What twitchy bones said!

  • @mosermichael4404 · 6 months ago

    Hmm 🤔 regarding EQ: do you use two EQs, one just for the mid (without oversampling) and the other just for the side signal (with oversampling)?
    And what about the Michelangelo EQ (saturation)? Better in oversampling mode or without?

    • @wrighteously · 6 months ago · +1

      If there's any form of saturation taking place (clipping, distortion, etc.), always use oversampling (if your computer can handle it).

    • @mosermichael4404 · 6 months ago · +1

      @wrighteously Haha, then it's somewhat contradictory again, since the Michelangelo EQ is both EQ and saturation! You shouldn't oversample a stereo EQ, but you should oversample saturation. 😂🤔
      And yes, my PC can easily handle oversampling mode! 😋🤗 I could even master within the entire mix project with 40 channels! 😉
      But I won't do that! Always print the mix in stereo, then master the stereo file.

  • @user-gh8hv4oo4z · 6 months ago · +2

    That would be SO useful, if only I could understand what you're saying 😂

  • @Tom-tv4ok · 6 months ago · +1

    Wouldn't it make sense to upsample or downsample to a sample rate that's mathematically related, i.e. 44100 to 88200, and avoid 96k if the project was originally made at 44.1k? (See the sketch after this thread.)

    • @zlbdad · 6 months ago · +1

      No. 32-bit mantissas make integer relationships irrelevant.
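
    A quick numpy/scipy check of the integer-ratio question (a sketch; resample_poly reduces 44.1k -> 96k to the exact 320/147 ratio internally, so both round trips are directly comparable):

    import numpy as np
    from scipy.signal import resample_poly

    fs = 44_100
    x = np.random.default_rng(2).standard_normal(fs)

    for target in (88_200, 96_000):
        y = resample_poly(resample_poly(x, target, fs), fs, target)[: len(x)]
        err = 20 * np.log10(np.abs(y - x).max() / np.abs(x).max())
        print(f"44.1k -> {target / 1000:g}k -> 44.1k: residual {err:.1f} dBFS")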

  • @davidpereira4455 · 6 months ago

    3:43 What if you delete the pre-ringing after the linear-phase EQ? Just deleting that specific part? Does it still punch, or does the transient get "pulled back"?

    • @EG_John · 6 months ago

      Yes, this can be done if we have only one kick in the signal. But we're not just processing one kick in music, are we? In a complex signal you cannot separate out the pre-ringing in any way; it completely merges with the rest of the musical material.

    • @davidpereira4455 · 6 months ago

      @EG_John My question was a bit simpler. I understand that linear-phase EQ causes phase shift (changing the overall amplitude of harmonics) around the cutoff (especially with a steep slope on the cut). It's obviously not really useful for real-time live processing, but for individual channels at the mixing stage, where CPU might take a hit, why bother if you can resample afterwards (saving CPU)? So my curiosity is about the benefit of using linear phase (with its phase preserved afterwards). If I trim the linear-phase-processed signal (removing the pre-ringing) and A/B it side by side (visually) against the normal zero-latency EQ, is there a difference? I would expect some sonic difference between the two. I'll run a null test to check once I get a chance.

    • @EG_John · 6 months ago

      @davidpereira4455 Your answer is extremely confusing. For example, you write that "linear-phase EQ causes phase shift". But this is completely wrong. A linear-phase equalizer does not cause a phase shift; it causes sample delay. The phase of the signal remains completely coherent.
      Everything else reads like a bunch of words I can't put together into a meaningful sentence, sorry. For me or someone else to help you with an answer, try to formulate it differently.

    • @davidpereira4455 · 6 months ago

      @EG_John OK, I'll be straightforward. At 3:45 you see two tracks. The top one is normal EQ (zero latency) and under it is the linear-phase-processed one. On the linear-phase one, if I trim the extra samples (the pre-ringing) so it matches the exact same starting point as the track above (zero-latency EQ), will there be any sonic difference between them? Same exact cutoff on both, resampled and put side by side: will that trim (removing the printed pre-ringing) on the linear-phase kick track be noticeable on the transient?

    • @EG_John · 6 months ago · +1

      @davidpereira4455 Correct me if I'm wrong, but I believe you're asking: besides the pre-ringing, will the transient (click) of the kick change much compared to the raw kick?
      The answer is no. There will be no changes to the nature of the transient. It will still click and punch as it did originally. The equalizer will only cause timbre changes at the frequencies it targets.
      If you're asking not in comparison to the raw kick, but compared to a kick processed in a non-linear-phase mode, then yes, there may be audible differences beyond just the EQ. A non-linear-phase equalizer can shift the timing of transients, potentially "pulling back" the initial click. However, these timing changes tend to be quite subtle.

  • @sandrvoxon · 6 months ago · +1

    macOS Catalina on a new MacBook Pro?

  • @boristubeti6776 · 6 months ago

    Are 48k sessions the standard now?

  • @LYSHEmusic · 6 months ago · +2

    I find this quite contradictory.
    You are advising to use oversampling here and oversampling there, and then to work at the native sample rate...
    Oversampling actually is upsampling and downsampling each time you use it.
    And each plugin with oversampling will add the difference you are measuring in the last part of the video.
    My suggestion is to listen to the difference and not to measure it.
    More often, oversampling itself has a bigger impact on the sound than aliasing.
    Aliasing just adds frequencies; oversampling does a lot more, though yes, it doesn't add frequencies.
    Aliasing is a dynamic artifact: sometimes we have more of it, sometimes less, sometimes none at all.
    Artifacts from oversampling are constant.
    And sometimes oversampling applies linear-phase filtering.
    The same with overshoots. What is worse: ISPs on poor playback systems, or resampling artifacts for listeners who care more about the sound?
    I don't know the answer for each case, but I would like to listen to those ISPs first.
    Theory and numbers are nice, but with music the best decisions are made by listening.
    My message is just for clarification and discussion. I respect your efforts and intentions.

    • @panorama_mastering · 6 months ago · +1

      Hey LYSHE,
      Damn right it is! And I really appreciate your comment too;
      You're 100% right; the oversampling/upsampling is relatively benign based on my tests and my criteria for looking at it, so it's completely inconclusive and contradicts other advice;
      I will say, aliasing distortion is noticeable, and for me at least the trade-off of oversampling harmonic processors is worth it, given that aliasing distortion is inharmonic.
      Quite a bit to unpack there; this would be something really worthwhile jumping on a call/interview for... I might keep this in the back pocket. Thanks for commenting again!

    • @danielkisel5661 · 6 months ago

      Nice perspective!
      That's probably why I've found that if a plugin is developed correctly, then using a 192 kHz sample rate with no oversampling yields the best results for me. Only when I use heavier nonlinear processing, like clipping, limiting or non-subtle saturation, might I decide to either use oversampling or manually low-pass to clean the supersonic frequencies above 25 kHz before further non-linear processing, because at 96 kHz or 192 kHz the frequencies above 20 kHz, although not really audible, generate intermodulation distortion and can potentially worsen aliasing pretty quickly...

    • @gossipboynyc9625-VN · 6 months ago

      @panorama_mastering Pro-oversampling >> aliasing, but yeah, use your ears, since there are potential filtering or phasing interactions 😅

    • @gossipboynyc9625-VN · 6 months ago

      @danielkisel5661 This makes sense: the benefit of low-passing, as you said, with your best low-pass, then bringing the air back up with your best 16 kHz to 40 kHz additive air band. Now if we could get a 50 kHz air band to test how it works, or to compensate, in all this aliasing-vs-oversampling filter-interaction discussion. Same with the super-lows under 15-35 Hz, etc. Maybe air bands like the one in Slate's Fresh Air plugin could save us all 😅

  • @planetclay · 5 months ago

    Since we're discussing audio theory here, I don't understand the need to turn DOWN the audio being demonstrated.
    Kinda like turning out the lights before a painting class... no? Or is it just the numbers and calculations that are most important?
    I come to listen and learn, but all I hear is you talking.

  • @bornindian28 · 6 months ago · +18

    When will I understand what you are talking about! 😒

    • @xvnvs8673 · 5 months ago

      ☹️

    • @MrTomservo85 · 5 months ago · +5

      He's talking about how to impress other producers by adjusting settings no normal person will ever hear or even care about. Just write good music. All the mastering tricks can't make a bad song good

    • @bscott33 · 5 months ago · +2

      @MrTomservo85 You're right about writing a good song. But a bad recording, mix or master can break that great song. Quality control at each level is vital to give the song a chance.

    • @MrTomservo85 · 5 months ago · +1

      @bscott33 I'm not so sure. Listen to Scott Pilgrim Vs The World Ruined A Whole Generation of Women by Negative XP. It may not be your taste, but the shitty production quality is a charming characteristic of the song. Why would anyone want shitty production unless there was some cool factor to it? And how did that specific shitty sound get a cool factor? I would argue that bands writing great songs and putting them out with shitty production made that sound cool and desirable to a lot of people. If the songs had not been good, just poorly produced, no one would aspire to sound like that, melodically or sonically. Well-written music makes production subjectively good, not the other way around.

    • @bscott33 · 5 months ago

      @MrTomservo85 Can't argue with that. Preach.

  • @iSeeZar · 5 months ago

    Not using linear phase in mastering, lol, demonstrated with the worst case possible, the narrowest EQ possible, lol. Make sure to tell world-renowned MEs and mixers that they're doing it wrong. Jokes...

  • @davelordy · 5 months ago · +2

    _"add dither, it's not going to make the rest of your master sound great . . . "_
    Terrible wording; it sounds like you're saying adding dither will ruin the sound of your master.

  • @TK-11 · 6 months ago

    It should be noted that the oversampling ratios and under-read maximums cited by ITU-R BS.1770-5 presume a 48 kHz sample rate. So for 96 kHz signals you only need to oversample by half the cited ratio, and the correct under-read maximum is the one on the same row as that half ratio; 44.1 kHz signals will have under-read maximums a bit higher than the values cited for each corresponding ratio.
    Recommending to always use dither should also come with caveats... It's important to understand that performing a 16-bit dither on a 24-bit or float signal will effectively degrade the signal to 16 bits, despite the signal never being stored in a 16-bit format. Similarly, performing a 24-bit dither and then immediately converting to 16 bits is pointless, because the only bits altered by a 24-bit dither will all be truncated during the conversion.
    When it comes to repeatedly up-sampling and down-sampling, I'm pretty sure it largely comes down to the low-pass filtering needed for each conversion, which determines how much audible impact the conversion has on the signal. Long story short, lower sample rates require steeper filters, as Nyquist approaches the audible limit of around 20k. So with high rates, where 20k is far below Nyquist, you can resample back and forth all you like and any artifacts from the conversions should fall well outside the audible frequency range and be relatively easy to filter, while resampling to rates like 44.1 kHz and 48 kHz, which both bring Nyquist quite close to 20k, is more likely to suffer audible effects from the steep low-pass filtering required. Of course there are many resampling algorithms, but in the context of audio I believe this is the crux of the issue.
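
    The dither caveats are easy to see numerically, assuming numpy (a sketch: "quantize" here rounds to the target bit grid, and the TPDF dither is two summed uniform ±0.5 LSB sources). A quiet tone requantized straight to 16 bits grows a distinct 3rd harmonic; TPDF dither at 16 bits trades it for noise; dithering at 24 bits and then requantizing to 16 brings the harmonic right back.

    import numpy as np

    rng = np.random.default_rng(3)
    fs = 48_000
    x = 10 ** (-60 / 20) * np.sin(2 * np.pi * 1_000 * np.arange(fs) / fs)

    def quantize(sig, bits, tpdf=False):
        q = 2.0 ** -(bits - 1)                 # one LSB at this bit depth
        if tpdf:
            sig = sig + q * (rng.random(len(sig)) - rng.random(len(sig)))
        return np.round(sig / q) * q

    def bin_db(sig, f):                        # level of one 1 Hz FFT bin
        spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig)))) / len(sig)
        return 20 * np.log10(spec[f] + 1e-15)

    for label, y in [("16-bit, no dither   ", quantize(x, 16)),
                     ("16-bit, TPDF dither ", quantize(x, 16, tpdf=True)),
                     ("24-bit dither, to 16", quantize(quantize(x, 24, tpdf=True), 16))]:
        print(f"{label}: 3rd harmonic bin {bin_db(y, 3_000):.1f} dBFS")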

  • @GingerDrums · 6 months ago · +1

    -0.3 is safer if the .wav is getting converted to .mp3. The conversion can result in 0.4 dB of intermodulation...

  • @Beatsbasteln · 6 months ago · +2

    I still don't agree with your anti-linear-phase opinion. I mean, first of all, the artefacts sound way less severe in linear phase on that sharp boost. The natural-phase sound was all ringy, like the kick was followed by an 808 bassline; of course that can be cool, but it's more of a sound-design thing. The linear-phase version sounded much more like a kick of the same length, just with a heavy boost on a narrow frequency, and the only artefact is a tiny bit of pre-ringing in the lower frequencies. You need to listen closely to perceive it in solo, and in the mix it would be completely inaudible. Of course an argument could be made about whether a lot of pre-ringing throughout the project costs headroom. But, as you say, there are typically not many moments where you need to filter that extremely anyway. And on top of that, pre-ringing is also becoming kind of a vibe nowadays, with spectral processing getting more and more popular and shaping entire genres; you know, Pitchmap and all that. At this rate people won't even perceive pre-ringing as an artefact anymore, but as an added bonus feature.

    • @panorama_mastering · 6 months ago · +2

      I still owe you the respect of watching your video to see how you used it creatively;
      In session, though, I just avoid it otherwise; a single notch like that is only one filter type; shelves have strong phase rotation, as do HPFs, and I avoid linear phase for both of these;

    • @Beatsbasteln · 6 months ago

      @panorama_mastering Yep. But I honestly think I'd need to make a new video to get the point across. Until now I've focused a lot on the sound-design aspects, but I increasingly feel that enjoying pre-ringing blends into the realm of mixing.

  • @FormatTV82 · 6 months ago

    Doesn't work, because you must hear what you do.

  • @davelordy · 5 months ago · +1

    You really need to learn how to pronounce 'aliasing' 🙂😉

    • @panorama_mastering · 5 months ago

      Sorry brother, English isn't my first language!

    • @davelordy · 5 months ago

      @@panorama_mastering Well, I'm here to help you 🙂
      aliasing = [a - lee - is - ing] or [a - lee - as - ing]

  • @poorlittlemonkey · 6 months ago

    Putting stuff on the master track in your DAW isn't "mastering", and producers shouldn't be putting limiters on stuff or doing upsampling anyway. Are you unaware of these things, or do you just not care because you want clicks?

    • @panorama_mastering · 6 months ago

      Hey mate, always happy to take criticism on the chin; so can we take a step back?
      Where did I claim that putting stuff on the master track in your DAW is mastering?
      Producers WILL put limiters on material, so having open dialogue and discussion around what people engage with is a healthy way for everyone to learn.
      Happy to keep chatting; I usually learn a lot from people who are hypercritical of my channel, so I welcome it.

    • @poorlittlemonkey · 6 months ago

      @panorama_mastering It's heavily implied in your title/thumbnail. It doesn't even make sense to be telling producers about "mastering settings" to begin with. It's like making a video for people who frame houses suggesting the best tips for painting the walls.
      Producers are putting limiters on things specifically because of videos like this.