How smart is Smart:EQ 4?

  • Published: 24 Nov 2024

Comments • 76

  • @Denxndresiden
    @Denxndresiden 3 months ago +1

    PERFECT EXPLANATION!!! THANK YOU!

    • @OliBeaudoin
      @OliBeaudoin  3 months ago

      You're welcome! 😎🙏

  • @Captainhit
    @Captainhit 7 months ago +3

    FINALLY! Someone who knows how to explain and teach! I don't mean to be unkind, but most people simply don't know how to explain Smart:EQ 4 or any of the other tools used in recording. You're a great coach and teacher. It took me an hour to find you and I have subscribed. Thank you.

    • @OliBeaudoin
      @OliBeaudoin  7 months ago

      Thanks Captain! Glad it helped! 🙏😎

  • @grolux12
    @grolux12 10 months ago +3

    Thanks a lot Oli. Clear explanations. Good music, by the way ;-) Merci Oli

  • @saiben9639
    @saiben9639 11 months ago +4

    My new reference vid for using this plugin, thx.

    • @OliBeaudoin
      @OliBeaudoin  11 months ago +1

      Thank you so much Saiben! Glad it could help!

  • @Steviee8
    @Steviee8 7 months ago +2

    Nice demo. Do you still do fine-tuning with your faders in the end, or do you leave them at unity and dial in only with Smart:EQ? And does changing fader levels on the individual tracks completely screw up Smart:EQ so it needs to relearn?

    • @OliBeaudoin
      @OliBeaudoin  7 months ago +4

      Hey Stevie! Good questions there.
      1. Smart:EQ 4 is not meant to replace volume automation with faders 🎚️, but rather to create space in the frequency spectrum (unmasking). So yes, I would still do fader automation. I like to keep all faders at unity until it’s time for final automation. I do that by adjusting clip/audio event gain or within the plugin chain (any plugin with a gain setting will do).
      2. Depending on your DAW’s mixer, when inserting a plugin you can load it either PRE fader or POST fader, whichever suits your needs best depending on the context.
      3. If Smart:EQ 4 was loaded post-fader, changing levels with the fader will affect the input volume in the plugin, but it should not affect the frequency/spectral adjustment curve. I would rather load it pre-fader and then work on fine-tuning volume automation after.
      Hope that helps! 🙏
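
A quick aside on point 3 above: the pre-fader vs. post-fader difference is easy to picture outside any DAW. Below is a minimal Python/NumPy sketch (purely illustrative; the test tone, fader value, and RMS meter are made-up stand-ins, not anything from Smart:EQ 4 itself) showing that a post-fader insert's input level follows the fader while a pre-fader insert's does not.

```python
import numpy as np

def db_to_gain(db):
    """Convert a decibel value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def rms_dbfs(signal):
    """RMS level in dBFS, standing in for the level a plugin 'sees' at its input."""
    return 20.0 * np.log10(np.sqrt(np.mean(signal ** 2)))

# Hypothetical 1 kHz test tone standing in for a track's audio.
sr = 48000
t = np.arange(sr) / sr
track = 0.5 * np.sin(2.0 * np.pi * 1000.0 * t)

fader_db = -6.0  # pull the channel fader down 6 dB

# Pre-fader insert: the plugin analyzes the signal before the fader,
# so later fader moves don't change what it is fed.
pre_fader_input = track

# Post-fader insert: the fader gain is applied first, so the plugin's
# input level follows every fader move.
post_fader_input = track * db_to_gain(fader_db)

print(f"pre-fader input level:  {rms_dbfs(pre_fader_input):.1f} dBFS")   # unchanged by the fader
print(f"post-fader input level: {rms_dbfs(post_fader_input):.1f} dBFS")  # about 6 dB lower
```

Either way, as the reply notes, the learned spectral curve is a separate matter from the input gain; the sketch only shows which version of the signal the analysis stage would be fed.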

  • @mufcmusic8514
    @mufcmusic8514 10 months ago +1

    Best tutorial on this, thank you so much

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      Glad it was helpful! 🙏😎

  • @rickneibauer1
    @rickneibauer1 4 days ago +1

    Thank you though, great video

  • @NormanTiner
    @NormanTiner 10 months ago +2

    Quick question. Were those vocals heavily tuned? That first sustained note has that kind of weed-whacker artificial sound.

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      Yes, they were tuned with Melodyne. But what you’re hearing is the unison doubles on that first part, which give a glassy/chorus-type character to the vocal sound.

  • @frattuncbas
    @frattuncbas 4 months ago +1

    Use it after Ayaic Mix Monolith and see more magic 🙂 and add some Techivation stuff... Voilà! 🎉

    • @OliBeaudoin
      @OliBeaudoin  4 months ago +1

      I’ll check that plugin out! Thanks for your comments! 🙏😎

  • @Henaksi
    @Henaksi 1 month ago

    Great lesson.

  • @monomono
    @monomono 8 months ago +1

    So cool! Wish I had some cool music to work with

  • @kapitbanda
    @kapitbanda 2 months ago

    This is the workflow I’m looking for! Thanks! I also noticed adaptive is at 100% everywhere; I wonder if it’s hungry on CPU then?

    • @OliBeaudoin
      @OliBeaudoin  2 months ago +1

      Yes, using the adaptive setting uses more resources for sure, but it doesn’t have to be used in all situations. All “dynamic” plugins (working/adapting in real time) use more resources, so if you’re running short on CPU power, it’s better to use Smart:EQ with static settings (adaptive @ 0). Cheers!
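
To make the “dynamic plugins cost more CPU” point concrete, here is a rough Python/NumPy sketch (purely conceptual, not sonible’s algorithm): the static path applies one fixed spectral curve to every block, while the adaptive path re-derives a curve from each block’s own spectrum, and that extra per-block analysis is what shows up as CPU load.

```python
import time
import numpy as np

sr, block_size = 48000, 512
audio = np.random.randn(sr * 10)  # 10 seconds of stand-in audio
blocks = audio[: len(audio) // block_size * block_size].reshape(-1, block_size)

# Static mode: one fixed spectral curve, computed once, applied to every block.
static_curve = np.linspace(1.0, 0.8, block_size // 2 + 1)

def process_static(block):
    spectrum = np.fft.rfft(block)
    return np.fft.irfft(spectrum * static_curve, n=block_size)

# "Adaptive" mode, conceptually: a fresh curve is derived from each block's own
# spectrum (here, naively pulling every bin toward the block's average level).
def process_adaptive(block):
    spectrum = np.fft.rfft(block)
    magnitude = np.abs(spectrum) + 1e-12
    curve = np.clip(np.mean(magnitude) / magnitude, 0.5, 2.0)
    return np.fft.irfft(spectrum * curve, n=block_size)

for name, process in (("static", process_static), ("adaptive", process_adaptive)):
    start = time.perf_counter()
    for block in blocks:
        process(block)
    print(f"{name:8s} {time.perf_counter() - start:.3f} s")
```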

  • @guinganfg
    @guinganfg 7 months ago +1

    Nice video, my friend!!
    Noob question here: should I use Smart:EQ 4 as the last plugin in my chain, meaning compression and, let’s say, saturation first? Or should I use it as the first plugin in the chain, with the rest of the processing done after Smart:EQ 4?
    Thanks in advance! By the way, you’ve got a new subscriber, congrats on the content!
    Cheers! 🤘🏻🤘🏻🤘🏻

    • @OliBeaudoin
      @OliBeaudoin  7 months ago

      Thanks for your comment and sub!
      For the spot in the chain, it really depends on your approach/needs. Do you want it to do “heavy lifting” or do you want to use it more as a sort of polish?
      In my case, I like to use it at the end of the chain to get that last 5-10% of unmasking that standard EQ and other processing didn’t achieve. Remember, there are no set rules in mixing, it’s all about what you hear in the end.
      Hope that helps! 🙏

  • @DSWL_
    @DSWL_ 7 months ago +1

    I was looking for a video on how to route it in FL Studio 21 specifically, because when I try to make groups it doesn’t recognize the other mixer channels with instruments on them that I want to EQ. Should I make a dedicated bus channel?

    • @OliBeaudoin
      @OliBeaudoin  7 months ago

      Hey! I don’t use FL Studio, but yes, try creating dedicated busses. In the video, I printed all busses to stems and then loaded the plugin on all of them. Hope that helps! 🙏

  • @aiconic10
    @aiconic10 4 months ago +1

    Mine only shows the single instrument. I need to read the manual. I have no idea how to get the same screen you have.

    • @OliBeaudoin
      @OliBeaudoin  4 months ago

      Have you tried loading an instance on each bus/track, then adding it to a group?

  • @frattuncbas
    @frattuncbas 4 months ago +1

    So do you think it is more functional or correct to use Sonible before or after, when we want to characterize channels with an SSL or something similar?

    • @OliBeaudoin
      @OliBeaudoin  4 months ago +2

      The way I choose to use it is at the very end of my chain on each bus, but there’s no right or wrong. I prefer to do most of the heavy lifting manually and not rely too much on “smart” or automated decisions. That way, it can help you get that last 5-10% of clarity and punch by unmasking your busses without “taking over” your workflow!

    • @frattuncbas
      @frattuncbas 4 months ago +1

      @@OliBeaudoin I'm trying to do the same chain. 🙏

    • @OliBeaudoin
      @OliBeaudoin  4 months ago

      @@frattuncbas Awesome!

  • @Racleborg
    @Racleborg 3 months ago +1

    There’s no profile for acoustic drum overheads or toms. What do you suggest?

    • @OliBeaudoin
      @OliBeaudoin  3 months ago

      Overhead mic treatment can vary a lot. You can use the drum bus preset and then reduce the frequency range (width setting) where Smart:EQ works. Same for toms: narrow down what bothers you and reduce the width setting to that target area, usually the low-mid frequency range (there’s a small sketch of that idea just after this thread). The goal here is for Smart:EQ 4 to have a “dialogue” between busses and reduce frequency masking, not to entirely shape an EQ curve for each channel, as that would be the mixer’s job. Hope that helps!

    • @Racleborg
      @Racleborg 3 months ago +1

      Many thanks
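
As mentioned in the reply above, the width control is about restricting where the smart EQ is allowed to act. A minimal Python/NumPy illustration of that idea (a made-up correction curve and a hypothetical restrict_width helper, not how the plugin is implemented):

```python
import numpy as np

sr, n_fft = 48000, 4096
freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)

# Made-up full-range correction curve in dB, standing in for whatever
# a smart EQ might propose across the whole spectrum.
rng = np.random.default_rng(0)
full_curve_db = rng.normal(0.0, 2.0, len(freqs))

def restrict_width(curve_db, freqs, lo_hz, hi_hz):
    """Keep the correction only inside [lo_hz, hi_hz]; leave the rest untouched (0 dB)."""
    in_band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return np.where(in_band, curve_db, 0.0)

# e.g. limit the processing to the low mids for overheads or toms
low_mid_only_db = restrict_width(full_curve_db, freqs, 200.0, 800.0)
print(f"bins still affected: {np.count_nonzero(low_mid_only_db)} of {len(freqs)}")
```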

  • @rickneibauer1
    @rickneibauer1 4 days ago

    Wish you had gone back and forth between bypassed and engaged faster.

  • @xuser8314
    @xuser8314 10 months ago +1

    Where can I listen to the music? :)

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      It’s not yet released! The band is called All is Ashes. Check them out on YT or other DSPs!

  • @dougleydorite
    @dougleydorite 10 months ago

    The bass guitar is very pumpy, but it actually works in the mix

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      The stems were printed with sidechain compression on, that’s why!
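
For anyone unfamiliar with why sidechain compression makes a bass stem sound “pumpy”: the kick’s level drives gain reduction on the bass, so the bass ducks on every hit and swells back in between. A minimal Python/NumPy sketch of that idea (stand-in signals and made-up settings, nothing to do with the actual stems in the video):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr  # one second

# Stand-in signals: a steady bass note and a kick hit on every quarter note.
bass = 0.5 * np.sin(2.0 * np.pi * 60.0 * t)
kick = np.zeros_like(t)
for beat_start in np.arange(0.0, 1.0, 0.25):
    i = int(beat_start * sr)
    kick[i:i + 2000] = np.exp(-np.linspace(0.0, 8.0, 2000))  # short decaying burst

def envelope_follower(signal, sr, release_ms=80.0):
    """Crude peak follower with an exponential release."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for n, x in enumerate(np.abs(signal)):
        level = x if x > level else level * coeff
        env[n] = level
    return env

# Sidechain: the KICK's envelope controls gain reduction on the BASS.
# Every kick hit ducks the bass, and the bass swells back up -- the "pumping".
max_reduction_db = 6.0
reduction_db = max_reduction_db * np.clip(envelope_follower(kick, sr), 0.0, 1.0)
ducked_bass = bass * 10.0 ** (-reduction_db / 20.0)
```

Printing the stems with that processing baked in is why the pumping is audible before any EQ is applied.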

  • @LoserDub
    @LoserDub 10 months ago

    I have the Sonible smart limiter? (I forgot the exact name, I’m not home.) And I’m always having problems with its floor and headroom: it will constantly tell me to move the headroom to 1 dB but the arrow will be pointing down, or it will tell me to move the floor to x, and when I do it, it tells me to move it back. Frustrating, but when you just use your ears and follow the LUFS graph it’s great.

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      Smart:limit is a great limiter. Going with your ears is always the best though! Meters are only there to help!

  • @NormanTiner
    @NormanTiner 10 months ago +3

    One of the biggest issues with AI/machine learning across the board is its tendency to lean toward the average. You ask GPT a question and it'll answer with a weak, generalized answer. You generate a song and the lyrics are fitting but boring, and the arrangement is suited but not special. You ask it to arrange your mix and it'll clear up the masking, but it doesn't leave any creative masking or accentuate things to suit the song.
    It doesn't matter how much data is thrown at it, this is inherent in the technology. Eventually maybe there will be a focus on unique and creative outputs, but for now it just doesn't compete with human mastery.

    • @OliBeaudoin
      @OliBeaudoin  10 months ago +2

      That’s exactly why all this AI-driven processing has to be taken with a grain of salt 🧂. We can only achieve unique and original-sounding mixes/songs with a human/creative approach to it… until Skynet takes over! 🤖🤣

    • @dougleydorite
      @dougleydorite 10 months ago +1

      Make it nasty and use the mix/parallel blend to bring in some balance? Automate the mix knob to taste

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      @@dougleydorite That's a cool idea Doug!

  • @larsb.nielsen4481
    @larsb.nielsen4481 11 months ago +1

    Is it possible to automate the arrangement of the group management?
    E.g., in a part of the song where you need to rearrange the group management (vocals in the back, for example) 🤔

    • @OliBeaudoin
      @OliBeaudoin  11 months ago +1

      I tried and it doesn't seem like it is possible. As a workaround, I would duplicate the tracks (in this case BGV), and assign them to a new group. Voilà!

    • @larsb.nielsen4481
      @larsb.nielsen4481 11 months ago +1

      @@OliBeaudoin Hmm 🤔 can you please make a short video of this workaround? Thanks

    • @OliBeaudoin
      @OliBeaudoin  11 months ago +3

      @@larsb.nielsen4481 It's quite simple:
      1. Duplicate the tracks that you want treated differently for that part of the song
      2. Disable/mute the original events that were duplicated, but only in that specific part
      3. Create a new group in smart:EQ 4 with the duplicated tracks, and disable/mute all of their events outside that specific part
      4. Add that new group to a “Front/Middle/Back” depending on what you want to prioritize.
      Hope that helps!

    • @larsb.nielsen4481
      @larsb.nielsen4481 11 months ago +1

      @@OliBeaudoin Thanks 😀🤩

    • @OliBeaudoin
      @OliBeaudoin  11 months ago

      You're very welcome!

  • @NoNovaCain
    @NoNovaCain 10 months ago +1

    How do I use this with Reason 12? Please, someone help me

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      I’m not familiar with Reason, but any DAW that can load a third-party plugin (VST/AAX/Component) should be able to load the Smart:EQ 4 plugin like any other.

    • @NoNovaCain
      @NoNovaCain 10 months ago

      How do I get it to listen to all my tracks? Can anyone help me, please?

  • @evanduril
    @evanduril 10 months ago +3

    I found it’s not very good in melodic techno, especially with a snappy and distorted bass, when the bass leads the song along with some synth. It just lowers the dynamics and volume of everything, making everything average.

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      Have you tried narrowing the width of the EQ curve, so that it targets only the spectrum of your audio source where it's really needed? Let me know if that helps!

    • @evanduril
      @evanduril 10 months ago +1

      @@OliBeaudoin I did, but when I do, it doesn't include the bypassed frequencies for unmasking. I'm not sure if the AI has been trained on melodic techno like ArtBat, Medusa, etc.

    • @OliBeaudoin
      @OliBeaudoin  10 months ago +1

      @@evanduril Ok, thanks for letting me know! The algorithm should have assessed basically every genre out there. I'll check with the team about this issue!

    • @evanduril
      @evanduril 10 months ago

      @@OliBeaudoin I've been testing this for the last few days and it seems it's even worse than I thought, OR I don't really understand how it works. I created a bass, heavy in low frequencies, and a kick with a good amount of sub. Then I added both to a group and set both to group only. What I was expecting Smart:EQ to do is eliminate frequency masking. What it did was cut almost all the lows from the bass and boost the low-mids in the kick. So it broke the kick and removed the bass frequencies from the bass...

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      @@evanduril Hey Evan, have you tried using it in Tracks+group as well? Are these tracks reasonably balanced in the first place? Have you picked the right algorithm for each source? Let me know. It may be that the learning curve is not there yet. I was already used to Smart:EQ3, so it wasn't too hard to get used to the new features of version 4.
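
For readers following this exchange: “frequency masking” between a kick and a bass simply means both sources carry significant energy in the same part of the spectrum. A tiny Python/NumPy sketch of that idea with made-up stand-in signals (it says nothing about how Smart:EQ 4 actually measures or resolves masking):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr  # one second

# Made-up stand-ins: a sub-heavy kick thump and a bass note sharing the low end.
kick = np.exp(-t * 30.0) * np.sin(2.0 * np.pi * 50.0 * t)
bass = 0.4 * np.sin(2.0 * np.pi * 55.3 * t) + 0.2 * np.sin(2.0 * np.pi * 110.6 * t)

def normalized_spectrum(signal):
    """Magnitude spectrum scaled so its peak is 1."""
    mag = np.abs(np.fft.rfft(signal))
    return mag / mag.max()

freqs = np.fft.rfftfreq(len(t), 1.0 / sr)
kick_mag = normalized_spectrum(kick)
bass_mag = normalized_spectrum(bass)

# "Masking" here: the bins where BOTH sources are loud at the same time.
overlap = np.minimum(kick_mag, bass_mag)
worst_bin = int(np.argmax(overlap))
print(f"Strongest shared energy around {freqs[worst_bin]:.0f} Hz")
```

The point of grouping sources and assigning front/middle/back priority is to decide which one gives way in that shared region rather than carving it out of both; if both end up gutted, the starting balance and per-source algorithm choice are worth rechecking, as the reply above suggests.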

  • @artysanmobile
    @artysanmobile 9 months ago +1

    We don’t need a smart equalizer. Our ears are the perfect ‘brain’ for using an equalizer. What is needed instead is practice and understanding, combined with a precision tool with well-conceived controls. The whole plug-in market is drifting dangerously in the direction of “the machine knows best”, which is antithetical to art itself. The concept of all ‘smart’ plugins is that there is one right way to tackle whatever the job is. By extension, this can only lead to uniformity in the production of music, the very last thing music could ever need.

    • @OliBeaudoin
      @OliBeaudoin  9 months ago +3

      You have a point there. This type of tool has several advantages though: it gets you a quick balance in no time, it’s a great learning tool for understanding frequency masking, and it can serve to get that last 5 to 10% after a mix has been done.

  • @OhanaNery
    @OhanaNery 11 months ago

    🔥

  • @pd177
    @pd177 6 months ago +1

    Didn’t Smart:EQ 3 do that?

    • @OliBeaudoin
      @OliBeaudoin  6 months ago +1

      Yes, but without the group hierarchy of V4, and it’s missing a few other things as well

  • @garethde-witt6433
    @garethde-witt6433 11 months ago +1

    It’s not very useful

    • @OliBeaudoin
      @OliBeaudoin  11 months ago +2

      What do you mean?

    • @garethde-witt6433
      @garethde-witt6433 11 months ago +1

      @@OliBeaudoin It’s just another EQ; the only difference is the AI. I like to use my ears when EQing so I have a reasonable idea of when I have what I want. The AI makes rough suggestions which are calculated through machine learning, giving you “a potentially perfect mix curve”. This makes it useless and takes the feel out of the mix. And you end up with soulless mixes that sound boring. So it’s not a useful tool.

    • @OliBeaudoin
      @OliBeaudoin  11 months ago

      I get your point. We should never rely solely on AI, but it can give great insights. For preproduction purposes, it can also get you set up in no time with a decent balance.

    • @Harrysound
      @Harrysound 10 months ago +2

      It’s really good at levelling things out and giving you a starting point to use your ears with, like an SSL EQ for example

    • @OliBeaudoin
      @OliBeaudoin  10 months ago

      @@Harrysound Yes, it's a huge time saver!