PERFECT EXPLANATION!!! THANK YOU!
You're welcome! 😎🙏
FINALLY! Someone who knows how to explain and teach! Don't mean to be unkind but most people simply don't know how to explain the Smart:EQ 4 or any other tools when it comes to recording and the tools that are used. You're a great coach and teacher. Took me an hour to find you and I have subscribed. Thank you.
Thanks Captain! Glad it helped! 🙏😎
Thanks a lot, Oli. Clear explanations. Good music by the way ;-) Merci Oli
Merci mec! 😎🙏
My new reference vid for using this plugin, thx.
Thank you so much Saiben! Glad it could help!
Nice demo! Do you still do fine-tuning with your faders in the end, or are you leaving them at unity and dialing in only with Smart:EQ? And does changing fader levels on the individual tracks completely throw off Smart:EQ so that it needs to relearn?
Hey Stevie! Good questions there.
1. Smart:EQ 4 is not meant to replace volume automation with faders 🎚️, but rather to create space in the frequency spectrum (unmasking). So, yes, I would still do fader automation. I like to keep all faders at unity until it’s time for final automation. I do that by adjusting clip/audio event gain or within the plugin chain (any plugin with a gain setting will do).
2. Depending on your DAW’s mixer, when inserting a plugin, you can either load it PRE fader or POST fader, whichever suits your needs best depending on the context.
3. If Smart:EQ 4 was loaded post-fader, changing levels with the fader will affect input volume in the plugin, but it should not affect the frequency/spectral adjustment curve. I would rather load it pre-fader and then work on fine-tuning volume automation after.
Hope that helps! 🙏
Best tutorial on this thank you so much
Glad it was helpful! 🙏😎
Thank you though, great video
You’re welcome!
Quick question. Were those vocals heavily tuned? That first sustained note has that kind of weed whacker artificial sound.
Yes, they were tuned with Melodyne. But, what you’re hearing is the unison doubles for that first part, which gives a glassy/chorus-type of character to the vocal’s sound.
Use it after Ayaic Mix Monolith and see more magic 🙂 and add some Techivation stuff. Voilà! 🎉
I’ll check that plugin out! Thanks for your comments! 🙏😎
Great lesson.
Thank you! 🙏
so cool! wish i had any cool music to work with
🙏
This is the workflow I'm looking for! Thanks! I also noticed all adaptive settings are at 100%; I wonder if it's hungry on CPU then?
Yes, using the adaptive setting uses more resources for sure, but it doesn’t have to be used in all situations. All “dynamic” plugins (working/adapting in real-time) use more resources, so if you’re running short on CPU power, better to use Smart:EQ with static settings (adaptive @ 0). Cheers!
Nice video, my friend!!
Noob question here: should I use Smart:EQ 4 as the last plugin of my chain, meaning compression and, let's say, saturation first? Or should I use it as my first plugin on the chain, having the rest of the processing done after Smart:EQ 4?
Thanks in advance! By the way, you got a new subscriber; congrats on the content!
Cheers! 🤘🏻🤘🏻🤘🏻
Thanks for your comment and sub!
For the spot in the chain, it really depends on your approach/needs. Do you want it to do “heavy lifting” or do you want to use it more as a sort of polish?
In my case, I like to use it at the end of the chain to get that last 5-10% of unmasking that standard EQ and other processing didn’t achieve. Remember, there are no set rules in mixing, it’s all about what you hear in the end.
Hope that helps! 🙏
Was looking for a video on how to route it in FL 21 specifically, because when I try to make groups it doesn't recognize the other mixer channels with instruments on them that I want to EQ. Should I make a dedicated bus channel?
Hey! I don’t use FL Studio, but yes, try creating dedicated busses. In the video, I printed all busses to stems and then loaded the plugin on all of them. Hope that helps! 🙏
Mine only shows the single instrument. I need to read the manual. I have no idea how to get the same screen you have.
Have you tried loading an instance on each bus/track, then adding it to a group?
So do you think it is more functional or correct to use Sonible before or after when we want to characterize channels with SSL or something similar?
The way I choose to use it is at the very end of my chain on each bus, but there’s no right or wrong. I prefer to do most of the heavy lifting manually and not rely too much on “smart” or automated decisions. That way, it can help you get that last 5-10% of clarity and punch by unmasking your busses without “taking over” your workflow!
@@OliBeaudoin I'm trying to do same chain.🙏
@@frattuncbas Awesome!
There’s no profile for acoustic drum overheads or toms; what do you suggest?
Overhead mic treatment can vary a lot. You can use the preset for the drum bus and then reduce the frequency range (width setting) where Smart:EQ works. Same for toms. Narrow down what bothers you and reduce the width setting to that target area. Usually, that would be the low-mid frequency area. The goal here is that Smart:EQ 4 can have a “dialogue” between busses and reduce frequency masking, not entirely shape an EQ curve for each channel, as this would be the mixer’s job. Hope that helps!
Many thanks
Wish you would have gone back and forth with bypass/engaged faster.
Noted!
Where can I listen to the music? :)
It’s not yet released! The band is called All is Ashes. Check them out on YT or other DSPs!
The bass guitar is very pumpy but it actually works in the mix
The stems were printed with sidechain compression on, that’s why!
I have the Sonible (I forgot the exact name, I'm not home) smart:limit? And I'm always having problems with its floor and headroom: it will constantly tell me to move the headroom to 1 dB but the arrow will be pointing down, or it will tell me to move the floor to x, and when I do it, it tells me to go and move it back. Frustrating, but when you just use your ears and follow the LUFS graph it's great.
Smart:limit is a great limiter. Going with your ears is always the best though! Meters are only there to help!
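To make the "follow the LUFS graph" idea concrete, here is a toy numpy sketch of a loudness readout. This is a deliberately simplified illustration, not Sonible's metering: real LUFS per ITU-R BS.1770 adds a K-weighting filter and gating, which this skips entirely.

```python
import numpy as np

def rough_loudness_db(x):
    """Very rough loudness estimate: mean-square power in dB.
    Real LUFS (ITU-R BS.1770) adds K-weighting and gating; this skips both."""
    ms = np.mean(np.square(x))
    return 10 * np.log10(ms) if ms > 0 else float("-inf")

sr = 44100
t = np.arange(sr) / sr
full_scale_sine = np.sin(2 * np.pi * 440 * t)  # 1 second of a full-scale sine

print(rough_loudness_db(full_scale_sine))  # ≈ -3.01 dB (sine power is 0.5)
```

The takeaway matches the comment above: a meter like this is just a number to sanity-check against; the final call belongs to your ears.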
One of the biggest issues with AI/machine learning across the board is its tendency to lean toward the average. You ask GPT a question and it'll give a weak, generalized answer. You generate a song and the lyrics are fitting but boring, and the arrangement is suited but not special. You ask it to arrange your mix and it'll clear up the masking, but it doesn't leave any creative masking or accentuate things to suit the song.
It doesn't matter how much data is thrown at it, this is inherent in the technology. Eventually maybe there will be a focus on unique and creative outputs, but for now it just doesn't compete with human mastery.
That’s exactly why all this AI-driven processing has to be taken with a grain of salt 🧂. We can only achieve unique and original-sounding mixes/songs with a human/creative approach to it… until Skynet takes over! 🤖🤣
Make it nasty and use the mix/parallel blend to bring in some balance? Automate the mix knob to taste.
@@dougleydorite That's a cool idea Doug!
Is it possible to automate the arrangement of the group management?
E.g., a part of the song where you need to rearrange the group management (vocals in the back, for example) 🤔
I tried and it doesn't seem like it is possible. As a workaround, I would duplicate the tracks (in this case BGV), and assign them to a new group. Voilà!
@@OliBeaudoin hmm🤔can you please make a short video of this workaround? thanks
@@larsb.nielsen4481 It's quite simple:
1. Duplicate the tracks that you want to be treated differently for that part of the song
2. Disable/mute the original events that were duplicated, but only in that specific part
3. Create a new group in smart:EQ 4 with the duplicated tracks, and disable/mute all their events other than in that specific part
4. Add that new group to “Front/Middle/Back” depending on what you want to prioritize.
Hope that helps!
@@OliBeaudoin Thanks 😀🤩
You're very welcome!
How do I use this with Reason 12? Please, someone help me.
I’m not familiar with Reason, but any DAW that can load a third-party plugin (VST/AAX/Component) should be able to load the Smart:EQ 4 plugin like any other.
How do I get it to listen to all my tracks? Anyone help me, please.
I found it's not very good in melodic techno, especially with a snappy and distorted bass, when the bass leads the song along with some synth. It just lowers the dynamics and volume of everything, making everything average.
Have you tried narrowing the width of the EQ curve, so that it targets only the spectrum of your audio source where it's really needed? Let me know if that helps!
@@OliBeaudoin I did, but when I do, it doesn't include the bypassed frequencies for unmasking. I'm not sure the AI has been trained on melodic techno like ArtBat, Medusa, etc.
@@evanduril Ok, thanks for letting me know! The algorithm should have assessed basically every genre out there. I'll check with the team about this issue!
@@OliBeaudoin I've been testing this for the last few days, and it seems it's even worse than I thought, OR I don't really understand how it works. I've created some bass, heavy in low frequencies, and a kick with a good amount of sub. Then I've added both to a group and set both to "group only". What I was expecting Smart:EQ to do is eliminate frequency masking. What it did was cut almost all the lows from the bass and boost the low-mids in the kick. So it broke the kick and removed bass frequencies from the bass...
@@evanduril Hey Evan, have you tried using it in Tracks+group as well? Are these tracks reasonably balanced in the first place? Have you picked the right algorithm for each source? Let me know. It may be that the learning curve is not there yet. I was already used to Smart:EQ3, so it wasn't too hard to get used to the new features of version 4.
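For readers unsure what "frequency masking" means in this kick-vs-bass exchange, here is a toy numpy sketch that quantifies it as spectral overlap. This is purely an illustration of the concept, not Sonible's algorithm; the band layout and the cosine-similarity metric are my own assumptions.

```python
import numpy as np

def band_energies(x, sr, n_bands=12, fmax=8000.0):
    """Average spectral magnitude in log-spaced bands from 20 Hz to fmax."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    edges = np.geomspace(20.0, fmax, n_bands + 1)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def masking_overlap(a, b, sr):
    """Cosine similarity of two band-energy profiles: near 1 means the sources
    compete for the same bands (masking risk), near 0 means they stay apart."""
    ea, eb = band_energies(a, sr), band_energies(b, sr)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

sr = 44100
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 60 * t)    # sub-heavy "kick" stand-in
bass = np.sin(2 * np.pi * 65 * t)    # bass fundamental sitting right on top of it
hat = np.sin(2 * np.pi * 6000 * t)   # high-frequency source, no overlap

print(masking_overlap(kick, bass, sr))  # close to 1: heavy overlap, masking likely
print(masking_overlap(kick, hat, sr))   # close to 0: no masking
```

The kick/bass case above is exactly the hard one from this thread: when two sources share the same low band, an unmasking EQ has to carve one of them, and which one it carves is what the group priority settings decide.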
We don’t need a smart equalizer. Our ears are the perfect ‘brain’ for using an equalizer. What is needed instead is practice and understanding combined with a precision tool with well conceived controls. The whole plug-in market is drifting dangerously in the direction of “the machine knows best” which is antithetical to art itself. The concept of all ‘smart’ plugins is that there is one right way to tackle whatever the job is. By extension, this can only lead to uniformity in production of music, the very last thing music could ever need.
You have a point there. This type of tool has several advantages though: it gets you a quick balance in no time, it's a great learning tool for understanding frequency masking, and it can serve to get that last 5 to 10% after a mix has been done.
🔥
❤️
Didn't smart eq 3 do that?
Yes, but without the group hierarchy of v4, and a few other things as well.
It’s not very useful
What do you mean?
@@OliBeaudoin It’s just another EQ; the only difference is the AI. I like to use my ears when EQing so I have a reasonable idea of when I have what I want. The AI makes rough suggestions which are calculated through machine learning, giving you “a potentially perfect mix curve”. This makes it useless and takes the feel out of the mix, and you end up with soulless mixes that sound boring. So it’s not a useful tool.
I get your point. We should never rely solely on AI, but it can give great insights. For preproduction purposes, it can also get you set up in no time with a decent balance.
It’s really good at levelling out stuff and giving you a starting point to use your ears with, like an SSL EQ for example.
@@Harrysound Yes, it's a huge time saver!