Watch this and other videos a week before they premiere on Patreon:
www.patreon.com/posts/47713343
You made a really great video, it was really great - can you do something next time about the main part, with the lead sound, bass and kick overlapping?
This is really important for people to understand. Steve Duda once said: "I could make a better mixdown with just the faders than most people would using everything at their disposal."
You don't need to MAKE sounds work together. They should already work together from the get-go. And mixing should, most of the time, just ENHANCE the track instead of FIXING it.
Bingo
So much wisdom
Amen
Of course you have to make sounds work together during the actual sound design. If you are a "producer" who only uses samples you have to select them wisely though.
Big Facts!
“Create music that’s easy to mix”....
An unbelievably simple concept that I never adhered to. Thanks dude
How simple!
I generally agree with what you're saying - it's really good to make people think about what they're doing when EQing - but it can be a little misleading to think of this ("never shave off the fundamental") as another rule... if the fundamental is played by the bass, the pad might not sound constrained or empty in context. Of course this depends on a lot of variables, but generally a fundamental frequency is a sine and it doesn't matter too much which instrument plays it - and maybe you'd still want those harmonics of the "ghost fundamental" to connect with the subs / bass - it's really up to the situation imo. Another example for bass would be pushing a sine or any waveform into distortion and EQing afterwards to keep the harmonics but get rid of the rumble (replacing it with a cleaner sub sine) - you can't do that if you don't start with a lower note to generate those harmonics.
^ This, but I think it's important to know this piece of info (including your comment), especially if you're a beginner.
If you've been producing for a couple of years the info in the video has probably become second nature, but I made this mistake countless times when I was starting out; it would've been easy to get cleaner results had I known this from day one.
I'm at a point where I'm choosing whether to include or delete the low notes in an informed way (I'm actually working on a track with a lot of stacks right now and all of these things were extremely important for the final result), but not everyone is there yet.
This is probably the main origin of clutter in beginner/intermediate productions, so I think it's positive to have a video explaining this otherwise not-obvious concept.
@@matteodonato3918 Yeah, I agree... I mean, I do often delete the lower notes of a pad or another instrument if I realize it's clashing with the bass - but as I said, it's situational, and I usually try both EQing and deleting notes and see which one sounds better to my ears.
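For anyone wanting to try the distort-then-EQ idea from the comment above, here is a rough numpy/scipy sketch; the drive amount, cutoff and note choice are arbitrary assumptions for illustration, not settings from the video:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                  # sample rate in Hz
t = np.arange(fs * 2) / fs                  # two seconds
f0 = 55.0                                   # start with a low A1 sine

sine = np.sin(2 * np.pi * f0 * t)
distorted = np.tanh(6.0 * sine)             # distortion generates harmonics of the 55 Hz note

# High-pass above the fundamental: keep the generated harmonics, lose the low rumble.
sos = butter(4, 120.0, btype="highpass", fs=fs, output="sos")
harmonics_only = sosfiltfilt(sos, distorted)

# Replace the filtered-out fundamental with a cleaner sub sine layered underneath.
clean_sub = 0.5 * np.sin(2 * np.pi * f0 * t)
layered = harmonics_only + clean_sub
```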
What you are referring to is the so-called residual effect.
It's a psychoacoustic effect that comes into play in phones, for example: our brain can reconstruct the fundamental from its harmonics. (A phone can't reproduce the fundamental of our voices, yet we still hear people with a somewhat normal-sounding voice.)
In electronic production, leaving the fundamental out as you did can be a good idea, since you'll be adding noise here and there anyway to keep your mix from sounding too sterile,
but when you're recording a band you don't want the filters to be as steep, and you want that noise, since you won't be adding anything post-recording.
liked the video, keep it up!
Thanks for telling the actual name of this phenomenon!
@@JulianGrayMedia you're welcome!
(I'm picking up my audio engineering diploma today)
@Multorum Unum yeah, i was making sound way before, but now i know what I'm doing and the basics are quite simple.
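A minimal numpy sketch of the residual (missing-fundamental) effect described above, in case anyone wants to hear it for themselves; the harmonic count and amplitudes are arbitrary choices:

```python
import numpy as np
from scipy.io import wavfile

fs = 44100                          # sample rate in Hz
t = np.arange(fs * 2) / fs          # two seconds
f0 = 110.0                          # the "missing" fundamental (A2)

# Sum harmonics 2..7 only; there is no energy at 110 Hz itself.
tone = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(2, 8))
tone /= np.max(np.abs(tone))        # normalize to avoid clipping

# Most listeners will still perceive a 110 Hz pitch when auditioning this file.
wavfile.write("missing_fundamental.wav", fs, (tone * 32767).astype(np.int16))
```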
Been doing this for years. I tend to work with pianos and strings. I create a chord for the piano, let's say a major 7th, then open up another channel with the same chord and instrument, but this time an octave lower. You can EQ them differently to fit together. Pan one left, the other right. Create one big sound by just splitting it up. Strings are great because you can take a whole chord comprised of 6 or 7 notes and separate each note onto its own channel. So now you can have 7 separate channels, treated separately: panning, stereo width. I even add automatic left/right panning to the high notes and maybe some distortion on the low notes. It's just more control over the whole sound.
That difference is really incredible. I've been told before to consider the frequency dynamics of a track during the arrangement stage, but I feel like now I understand why a little bit better. This is super helpful.
I mean I guess?? I usually don’t eq/mix my tracks individually. You gotta hear all of the tracks in context imo. Maybe I want the “clutter” in my drop, so what if it’s not technically correct? If it sounds good it IS good. Sometimes you don’t even need to eq or compress depending on the sound. Don’t overthink it or else you’re just gonna be wasting time and not finishing the important elements of the song. But keep trucking!
Truck Yea Brother
Dropping straight gems. Really clear in your explanations! Easy sub!
I am a piano player and producer, and today you really opened up new horizons for my compositions. Thank you again.
This is so simple yet infinitely useful! I'm just back from a project where I tested this approach on chords: high-pass vs. varying the volume of the chord notes, lowest volume for the lowest notes, gradually increasing towards the higher notes. The result is EPIC. Thank you man, much love.
It is really important for us to think in terms of a puzzle we're putting the pieces together. Thanks for your video
One of the best mixing tutorials I’ve seen in the past year. Thanks a lot
I have been making this mistake for so long. Really appreciate the detailed explanation! Explaining the remnant harmonics was super helpful. Thanks!
Glad it was helpful!
Woh, you changed the way I'll approach producing. So helpful, Thx
This is awesome and very informative, especially since a lot of information out there says to low cut everything rather than substituting notes.
Low cut what you can't remove from the source itself ;)
This is such a great lesson and something that I somehow did not realize nor give much thought.
Thank you!!
This video was super well done man! The number one question I get about my music is how I get it to sound so tight. And it all comes down to what you're talking about. Sound selection is the majority of your mix, and with selecting sounds it all comes down to the harmonic range each one fills, so thank you for reiterating this in a way that is easy to understand.
Loved the messy/raw background and natural look of the video. "Make your music frequency compatible"; I think the same, but I have not experimented with it yet.
How do you utilize this to combat the YouTube EDM producer go-to -- hella layering of the same notes with new sounds? Would different waveforms clear up some of the muddiness?
Wow. Julian. I'm in awe. Looking back on past projects I think that's my issue, particularly with my mid-low to sub range instruments. I think this will really help during my production sessions.
A lot of "traditional" instruments and performance groups automatically separate frequencies. For example in a string quartet, violin, viola, and cello have little overlap of frequencies. In a rock band bass guitar is below rhythm guitar, and lead guitar tends to use the high strings. In a solo piano recording the keys literally correspond to the spectrum so two keys can't hit the same frequency.
In electronic music you have to think about these things a bit more because a "pad" sound can end up with bass frequencies in it, chord stacks can pretty much cover the whole spectrum, and basses can have high frequencies that conflict with everything. On top of that, the presets in most synthesizers are meant to sound really good on their own, to sell synthesizers or plug-ins. These sounds sound great because they fill up the spectrum all by themselves, but when you're making a track they can be the worst choice. I use presets a lot but I often find myself removing one of their oscillators or filtering to make them sound thinner and weaker... which works much better in the mix.
Anyway, this video has some great advice and it's something I wish I learned 20 years earlier...
but in a rock band you wouldn't crank the bass dial on your amp... you would rather take it down a bit... EQing still has a place in rock music.
in orchestral music you might also not always cleanly separate every frequency - yes in a string quartet maybe but in an orchestra you might combine different low instruments for their specific timbres. I think it's good to be aware of which notes are playing but you shouldn't be afraid to double fundamentals with different harmonic structures (in moderation and depending on the register) if it sounds good to your ears... I agree with what you are saying about presets tho, especially bass presets tend to be pretty bloated. btw filtering is basically the same as a lowcut on an EQ (which again can be ok if you know what you are doing).
spot on
This was great insight. Thank you for explaining so meticulously
Great vid! Thanks Julian
Very similar to what you said years ago when you talked about mixing in a 3D box using octaves! This time around came with a clear example of the difference. A follow-up mix breakdown would be insightful to see how we can apply the "band arrangement" in our DAW. Keep up the good work, great tutorial!
This is very helpful! Julian! thnx!
Thank you. Finally someone explains it in a straightforward way.
quality video Julian. As always ! Very clear and concise and all made perfect sense !
This is a really good insight. Thank you!
But even when you high-pass, you can see that you don't completely get rid of the fundamental. It's still there, just at a lower dB. So unless your bass and pad are being played by the same synth, I feel that letting the pad share the fundamental at a lower volume can help blend it with the bass.
But it also really depends on what kind of sound you want. What’s shown in this vid can sound “cleaner” with plenty of separation to perhaps fit more elements. Whereas keeping the bass fundamental in can sound fuller and warmer (and yes, if overdone with too many sounds…muddy). So imo all depends on genre/sound you’re going for. That said, this is a great video so that people can learn to be more intentional with their writing and EQing.
And not just sharing the fundamental, but having those harmonics of the fundamental seems like it could blend the sound more too. Maybe that’s also what you were saying. I personally like a lot of the fundamental in the low end to make music sound warmer so I feel like the overlap could also be a good thing. I feel like this video should present it a little more as an intentional tool like you’re saying rather than a sort of rule.
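To put a number on "still there, just at a lower dB": a small scipy sketch that evaluates a typical 12 dB/oct high-pass at the note's fundamental; the cutoff, slope and note are arbitrary assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 44100
fundamental = 110.0            # A2, the note being "cut"
cutoff = 220.0                 # high-pass placed an octave above it

# A 2nd-order Butterworth is roughly a 12 dB/octave EQ high-pass.
sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")
w, h = sosfreqz(sos, worN=[fundamental, cutoff, 2 * cutoff], fs=fs)

for f, mag in zip(w, np.abs(h)):
    print(f"{f:6.1f} Hz -> {20 * np.log10(mag):6.1f} dB")
# The fundamental comes back attenuated (about -12 dB here), not removed.
```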
Neat and beautifully explained.. it goes down in one gulp.. Great content bud 🔥 thanks a lot for these tips
It's different when you're layering bass sounds together ;) especially in bass house music, they layer so many bass sounds together to create what reads as one bass sound.
Learned something today. Thanks
great explanation and example preview!
Hi, Julian.
Firstly, I'd like to say that even though I'm not an Ableton user, your advice could be applied to almost any DAW. Your explanation and reasoning are really simple and logical, and you've done a really good job of delivering your point. But here are my two cents in relation to your advice.
In my honest opinion, I think it's actually quite alright to have different instruments occupy the SAME notes, which results in higher energy levels in the part of the spectrum related to those notes, ultimately causing the clutter/overlap you mentioned.
However, I feel that it is OKAY to have an overlap of sounds. The main difference lies in how we manage the amount of overlap so as to avoid masking the various elements present in the context of a mix; that's the art of mixing.
Hence, corrective/subtractive EQ is the best way to sculpt sounds and manage the energy levels caused by these overlaps within a particular frequency range, with the main objective of finding a good balance throughout the spectrum, whilst additive EQ, on the other hand, is useful for providing more clarity and character to the sounds.
Generally, I think the best advice that I've gotten from my little experience in production is this:
If it SOUNDS good, that's what matters most!
P.S. Thank you for your advice and all the best! ☺️
Alex
Great point. Why leave something in if you're just going to cut it out? Why use all MIDI when few do trick?
save time, sea world. thank.
Great video and explanation as always
really well explained, cheers
Super informative video! This is one thing that I happen to do already, just because of the way I've always done things, but it never quite occurred to me the sonic implication of stripping a sound of its fundamental frequency. On an oscilloscope, the result would be really easy to see --- you've lost the sound's foundation, and the residual harmonics are just stuck there mixed in with everything else.
Very good lesson learnt
Wonderfully explained!
This Is The Pill We ALL SEEEK
Good to see you here zen!
@plot device. it’s my life bruh my life’s work
@plot device. yes sir can’t hate me for trying to keep up with the Meta much love. Peace
@plot device. can’t please everyone as they say but thanks for being honest. All the best.
Papa Zen
Yo this is insanely helpful. Thank you!
Glad to hear it!
I've seen this opinion before and I don't disagree with it, but I'm also a bit indifferent to the specific argument because it's a judgment on whether the sound lives in the arrangement or in the mix. If you start with total control over the synths and sequence, there's no particular reason not to arrange it as best you can before mixing, just like ABBA did it in the 70's (Dancing Queen is one of my go-to examples of the entire arrangement being shaped to give every element space). But if what you're working with is produced samples with their own arrangement already in place, mixing for space becomes way more critical.
Lately I've gotten interested in building up a template project again, and a major factor in making templates work well is to encode "set-and-forget" mixing decisions. In this role, the major parts are pre-shelved and given some treatments before you've written a single note or even decided on the instrument, so you end up with the mix determining the arrangement, because what will sound good is that which sounds good in that mix - and then the question of whether you want a missing fundamental is settled just by whether or not you want energy filling out the higher frequencies. Often you do, but it doesn't have to come from playing another note of that patch - it can come from FX and layering too. There are a lot of stylistic concerns entailed in the question, but my agreement mostly stems from the premise of solving as many sources of mix problems early on, so that the overall processing chain stays light with few artifacts.
I like your approach - very common sense, but more subtle.
Really helpful - basic but essential. I don't remember seeing a tutorial talking about this yet.
Really helpful. Thank you
Sick stuff!
This is one of the most important mixing technique videos on YouTube. The pros do this a lot! Sound selection isn't only about choosing great sounds at the right frequencies, it's also HOW you play them and how that relates to the other sounds and the big picture of the mix. Thanks man!
Omg! Thanks so much! That was the simplest, best arrangement class in years!
Not everyone on YouTube talks about this topic, and it's so important. Thank you very much
I constantly thank God for youtube tutorials like this. This is one of the most useful pieces of insight I've ever learned here.
Appreciated!
You constantly thank God for basic YouTube EQ tutorials? Like, down on your knees at 6am, or how are we talking
Great video to help understand how and why we make our EQ decisions. Thanks Julian!
Epic vid man. Best tut I’ve seen in a while
You're halfway there in your quest to clarity. This can be taken one step further though.
Anyone who has studied Jazz theory will get what I mean!
ELABORATE! :)
Yes, please elaborate!
Uh, maybe playing competing notes separately in time? If they never overlap in time, you don't have a problem.
I think he means that in jazz the piano (or guitar) won't play the root note (and often not the 5th either) and will focus on the 3rd, 7th, 9th etc., playing with inversions and open voicings, while the bass plays the root and fifth plus a few passing notes to create what they call a « walking bass »
Very cool and straightforward tutorial btw !
@@JulianGrayMedia rootless voicings ;)
Great advice.
This is cool information about sound designing. But not enough to make me stop using the hi pass to clean my sounds up. lol!
I've been mixing for years and I teach exactly what you said "don't do". I guess it's true what the greats say... mixing is not an exact science but is an exact science. What some say "don't do" is exactly what works for others.
But cool sound design video though. 👍🏾
Man, I really enjoyed this video!!!! You have a new sub!!!!
You can't judge a sound on how it sounds in isolation. If you're making EQ changes to get a sound to sit in the mix, then you have to hear it in the context of that mix. Really, listen to isolated stems from any of your favorite songs; there's a good chance that a lot of those tracks will sound really thin and shitty on their own, but it's all in how they come together.
Totally agree with you, what this guy said is the most stupid thing I've heard
This! This video is such terrible advice. How an instrument sounds in isolation is irrelevant.
I agree to disagree, most of my favorite mixes are written with a similar idea to this in mind
I think what he's getting at is not so much that high-passing leads to thin and shitty sounds, more that by high-passing beyond the fundamental you're effectively losing the note but leaving the noise from the harmonics. By removing the note and instead playing it on a sound more suited to that register you have better separation, which actually then gives you more breadth to EQ.
@@MrClarkio But there are so many examples of times you might want to keep the harmonics of a sound but don't need the fundamental.
Maybe you have a deep sub bass and you want another big saw bass hit for an impact at a drop or something. If the sub is playing at A1, you might still want the bass hit to sound as if it's playing an A1, but you don't actually want more of that fundamental. It's just going to create phase issues and your bass levels will be all over the place. All you really want are the harmonics from that bass impact.
Really good stuff. Thanks.
Thanks Julian
the puzzle i was trying to figure out is goddamn complete now, thank you, sir
4:23 You can opt to remove the notes from the synth that will be cut by the EQ. It’s like a pianist not playing the root because the bassist is covering it.
EDIT: Ah, that’s exactly what he suggested. I jumped the gun with my comment. My bad
How would I apply this idea to recorded music, like with guitar, bass keys etc?
just keep it in mind while writing
Also you probably don't have to quite as much because it's highly impractical to be playing guitar notes on a bass in the first place
The situation gets even trickier with natural sounds that have a more complex harmonic series than just octaves. That would make the A/B example even more evident. Thanks for the vid
thanks dude, good stuff!
Really interesting detail, important for the final arrangement. And yes, better than EQ!
Yes and no. In general this is really good advice, but in some cases I would still write notes that sit under the low cut. For example, if you have a really low bass note that isn't matched by other bass-like sounds in the mix, I would make a second bass with the same note (and a lot of distortion) but with a heavy low cut. The two basses blend smoothly because they play the same note, and the second bass makes it sound good on smaller speakers. Do not underestimate how much low end your monitors give you; in other listening situations the "note-separation-by-frequency" thing might kill your juicy low bass.
Remember, this was for chords generally.
Such a great video for newer producers like myself. I swear I’m just on auto pilot with hi passing lol
nice one sir
This is good advice for electronic sounds, but for samples such as pianos, strings or vocals, there will be microphone and room rumble.
It's better to highpass recorded sounds to remove these types of artifacts.
Great point! It's logical, but many don't see it. Thank you
Can you do a similar video on how to eq drums?
If you double up the root note like at 6:31, I think there's no difference between EQ'ing vs. omitting the lower root note. I'll still hear an A minor chord, i.e. the upper harmonics don't change. Or am I missing the point here?
The upper harmonics will change when omitting the root note. The root note has some unique harmonics.
For example, A1's 3rd harmonic is E3, whereas A2's 3rd harmonic is E4. If the A1 is EQ'ed out, you will hear an E3 which doesn't belong.
These differences make it possible to hear when two notes are overlapping in different octaves.
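The arithmetic is quick to check (equal temperament with A4 = 440 Hz assumed):

```python
def note_freq(semitones_from_a4: int) -> float:
    """Equal-temperament frequency in Hz, relative to A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

A1, A2 = note_freq(-36), note_freq(-24)     # 55.0 Hz and 110.0 Hz
E3, E4 = note_freq(-17), note_freq(-5)      # ~164.8 Hz and ~329.6 Hz

print(f"A1 3rd harmonic: {3 * A1:.1f} Hz vs E3 at {E3:.1f} Hz")   # 165.0 vs 164.8
print(f"A2 3rd harmonic: {3 * A2:.1f} Hz vs E4 at {E4:.1f} Hz")   # 330.0 vs 329.6
```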
Thank you for this
Great Tip! 💡
Nice tip!
It's still worth checking whether you should low cut, due to aliasing on rich sounds and processing
Really helpful dude thanks
Thanks, but gimme the link to the outro song plz 💗
Which is why writing and mixing aren't separate phases, but more of a back-and-forth. Of course, you might want the harmonics from a ghosted note, just depends on the situation.
Absolutely!
How do you compose full-frequency tracks (like, say, saw synth trance chorus) around vocals?
Trance is tricky because super saws are so harmonically rich. I’d say just make sure that you don’t have too much information in the octave that the vocal is singing in
This is only really applicable to digital electronic music. The entire reason a low cut/high pass filter is a thing is to remove unwanted noise (subharmonics from distortion, accidentals, body resonance, room resonances, electrical interference etc.) from the mix, to allow space for other instruments. There is basically ALWAYS going to be sound picked up by a mic below the fundamental of what is being recorded, and it will muddy up the mix, especially if you have a bass guitar in the mix (body resonance on most instruments is in the 20-200 Hz range, fundamentals on a bass will be somewhere in the 40 Hz-1 kHz range).
This is natural mixing!
Wow this is helpful.
Are there ever times you want to have two instruments playing the same note? Or is this basically always the rule to follow?
Yes if you’re very careful and intentional with your layering (2 tones playing the same notes and rhythm for example)
brilliant!
good lesson ty
Really awesome stuff here!
This was really effective, but now I'm a bit confused about how to make the drums sit nicely with the other notes in the octave I'll be playing them in.
You totally make sense! That's why I don't play chords (with one instrument). Rather, I place the notes on different tracks with different instruments to make a rich chord progression.
Polyphonic notes always distort the sound and make it worse when you add effects to the instrument. It leads you to remixing, editing, removing and adding new tracks. It's a total waste of time! Simplicity is the key!
Lastly, I use the bass as a transitional sound to carry the vocals to the melody.
Q: how does this work with bass?
Often I high-pass it at 100 Hz so that I can create another channel for a sub bass. When I move the bass notes an octave higher it's too high, but if I keep them the same the bass loses its foundation. What should I do then?
The point is to keep the fundamental of your sub and bass an octave apart. If the sub is interfering with the bass you could try a high cut on the sub. If you need to raise the octave of the bass you could try closing the synth's filter cutoff more so fewer of the harmonics come through.
@@dilbydj I got the first part about the high cut on the sub. As for the bass itself, if I use notes an octave up from the sub (which is more than 100 Hz, for example), the bass starts to sound too mid-rangy and not like a bass.. so basically I do follow the tutorial advice but the bass doesn't end up sounding like a bass. How do I overcome this?
Don’t listen to this video. What you’re describing is just one of the many situations where you would want to cut the fundamental.
Having a separate sub and top bass is very common and gives you much more control over the level and consistency of your sub. You want the top bass to sound like it's playing the bass notes, but you actually only want the harmonics of the sound, as the fundamental is being covered by your sub bass.
Keep doing what you’re doing.
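For the 100 Hz split being discussed above, the note-to-frequency arithmetic is easy to sanity-check (equal temperament, A4 = 440 Hz; the 100 Hz figure is just the value from the question, not a recommendation):

```python
def note_freq(midi_note: int) -> float:
    """Equal-temperament frequency in Hz, MIDI note 69 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

crossover = 100.0                            # the high-pass point from the question
for name, midi in [("E1", 28), ("A1", 33), ("E2", 40), ("A2", 45)]:
    f = note_freq(midi)
    side = "below" if f < crossover else "above"
    print(f"{name}: {f:6.1f} Hz fundamental ({side} the {crossover:.0f} Hz split)")
# E1, A1 and even E2 fall below 100 Hz, so a 100 Hz high-pass removes the
# fundamental of the top bass on most of those notes; the perceived low end
# then has to come from the sub layer plus the top bass's harmonics.
```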
And now I know. Thanks Julian!
Thank you
1:23 It's actually an Amin7. I know it's a bit nitpicky, but when you're talking about dissonant harmonics etc., using a chord that may be considered dissonant by some without saying so may be confusing to some people ;D
Also your room is hella reverberant, you may want to consider some acoustic treatment.
You say the sound doesn't sound good without its fundamental. But you're not listening in context. The whole point of that high-pass is that the bass will be filling the spot of that fundamental.
THIS ^
I think you slightly missed his point there. What he was trying to say was, why use a high pass to filter out the fundamental when you can just delete the low note altogether (in the pad) and make the EQ unnecessary + end up with less clutter in the higher frequencies.
@@liamzeyle4283 Because you want the overtones? Honestly, this is pretty straightforward knowledge for a lot of people who make pad/supersaw-driven music like Trance/Big Room/Future Bass.
This is true for simpler chords, but arpeggios and faster chord progressions usually don't work as well with a bass, so it's still something to keep in mind.
Thanks for the vid :) I think you could easily make another video emphasizing the opposite, which would be by having other instruments play the harmonics of a missing fundamental low note, your "bass" will come through better on small speakers due to the psycho-acoustic effect that it has. Ohhh audio!
Man thank you so much, I've been producing for more than 6 years and damn, I never thought of this! I will take a very close look at your channel now so I can improve way more.
Love this!
Dude thank you so much for this!
Beautiful video man... but.... what if I was using a wave loop and not MIDI?... is there no hope for this technique???