Good work dude. Just to expand on something, the proximity effect isn’t so much from you “pressing more energy” into the capsule, it’s more of a nerd reason!
In cardioid mode, the voltages and acoustic chamber around the capsule cancel out sound from behind - and the circuit is tuned by the manufacturers to take this into account when further away. When closer, the system essentially breaks down allowing the low end to be more pronounced, along the lines of the inverse square law. I think it’s more that the capsule was always bassy, but the lows are rejected more at distance because of that cardioid design.
We hear almost no proximity effect on an Omni capsule, and an even more pronounced effect on the other end of the scale as Figure-8 mode can be really boomy if you’re not careful!
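For the curious, here's a rough numeric sketch of that idea, using the textbook pressure-gradient model (boost ≈ sqrt(1 + 1/(kr)²)). The distances and frequencies are just illustrative, not measurements of any specific mic:

```python
# Rough sketch of the proximity effect for a pressure-gradient (cardioid/figure-8) element.
# Textbook approximation: low-frequency boost ~ sqrt(1 + (c / (2*pi*f*r))^2),
# where r is source distance and c the speed of sound. Values are illustrative only.
import math

C = 343.0  # speed of sound, m/s

def proximity_boost_db(freq_hz: float, distance_m: float) -> float:
    """Approximate level rise (dB) of a pure pressure-gradient element vs. a far-field source."""
    kr = 2 * math.pi * freq_hz * distance_m / C
    return 20 * math.log10(math.sqrt(1 + 1 / (kr ** 2)))

for distance in (0.05, 0.15, 0.60):  # 5 cm, 15 cm, 60 cm
    row = ", ".join(f"{f} Hz: +{proximity_boost_db(f, distance):.1f} dB" for f in (100, 300, 1000))
    print(f"{distance * 100:>4.0f} cm -> {row}")
```

The low end blows up as you get close and mostly disappears at arm's length, which matches the "the capsule was always bassy, the distance just hides it" framing above.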
A fun (?) side effect of the design of cardioid mics is that if you cover up the grill (ahem metal vocalists), you effectively turn it into an omni mic. And now you get tons of feedback on stage!
Great explanation!
@@nickglover As an audio engineer, it hurts my heart to have folks cup hand-held dynamic microphones lol.
Audio Engineer here - lots of great fundamentals in this. You even hit on some of the "art" behind it, where there are genuine differences of opinion in areas like the order of the signal chain. The reality is that you need to know the fundamentals and actually understand what you're doing; once you're there, breaking the rules is fine. At the end of the day, the end result speaks for itself. If it sounds good, it sounds good, but you gotta know how you got there. If you don't know how you got there, you just got lucky.
Whole-heartedly agree. If it sounds good, it sounds good. Just look at the way Finneas mixed Billie Eilish's "Ocean Eyes". There is a whole list of rules that were broken, yet it still sounds amazing for what it is and what they were working with (Roland Quad-Capture, AT2020 and Stock Logic Pro).
Just got my RE20 and had to start my stream, saw this video and figured I'd have time for a quick guide and MAN 50 minutes?! I'll go through this later, skimmed through the information and it's golden!
You might be the only person I've ever heard describe the SM7B as bright. I see why you'd come to that conclusion after looking at the graph, but in my experience the mic is very dark, and when EQing, a lot of folks actually boost those highs because it just needs it. When it comes to the SM7B vs the RE20, I think out of the box with minimal processing the RE20 is just much better, especially for darker voices such as my own. It has this really nice boost in the highs and this punchy low end that just sounds REALLY pleasing on damn near any voice you throw at it. I actually like the SM7B for the reasons you like the RE20 - I feel like I can EQ and process my SM7B waaaaaay more flexibly than my RE20. But that's the beauty of this hobby and the differences in everyone's voices. Always enjoy watching audio stuff. Also, I agree with your recent attack on the BEACN mic - that thing just sounds not very good lmao
yeah the SM7B is dark not bright, I've owned a few and always regret it when I hear it against my RE27 lol
A bump of over 5 dB @ 6 kHz is pretty friggin' bright - certainly 'forward' sounding and sibilant.
It's dark at low impedance and gain, but why be sub-optimal - it wants a lot of gain and likes a higher impedance than the usual "x10 rule".
Dude!! You're my new best friend!
Once upon a time, I was a really dumb 17-year-old who, as many dumb people do, thought he'd just throw a bunch of money at his audio problem and bought an RE20. And knowing absolutely jack about audio mixing, my videos sounded terrible for years. Now thanks to you teaching me about the frequency response graphs, I know why!
Audio tutorials are so sparse on YouTube, especially because the main forerunner is Mike Russle, who will only teach you how to make a ridiculously compressed radio voice that sounds awful for streaming or YouTube - so I really appreciate this video. I'm actually excited to go play with my EQ after learning all this info and to check out Joe Gilder's videos. Thank you so much for all the help and easy-to-understand explanations!
I have needed an in depth video on this for literally the entire last year 😭 Thank you for sharing! I’ve been so overwhelmed with trying to make my current microphone work, that I just bought a different one, so I’ll have to start all over with EQ and everything… this will be super helpful 🙏🏻
The windows sweater is hella dope, I want one
After watching your video, the concept of EQ and how it's used to improve a person's voice finally clicked for me. A missing link between not knowing anything about audio and knowing enough to start learning on your own has been bridged. For years I wanted to improve my understanding of audio, but whenever I went to learn about it, it was too overwhelming and I couldn't grasp what was being talked about. This of course has led to misconceptions such as "you must get the Shure SM7B", and relying a lot on audio presets. Thanks for creating such an in-depth guide for the regular consumer.
Yesss! I’m so glad this resonated with someone. It took me YEARS and YEARS to “get” EQ
In response to “The SM7b is not a flat microphone” statement, while ‘technically’ true, the reasoning stated is not the case.
You could say the microphone overall is raised by 5 dB, but since dB (decibels) is a measure of level, and the whole frequency range is raised rather than just a portion of the response spectrum, you could essentially bring that overall level back down to zero and see the same shape of graph.
The graph is more so there to show you the difference in level between frequencies rather than the "boost" you are describing.
Remember, as important as frequency response graphs are for purchasing and understanding a microphone, they can vary depending on who does the analysis and shouldn't be taken as the whole truth. I have personally seen response graphs that differed wildly from my own experience and testing.
A good place to see response graphs done reliably is: Audio Test Kitchen
Also, with respect to the Beacon (or 'Becon') microphone, we only have what the manufacturer gives us, and at first glance it reminded me of the Audio-Technica AT-2020's frequency response as measured by Audio Test Kitchen, albeit with a huge boost in the mid-highs starting at 2 kHz, whereas the AT-2020 starts its boost a bit higher in the mid-highs at 7.5 kHz. This adds to the fatiguing nature of the Beacon microphone, as that is the range where intelligibility increases, but also sibilance.
I am by no means defending the Beacon microphone, as I also do not like the Audio-Technica AT-2020 it reminds me of.
If you disagree, you can comment below and I’ll respond cordially. Otherwise I’m more than willing to teach some folks about response graphs, EQ and proximity effect as I build my own microphones.
Yeah, I haven't been in the office to update the description, but it turns out the graphs I found from 3 different sources, seemingly made differently, were all wrong. That's 100% on me, but at least I kinda corrected for it when actually describing the curve. It's also why it's valuable for me to start testing them in my environment with variables normalized, etc. - that should produce something easier for me to tangibly work with.
That being said, it's still not a flat mic, given the high end, regardless.
But yes, the graphs I found were incorrect, so the statements that it's all on a +5 dB shelf and "all boosted" were incorrect. Apologies.
@@EposVox Very much agreed. Though when audio engineers talk about flat, it generally has a lot to do with the mid frequencies. Even a microphone like the Neumann U87ai (touted as one of the best workhorse microphones and very flat by many engineers) is technically not a flat microphone, as it has a presence boost, similar to the SM7B, starting in the upper mids. Though there are very flat microphones, e.g. the sE Electronics T2 or the Warm Audio WA-87, flat is not always ideal, as you've addressed in your video. The most important thing is how well the microphone takes EQ, and that mostly comes down to a smooth response and other factors such as resonance (room and body) and components.
Sorry I write too much, this is just the first video you've released that pertains to my profession and I'm very passionate about it. I've been watching you for years now.
Appreciate the insight!
Well done - that whole section at 15 minutes is what I used for years to work on understanding. Love it.
You touch on it, and it's always worth doing as well - but listen on different headphones too: open backs, closed backs, earbuds. You can find key harshness and boominess on that playback that you might have EQ'd too harshly for one setup versus another. You'll find with women and men there are different areas of the voice that pop, and women don't always sound the best with boom. Then you have the dynamic versus condenser issue, then the voice use - voiceover, singing, post-production echo, etc. Still, you nail this whole video as a starter masterclass.
Also, the whole SM7 curve appearing boosted has to do with gain too - so the curve is flat, just shifted up and calibrated off against their testing device.
This is officially my favorite video of yours, I’ve been using a lot of your guidance for my own gaming and music live streams for a long time now. Even though you say you think a lot of the words you used to describe sound are silly, I know exactly what you mean and I can’t wait to get to refining my sound even more after watching this. Thank you so much!
Okay, that sweater is low-key absolutely FIRE!!!!!!
You've come a long way with these videos. I remember when you used to do Audacity tutorials saying to boost the bass and the highs - most people did that back then. In fact, I used to do that until 2019, when I saw a friend of Potato Jet was making a channel, and then I learned so much about EQing and compression. Alex Knickerbocker is his name. I'm glad to see you're clearing up a lot here, even if it's rather late.
The funny thing is that at the end of the video you can see he EQs his mic exactly the same way: boost lows and highs. He did all this research and put so much time and effort into the subject only to do the exact same thing. If he just put a high-pass filter on at around 80 Hz and didn't touch anything else, it would sound just as good.
With him boosting the lows, it's boomy in my studio monitor speakers and studio headphones (DT 770 Pros), especially since he's so close to the mic. I'm sure it sounds good for people listening on laptop speakers/phones, but it's peaky in the bass and highs for me.
At least he's not EQing it heavily compared to some other YouTubers. I legit can't watch Piximperfect videos because he has a $2000 mic that he makes sound like a $20 mic by boosting the lows way too high and adding tons of compression that sounds cheap.
Love it! Mic sounds great. 💙
One comment/correction on the SM7B:
I can't agree on the explanation at 35:43. For whatever reason the frequency graph you found was just normalized differently (to 0 dB at the lowest point). Looking into the data sheet from the Shure website I find the same frequency graph for the SM7B but normalized to 0 dB at 1 kHz, just as all the other graphs in this video and as I usually see in data sheets. So it looks like there is no actual boost, but only a different offset in the graph.
And as you already mentioned at 35:58 the graph in itself then is quite flat. Except for the dip at around 7 kHz the graph stays within a 2 dB window (related to the 1 kHz response) in a range from ca. 80 Hz to 13 kHz. That's not so different from the Electrovoice RE 20 shown directly before (the different scaling makes it deceptive).
This doesn't affect most of the other things you said of course. But still I think it is worth mentioning.
Other than that, a good intro to this interesting subject. :)
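For anyone following along, that "different normalization" amounts to subtracting the 1 kHz value from every point so graphs with different offsets become comparable. A tiny sketch with made-up numbers (not Shure's actual data):

```python
# Minimal sketch of re-normalizing a response curve to 0 dB at 1 kHz so two
# differently-offset graphs become comparable. The data points here are hypothetical.
freqs_hz = [50, 100, 1000, 6000, 10000]
level_db = [3.0, 4.5, 5.0, 9.0, 4.0]  # made-up curve normalized to its lowest point

ref_db = level_db[freqs_hz.index(1000)]            # level at 1 kHz
normalized = [round(db - ref_db, 1) for db in level_db]

for f, db in zip(freqs_hz, normalized):
    print(f"{f:>6} Hz: {db:+.1f} dB re 1 kHz")
```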
Huh. I legit pulled up 3 different ones (with varying quality/fonts) and they were all that way. Goes to show the problem of wrong graphs going around! ^^;
Also why I’m stoked to be testing for myself so I can get comparisons normalized for my space etc
Was about to mention this.
When evaluating graphs it's very important to consider the measurement equipment used to make the graph as well as how it was compensated and what level of smoothing (if any) was used in the creation of said graph.
Comparing different graphs from different manufacturers is a surefire way to get incorrect results due to different levels of accuracy being put up against one another.
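To make the smoothing point concrete, here's a toy illustration: a crude moving average over log-spaced points stands in for the fractional-octave smoothing real measurement suites apply, and the "response" is random fake data - purely to show how much the amount of smoothing changes what a graph appears to say:

```python
# Illustrative only: heavy smoothing hides wiggles that a raw measurement shows.
import math, random

random.seed(1)
freqs = [20 * (20000 / 20) ** (i / 99) for i in range(100)]                     # 100 log-spaced points
raw = [math.sin(i / 6) * 2 + random.uniform(-1.5, 1.5) for i in range(100)]     # fake jagged response, dB

def smooth(values, width=7):
    """Crude centered moving average standing in for fractional-octave smoothing."""
    half = width // 2
    out = []
    for i in range(len(values)):
        window = values[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

smoothed = smooth(raw)
for i in (0, 25, 50, 75, 99):
    print(f"{freqs[i]:>8.0f} Hz  raw {raw[i]:+5.2f} dB   smoothed {smoothed[i]:+5.2f} dB")
```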
@@EposVox Yeah, these graphs can be a pain in the bass. The data sheet for my mic simply has a wrong graph in it. I found this out only because of the wrong low cut drawn in it, and was lucky to find the right graph in a data sheet for an older version of the mic.
And I totally share your excitement for doing your own measurements. As a physicist and a tech nerd I like these things and would love to do it myself (especially since I could get access to the equipment). ^^ I'm looking forward to your work. :)
@Saikou yep! Which is why I pointed out to be careful with the visual differences and why I’ll be trying to get consistent measurements
I agree, and made a pretty long-winded explanation of this. If the whole frequency response is raised (as shown in the graph provided in the video) then it isn't really boosted, since dB is just a measure of level. You can easily bring the level down by 5 dB and it'll look relatively flat - still not perfectly flat, though. In audio engineering, bringing the level up or down from unity gain just shifts the entire response equally. The most important thing to watch for is the difference in dB across the spectrum, as that is what determines the "flatness" of a microphone. And in that respect, the SM7B is quite close to flat.
Minute 18: one line is one number on a logarithmic scale. These are the basics - kinda easy to read. It's still ten lines for ten numbers, same as with a non-logarithmic scale.
I picked up a Rode Procaster a while back and I rank it highly. I did end up buying a used black RE20 for $280, less than 6 months after release. I love that it captures more of the lower end of my voice and I think it does great without EQ. I'm sure it could be a tad better with EQ and a multi-channel compression chain.
Kinda feel like the SkiFree guy is giving me a mastering class :) Roll-off and the proximity effect are really good things to know about.
I have never been the best at EQ or audio mixing in general, although one key thing I would like to add is to always make sure you update your EQ for your mic whenever you notice your voice changing as you age. I was just listening to an old EQ curve I used on myself back in 2018 to try and make myself sound warmer and less bright, because of how high and nasal my voice used to sound. Now I do the opposite: I just try to make my voice sound rich and punchy. Idk if it's just the RODE NT1-A, but it has a very flat response curve close to the RE20 - I barely get any bass or boomy sound on it.
Can't someone make a program that takes your raw audio with the mic you have and gives you suggestions on what to modify in processing? That would be pretty helpful.
Was the RE-20 developed for broadcast? It's amazing on bass guitar cab, woodwinds, brass, drums...as well as vocals for music or broadcast. The real trick to the RE-20 is the way it deals with proximity effect. It's still present obviously, but it's more linear than a typical cardioid mic. This makes it great on source material that has a lot of low end that you want to capture without an exaggerated proximity effect.
I’m pretty sure it was developed for broadcast originally (though it’s hard to find confirmation) but it’s widely used for various music uses too!
The RE-20 was definitely developed for broadcast, but as with any well-developed microphone, it'll work well on almost any source. Honestly speaking, microphones should not be developed for any one use, as that'll just limit the usability of the microphone *cough* the Becon Mic. Most well-developed microphones have a minor-to-mild roll-off in the low end starting at 60-80 Hz, then a flat-ish frequency response in the midrange, and finally a minor-to-mild presence boost starting in the upper mids to low highs. Overall, the frequency response (meaning the response chart) must be smooth, or the microphone can end up being harsh or dull.
@@EposVox We used it and similar mics from that company in every radio station I ever DJed or announced with!
This was a very nice video for technicians and people who know a lot about this. I hope you make a useful video for normal people who don’t have expensive mics like you or the “hertz” knowledge… and teach how to get a broadcast voice on the most popular mics (wave 3, blue yeti, etc) this is just what everybody wants. Anyway, I always watch your videos. Great channel.
I mean, this was an introduction video lol
I even point you towards other videos to learn the basics of EQing if you haven’t started looking into it yet. I even explain what the axes on a graph are lol
You won't get a 'broadcast'-level voice going DIY
until you understand and also **put into practice** basic terms and concepts:
💡Knowing what dB, impedance, capsule type, signal balancing, SPL, polar pattern, self-noise, and sensitivity are ~will~ help you to sound good - much more than guessing at or ignoring these things will.💡
...and that's just microphone terminology. Before you even think about what mic to get, you should already have treated your space/room acoustically, and have some decent studio monitors so you can accurately hear what the hell you are doing in the first place.
Thanks. I'm going to go and play with my mic and mixer for a day or three then come back and watch this again. Maybe I also need to get some better speakers or headphones. Or maybe younger ears as I'm not sure mine work as well as they did 40 years ago.
Thanks for making this! Getting a new mic and setup soon so this will be really useful!
When you talked about the RE20 and speaking at a distance from it without degradation to the tone, it reminded me of a directivity plot for loudspeakers. Take a look at a blog post by orfeosound on directivity patterns - the waterfall plots show how strongly a signal was recorded at a given frequency and angle. My very loose grasp of proximity effect via Wikipedia also mentions an angular dependence.
Thank you so much for this and the Joe Gilder 3 Rules of EQ video rec. I use a CAD E100S/MOTU M2, and some of my words always came out 'slurred' even though I said them right. I used the TDR Nova plugin, brought down the spikes, and could understand every word. I've never sounded so good. I tried 5 different mics before adjusting this.
Heck yes
Imho in some respects the full Nova GE is better than FabFilter, but all the TDR plugs are ace.
Limiter6 is another that I use in most every session.
I was shure you would mention the Blue Yeti in the condenser microphones section.
I will use this for sure.
32:20 The RE320 also has Variable-D technology.
Implemented differently, it still has proximity effect
How do I start tweaking my mic/voice? I have the problem, like everyone, that I hear myself totally differently from my recorded voice. So how do I improve it via EQ etc.? That would be a nice video.
Definitely will use this to EQ mine
Have to preface this with: I use a Marantz MPM-1000. It's a $50 mic.
Nothing wrong with that
When you stream using AAC in stereo, you need 144 kbps for very good quality. 454 kbps in AAC is identical to the original studio sound. Unfortunately, streaming isn't made for MP3.
Good thing the default for OBS and all streaming platforms is 160 kbps AAC, and Twitch lets you feed 320 kbps direct to viewers.
So…it’s completely fine. Literally a nonissue
@@EposVox Believe it or not, according to tests I made, MP3 is superior to AAC, because MP3 is a simple compressor, meanwhile AAC is a complex algorithm that causes heavy degradation of the original file. For this reason, you need 450k in AAC to reach the same level you get at 300k MP3. But 150k AAC is better than 250k MP3. MP3 is super good, but only at its highest bitrate.
So, forgive me if I misunderstood, but just about every time I heard you mention "where your voice lives", you show a graph with a smoother part and a spiky part that you point to, so I wonder if the spiky bits are supposed to represent where your voice lives??? I'll watch that other video you linked and see what other information I can gather from it to combine with what I gained from your video, for a more complete understanding of how EQing can help me. Unfortunately my ears ain't the best, so I might not have the best luck getting an accurate audio tune...
The “spiky bits” are just the loudest parts of a signal being picked up. As explained at the start, all it is is a representation of what frequencies the mic is most sensitive to
@@EposVox This is a bit misinformed, but I can understand why. Your voice technically lives across the whole spectrum. Lemme explain in layman's terms. Basically, the human ear can only hear frequencies between 20 Hz (which is super low) and 20 kHz (which is super high) at best; most people hear a range somewhere in between, narrowing as we age. Our voices produce frequencies that extend beyond those two limits, but we can't hear them, even though a comprehensive spectrum analyzer can certainly detect them. This is just the limit of human hearing.
Now in the case of "where your voice lives": it lives everywhere on that frequency graph, but in general it is most pronounced in the low-mid to mid-mid frequencies, e.g. 160 Hz - 3000 Hz. This is where the heart of our voice comes from.
The second most important part is intelligibility, or how well our diction is understood, which lives in the mid-high to low-high frequencies, e.g. 3000 Hz - 14000 Hz. This helps the audio not sound muffled or boxy, and also makes it more present or forward.
Next are the highs, which are everything above 14000 Hz; these can help us sound less closed-in or narrow. It's what we call in the industry "air" or "sparkle".
And lastly, anything below 160 Hz is mostly just candy for the voice. Most people will use a high-pass filter/low-cut to get rid of it because it can get really rumbly and boomy if not handled well. It also contributes to that 'Joe Rogan Podcast Sound', as it doesn't seem they do a lot of processing in the first place.
So our voices span the entire frequency spectrum and more, but we don't hear the extremes very often. And since everybody's voice is different, there is no one-size-fits-all solution for this. If your voice is nasally, you'll have a very different EQ approach than somebody who is very smooth or airy, and vice versa. You just need 90% of the audible frequency spectrum to sound natural and good.
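If anyone wants to hear that low-cut idea rather than argue about it, here's a minimal sketch (assuming numpy/scipy are installed) of a 2nd-order, roughly 12 dB/octave high-pass around 160 Hz applied to a stand-in signal; swap in your own recording to experiment:

```python
# Minimal sketch of "roll off the lows, keep the heart of the voice":
# a 2nd-order (~12 dB/octave) Butterworth high-pass at 160 Hz on a mono signal.
# The generated tone below just stands in for a voice take.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice_like = 0.5 * np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)  # rumble + "voice"

sos = butter(2, 160, btype="highpass", fs=SAMPLE_RATE, output="sos")
filtered = sosfilt(sos, voice_like)

print("RMS before:", round(float(np.sqrt(np.mean(voice_like ** 2))), 3))
print("RMS after: ", round(float(np.sqrt(np.mean(filtered ** 2))), 3))
```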
Ok this is a misinterpretation of what I said. To be blamed on my poor word choice, but I thought it was pretty clear that “where your voice lives” was referring to frequencies that are inherently emphasized more by your voice. I should have explained it more clearly.
Also literally every audio engineer (and person who comments on my videos to say something about it) seems to have a different benchmark at which “everything under is not needed” with numbers from 40Hz to 200Hz being cited as the ultimate truth. Based on my experience and the goals of broadcast, I stick to the range I referred to.
@@EposVox ah. That explanation makes more sense to me... I'm still going to need some work to get good with audio, but at least it's starting to make a little bit more sense, slowly but surely, the further I look into it.
**Edit** My microphone cuts out everything below 100 Hz, and also cuts everything when I'm not speaking until I speak again - all done from the software built into the wireless microphone receiver, although I believe those settings were originally turned off from the factory. I haven't noticed any glaring problems with my audio in any tests I have done with the microphone, but I definitely want to get proficient at setting it up properly, so that I can avoid any problems that might arise when using it in a personal or professional capacity that requires good-sounding audio. As I mentioned before though, my ears aren't well-tuned enough for that to come easily to me.
@@EposVox Apologies, this was a response to the commenter, not you.
And with the comment about the "ultimate truth": I generally follow that rule subconsciously for the most part. 160 Hz is just a nice area where you'll roll off most of the rumble but still keep some nice low end for vocals, with either a 6 dB or 12 dB high-pass filter. But these are just starting areas, not rules, as I break plenty of audio engineering rules daily.
Either way, if it sounds good… it sounds good. When you get down to the nitty-gritty of Audio, it becomes very subjective and most people won’t even hear or care about many of the small mistakes that make our ears bleed.
Unless that's some heavy processing, the Presonus mic sounds like a HyperX headset mic, IMO.
How are you finding your Revelator mic? My BEACN has firmly been back in the box since using it.
Great video by the way :)
It’s great! Review Wednesday
My only issue is that some people may misuse this guide. People can use this to deepen their voice, and I don't like voice changers, even if they make a voice sound more pleasing, because people just need to be themselves. People who use this to enhance bad-quality mics, on the other hand, are fine by me.
I mean nothing was stopping them from doing a bad job EQ-ing before lol
To put an end to the whole "what to do first, EQ or Compression" debate, basically, try both things. EQ first, then compress, listen, then change the order of your signal chain to compression first, then EQ.
Whatever sounds best to you and your audience, that's the right order.
Also, nobody is stopping you from using multiple instances of both - just be careful. If you need to use 5 EQs, then you are definitely doing something wrong.
I suggest trying this order:
EQ to cut out frequencies you don't want/like
Compress
EQ to boost frequencies you want/like
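A quick way to actually run that experiment offline, as a toy sketch (again assuming numpy/scipy): the "EQ" below is just a crude high-pass and the compressor is a static 4:1 curve with no attack/release - nothing like a real plugin - but it shows how to A/B the two chain orders on the same material:

```python
# Toy comparison of EQ -> compressor vs compressor -> EQ on the same signal.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = 0.6 * np.sin(2 * np.pi * 90 * t) + 0.3 * np.sin(2 * np.pi * 500 * t)  # stand-in "voice"

def eq(x):
    """Crude 'EQ': 2nd-order high-pass at 120 Hz."""
    sos = butter(2, 120, btype="highpass", fs=SAMPLE_RATE, output="sos")
    return sosfilt(sos, x)

def compress(x, threshold_db=-12.0, ratio=4.0):
    """Static 4:1 compression above the threshold; no attack/release."""
    thresh = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    over = mag > thresh
    out = x.copy()
    out[over] = np.sign(x[over]) * (thresh + (mag[over] - thresh) / ratio)
    return out

for name, chain in (("EQ -> comp", compress(eq(signal))), ("comp -> EQ", eq(compress(signal)))):
    print(f"{name}: peak {np.max(np.abs(chain)):.3f}, RMS {np.sqrt(np.mean(chain ** 2)):.3f}")
```

Same idea as the advice above: render both orders, listen, keep whichever one sounds better to you.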
2 compressors (Joe Chicarelli likes to use 3!) in series can work wonders, too.
Set the first to act as a mild limiter to catch the upper peaks ONLY, and then use a 2nd compressor at a very mild ratio [1.5:1 to 2:1] as your 'sauce' to get everything smoother and/or bouncing together in a pleasing manner. The 1st compressor helps the 2nd do its thing better because the 2nd comp doesn't need to react to any extreme peaks :)
Some single compressors come with dual stages, like dbx's 'over-easy' and FMR's 'really nice' features - these modes give a simple implementation of a 2nd compression stage. Two fully spec'd compression units are, of course, even more flexible and controllable than one unit with just a stage-2 switch.
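Here's a toy sketch of that two-stage split using a static gain curve (no attack/release, so only a rough approximation of real hardware): stage 1 as a high-ratio peak-catcher, stage 2 as a gentle 1.8:1 doing the "sauce". The thresholds and the fake speech signal are illustrative, not recommended settings:

```python
# Two static compression stages in series: a peak-catcher, then gentle "glue".
import numpy as np

def static_comp(x, threshold_db, ratio):
    """Static compression above the threshold; no attack/release envelope."""
    thresh = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    over = mag > thresh
    out = x.copy()
    out[over] = np.sign(x[over]) * (thresh + (mag[over] - thresh) / ratio)
    return out

rng = np.random.default_rng(0)
speech_like = rng.normal(0, 0.15, 48_000)   # fake speech-ish noise
speech_like[::4800] = 0.9                   # occasional sharp peaks

stage1 = static_comp(speech_like, threshold_db=-3.0, ratio=10.0)   # mild limiter: peaks only
stage2 = static_comp(stage1, threshold_db=-18.0, ratio=1.8)        # gentle 1.8:1 "sauce"

for name, y in (("raw", speech_like), ("after stage 1", stage1), ("after stage 2", stage2)):
    print(f"{name:>14}: peak {np.max(np.abs(y)):.2f}, RMS {np.sqrt(np.mean(y ** 2)):.2f}")
```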
@@shaft9000 Ah yes, the infamous "1176 into an LA-2A" trick.
It doesn't have to be those compressors specifically, but it's probably the most well-known instance of this.
Sidenote: it works well for music mixing, but it's probably too subtle for most people trying to compress voiceover/spoken word like broadcasting. Many seek that "radio compression" style, even though I feel like it's overused at this point.
Your videos are my school. I am disabled, and you, you beautiful person, you wonderful human creature, you are dope as fvck!!! Thank you so much for all you put out here.
PS: My dyslexia wrote part of that comment; it also wanted to thank you!
Glad to help :)
Is it bad I want the SM7B not for the sound of the mic, but for how the mic looks? Obviously the sound plays a small part, but it's mostly cause I think it looks cool.
I don't really knock anyone for going for looks overall. I just don't like when people (mostly meaning reviewers, less so you) start weighing looks equally with sound, or buy good-looking bad mics.
But the SM7B is a great mic, so it works out either way haha
@@EposVox Oh yeah sound is 100% more important than just looks. Guess I am lucky it has both then.
@@JohnAlzayat Total noob here, but I don't knock people for buying a mic for its looks, especially with how visual audio has gotten with YouTube and Twitch. People will see these microphones, and you will see yourself with that microphone. If I wanted to livestream with my face, I think I would choose the Shure 565SD. I honestly like the look of a handheld stage mic, and the 565SD has some tasteful flashiness, imo.
Lol well memed, and good choice of graph m’dood.
Was the Audio-Technica BP40 built for content creation or no?
Yeah I mean, broadcast is in its name
@@EposVox Ah ok, thank you. Also, thank you for helping me understand audio better - the video helped me a lot. Keep up the great work.
LOVING THE NT1 because of its flatness. EQ later~
Quality intro.
I'll have to find out what a compressor is and how it works, then rewatch this video.
Any ghosts voice? Time to start a paranormal podcast and interview some ghosts! :D
LMAOO that intro was hilarious
The Strimma said my name! Mahhhh get the camera!
🐐.
That's not a Presonus microphone... that's a value-sized can of "Axe Body Spray" you taped to your microphone arm with a Shure pop filter stuck on it!
That intro tho
19:00
Someone with a CRT
Holy fuck... you're still on about this?
Still on about…. Educating people about audio and how to make the best choices for them? Yes? I’ve been teaching it for years, no plan to stop it any time soon, why would I suddenly stop? Especially when I have new ways to show people great info.
You ok?
@@EposVox I was mostly referring to the intro to your video, in relation to your Twitter rants and fighting with the Beacn folks recently. It feels like beating a dead horse at this point.
It was a joke. It was funny, it was also easier than re-editing it with a different graph for no reason. This is 50+ minutes of educational content.
first?
Lies and slander
Use your ears. Stop using graphs.
Graphs assist using your ears, especially for people who don’t know what they’re hearing. “Use your ears” is the advice I received for the first 10+ years of my career and it was completely useless, this is tangible assistance to help improve things - like using scopes for color. Especially when most of us aren’t on studio monitors that cost thousands of dollars
@@EposVox Get better at using your ears then. A pair of decent headphones for podcasting is 50 bucks.