I use feedback regularly as my spectrum analyzer, and it has never lied to me. And since feedback is defined by a 1:1 ratio between the pressure arriving at the microphone and the pressure coming back into it (think of a short circuit in electronics, or a ring that closes on itself), it can be used to set the right volume on your amps to produce the right SPL for the event.
7:35 That's exactly what I apply all the time: I calibrate the microphone to an SPL reference (in practice my voice, because it produces 87 dB SPL at 2 centimeters), I place the microphone at the spot in the venue where I want to get 87 dB SPL, I set the console up for a feedback test, then I slowly raise the amp volume until I hear feedback and back it off a few dB. Now I know that at that spot the loudspeaker, in conjunction with the amp, is producing 87 dB SPL. The benefits of this method are many, but the most important one, in my opinion, is not amplifying useless noise, because the amps are now correctly set.
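To make the logic behind that trick concrete, here is a minimal Python sketch: at feedback onset the loop closes at unity, so the loudspeaker must be delivering the same SPL at the mic as the calibrated source did. This is a toy model, not the commenter's actual workflow; the 20 dB lumped loop loss and the 3 dB back-off are made-up illustration values.

```python
# Toy model of the unity-loop-gain calibration trick described above.
# REFERENCE_SPL and the loop loss are illustrative assumptions.

REFERENCE_SPL = 87.0   # dB SPL the voice produces at the mic capsule (2 cm)

def spl_at_mic_from_speaker(amp_gain_db: float, loop_loss_db: float) -> float:
    """SPL the loudspeaker delivers back at the mic position.

    loop_loss_db lumps together everything between console output and the
    mic: amp sensitivity, speaker efficiency, and distance loss.
    """
    return REFERENCE_SPL + amp_gain_db - loop_loss_db

def find_feedback_threshold(loop_loss_db: float) -> float:
    """Raise amp gain until the loop closes at unity (feedback onset)."""
    gain_db = -60.0
    while spl_at_mic_from_speaker(gain_db, loop_loss_db) < REFERENCE_SPL:
        gain_db += 0.5  # 'slowly raise the amp volume'
    return gain_db - 3.0  # then back off a few dB, as the comment suggests

# With 20 dB of lumped loop loss, feedback onsets at 20 dB of amp gain,
# so the method settles at 17 dB:
print(find_feedback_threshold(loop_loss_db=20.0))
```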
The literature says that in reverberant rooms, because of the many overlapping reflections and the resulting comb filtering, you have even less headroom before feedback occurs. Above a specific frequency that depends on room volume and T60, the frequency response statistically shows the same peak-to-average ratio of 12 dB (Schroeder 1954; Schroeder & Kuttruff 1962; Ahnert & Reichardt 1981). Kuttruff recommends even more headroom on top of that for speech (5 dB) and music (12 dB) (Kuttruff & Hesselmann 1976). This results in a deduction in gain before feedback of 17 dB for speech and 24 dB for music in reverberant rooms. So the example in the video is only applicable to free field.
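To plug your own room into those cited numbers, here is a minimal Python sketch. The Schroeder frequency formula f_s ≈ 2000·sqrt(T60/V) is the standard large-room form; the example hall (2000 m³, T60 = 1.5 s) is made up.

```python
from math import sqrt

def schroeder_frequency(t60_s: float, volume_m3: float) -> float:
    """Schroeder's large-room limit: above this frequency the room's
    modal response is statistically dense (f_s = 2000 * sqrt(T60 / V))."""
    return 2000.0 * sqrt(t60_s / volume_m3)

PEAK_TO_AVERAGE_DB = 12.0  # statistical peak-to-average ratio above f_s

def gbf_deduction_db(program: str) -> float:
    """Total headroom to subtract from the free-field gain before feedback."""
    extra = {"speech": 5.0, "music": 12.0}[program]  # Kuttruff & Hesselmann
    return PEAK_TO_AVERAGE_DB + extra

# Example: a 2000 m^3 hall with T60 = 1.5 s
print(schroeder_frequency(1.5, 2000.0))  # ~54.8 Hz
print(gbf_deduction_db("speech"))        # 17.0 dB
print(gbf_deduction_db("music"))         # 24.0 dB
```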
Thanks a lot for sharing this video! At first I was too lazy to watch it because it lasts an hour, but once I started watching I was hooked. It's exactly what I need at this moment. I have a project precisely to improve the GBF of the meeting space where I work, and all this information will help me explain it better to my colleagues and superiors.
On the question of why systems feed back when the band stops playing: I believe the most likely "culprit" is that all of your various compressors relax, effectively increasing the overall system gain. Remember, a compressor is a gain REDUCTION tool. When the signal goes below the threshold, it stops reducing gain. Most of us run compressors on individual channels and more on the various mix buses. All of those let loose when the signal level falls... overall gain goes up, feedback ensues.
Gates and plugins like PSE can help combat this (along with mixing 😂)
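To see why relaxing compressors raise loop gain, here is a toy static compressor curve in Python (the threshold and ratio are arbitrary example values, not anything from the video):

```python
# Sketch of why loop gain rises when the band stops: a downward compressor's
# static gain. Values are illustrative assumptions.

def compressor_gain_db(input_db: float, threshold_db: float = -20.0,
                       ratio: float = 4.0) -> float:
    """Gain applied by a simple static compressor (no attack/release).

    Above threshold it reduces gain; below threshold it applies none,
    so the channel passes at full gain again.
    """
    if input_db <= threshold_db:
        return 0.0  # signal fell below threshold: gain reduction released
    over = input_db - threshold_db
    return -(over - over / ratio)  # gain reduction in dB (negative)

print(compressor_gain_db(-6.0))   # band playing: -10.5 dB of reduction
print(compressor_gain_db(-40.0))  # band stops: 0 dB -> loop gain jumps up
```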
That actually makes a lot more sense! I don't know how the decorrelation of the microphones would play a role. I mean, as he demonstrated, the feedback doesn't depend on the source signal level, but rather on the system amplification and frequency response. The source signal is not part of the equation (Ahnert & Reichardt 1981). So your explanation would fit that theory.
It's a natural law that the average talker holds the microphone at belly-button height 😅😂😂
This comment is MVP :D
Feedback doesn't occur when the sound from the loudspeaker reaches the microphone at the same level as it would directly from the person speaking. It happens when the waves from the loudspeaker can get back to the loudspeaker via the microphone at a time such that their peaks match the peaks already coming out of the loudspeaker. It's that constructive interference that causes the increase in volume of that frequency, and since it continues on the next round, that frequency builds to the familiar squeal. Being the same level doesn't matter; the waves being in sync does.
Hey Steven, thanks for checking out the video and your comment. It's great that you bring up time, but I'm afraid we can't have one without the other. Since summation is based on magnitude and phase, magnitude is the tiebreaker. If the waves as you describe them arrive at the right time but are 40 dB lower than the original signal, then the possibility of constructive interference is near zero. So while the signals might not need to be exactly at unity to ramp up to feedback, it's gotta be close.
There's an AES paper called "Using a Speech Codec to Suppress Howling in Public Address Systems". I found this quote very helpful:
"In other words, a linear-time invariant model of a public address system is stable if the amplifier (including audio DSP) is stable, if the room is stable, and if at any radial frequency ωh where a sinewave traveling around the loop will perfectly constructively interfere with itself (i.e. the phase response is an integer multiple of 2π around the loop), then the magnitude response around the loop must be less than unity."
Hi Nathan, and thanks for the response. The AES paper is basically stating the Barkhausen criterion: for a stable state at a given frequency, the delay around the loop must be a whole number of periods of that frequency (to keep them in phase), and the gain around the loop for that frequency must be = 1 (i.e. unity gain, +0 dB, so they get neither louder nor quieter). Maybe a misunderstanding of that is indeed at the root of the problem here. The Barkhausen criterion is for a stable result, so not a feedback howl but a continued singing that doesn't increase in volume. For audio feedback purposes, we know having more gain just makes the feedback happen faster and louder, so we can change the Barkhausen equation from 'gain = unity' to 'gain >= unity'. (The paper says 'less than unity' for 'stable', which is incorrect for the normal use of 'stable' in Barkhausen, but maybe just an unfortunate choice of a word to mean 'not feeding back'.)
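In symbols (a sketch in my own shorthand, not the paper's notation: L(jω) is the total loop transfer function from the mic capsule through console, amp, loudspeaker, and room, back to the capsule):

```latex
% Barkhausen criterion for a sustained, non-growing tone at \omega_h:
\lvert L(j\omega_h)\rvert = 1,
\qquad \arg L(j\omega_h) = 2\pi n,\quad n \in \mathbb{Z}
% For runaway (howling) feedback, relax the magnitude condition:
\lvert L(j\omega_h)\rvert \ge 1
```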
Note that the unity here has nothing to do with the original input signal, but only with the gain of the whole system. It really doesn't matter what the volume of the talker is. You don't even need a signal to start feedback, as we've all seen in practice: if the gain around the system at a given frequency is > unity, and that frequency stays in phase, then simply unmuting the mic means it can start to feed back. The background noise (either acoustic or electrical) is enough that the feedback frequency will be selectively amplified by the system.
We can show this in an anechoic chamber, in total silence. Or rather more easily in a DAW like Reaper, by setting up a white noise generator at -85 dB (the noise floor of a Midas M32) and a delay of 5 ms (so 200 Hz and 1.72 m - about the distance from a singer's mic to a wedge monitor). That -85 dB is a lot less than the -40 dB you mention, and isn't at a specific frequency (pink noise would be better from some points of view, but here we want electrical noise, and that's white). If we set the whole system so it's just below unity, and give a tiny boost of 0.2 dB at any frequency matching a multiple of the delay length just to get things started, we get feedback at that frequency. If we boost a frequency that would be out of phase over that delay, we don't get feedback. We can even turn off the white noise after a few seconds, before you can even hear the boosted frequency, and that frequency will continue to grow and feed back.
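For anyone who would rather not fire up Reaper, the same experiment can be approximated numerically. This Python sketch is a toy model under my own assumptions (48 kHz sample rate, the whole loop lumped into one gain and a 5 ms delay), not a recreation of the plugin chain described above:

```python
# Numerical stand-in for the Reaper experiment: white noise at -85 dBFS
# recirculating through a delayed feedback loop.
import numpy as np

FS = 48_000
DELAY = 240  # samples: 5 ms at 48 kHz, so in-phase at multiples of 200 Hz
rng = np.random.default_rng(0)

def run_loop(loop_gain_db: float, seconds: float = 2.0) -> float:
    """Feed -85 dBFS white noise through the loop and return the peak
    level of the final 100 ms, in dBFS."""
    g = 10 ** (loop_gain_db / 20)
    noise = 10 ** (-85 / 20) * rng.standard_normal(int(FS * seconds))
    buf = np.zeros(DELAY)            # circular delay line
    out = np.empty_like(noise)
    for n, x in enumerate(noise):
        y = x + g * buf[n % DELAY]   # input plus the loop's delayed return
        buf[n % DELAY] = y           # comes back DELAY samples later
        out[n] = y
    return 20 * np.log10(np.abs(out[-FS // 10:]).max() + 1e-12)

print(run_loop(-0.2))  # just below unity: stays down near the noise floor
print(run_loop(+0.2))  # just above unity: grows ~0.2 dB every 5 ms pass
```

With the loop 0.2 dB above unity, the level climbs roughly 40 dB per second out of the noise floor, exactly the runaway behaviour described above; 0.2 dB below unity, it never leaves the floor.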
The video is simply incorrect in saying (many times, but e.g. slide at 8:00) that "Feedback happens when the sound from the loudspeaker arrives at the microphone at the same level as the sound from the talker". He uses that to mean ongoing level, like a VU meter would show, but even if he meant an instantaneous level of a single sample (which would take into account the phase of the two signals, as he points out briefly later), it's still incorrect. Even if we change it to be "instantaneous level of any of the sine waves which can be considered to form the sound", and require that it continues, then we still don't have feedback: we need >= unity gain around the system.
I imagine there's no actual difference in understanding here (or at least I hope not), and it's just that Jason's attempt to formulate it has crystallized on a phrase that is actually incorrect, rather than just being incomplete. The level from the talker does not matter. Indeed, if we think of feedback as being nasty and loud, and wanting to avoid having equality with that level, Jason's phrase gives the impression that having the level from the talker lower would help stop feedback. As we know, the opposite is true - even if the instinctive reaction of most talkers on hearing feedback is to move the mic away from their mouth!
Your best friend when the band stops playing and everything starts feeding back is the most underrated tool: the gate.
Ok, but what are the odds that a person qualified to be employed as a sound engineer is going to show up to work & turn on a system in which the microphones are not going to be cardioid, the speakers are not already positioned as optimally as the house will accommodate & the source is not already at the optimum proximity to the microphone?
Typically, when a person qualified for a position as a sound engineer resorts to using EQ for stabilization, all the variables addressed in this video are already as optimum as they are going to get &, essentially, any stabilization issues should only pertain to the stage monitors, which ideally should be a secondary system isolated from the FOH system. But even when they are a subsystem of FOH, the compromise in sonic quality for the sake of stabilization is tolerated.
The fact of the matter is, in a professional real-world scenario, an engineer IS going to need to apply EQ in an effort to achieve stability, but it should only pertain to the monitor system or subsystem...
Ideally, aside from MAYBE a little roll-off at the very top &/or very bottom of the spectrum & "flavoring to taste", if you will, the FOH EQs should be "flat"...
Of course, the amount one may need to vary from "flat" to "flavor to taste" is relative to the inherent acoustic qualities of the venue...
Or, if outdoors, the pertinent atmospheric conditions.
That all being said...
Three words: "In" "Ear" "Monitors"...
"GBF" problem 𝗌̶𝗈̶𝗅̶𝗏̶𝖾̶𝖽̶...eliminated.
Hi metalhead. Thanks for your comment. I don't think Jason would spend his time talking about this stuff if he didn't think it was important; he sees a lot of his students doing it. I can also corroborate that I see lots of people reach for EQ before placement, aim, alignment, and the other tools we have available.
Another question is the issue of multiple mics. The more mics, the lower the available gain. Isn't it something like every time you double the number of open mics, you lose 3 dB (or is it 6?) of gain before feedback?
Also interested in this, as multiple open mics is a normal scenario for mixing bands x
Yes, doubling the number of open mics increases the summed signal level by 3 dB, and thus decreases gain before feedback by the same amount.
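Expressed as a formula, this is the usual NOM (number of open mics) rule of thumb, assuming equal-level contributions that sum in power:

```python
from math import log10

def nom_gbf_loss_db(open_mics: int) -> float:
    """Gain before feedback lost to multiple open mics: 10 * log10(NOM)."""
    return 10 * log10(open_mics)

print(nom_gbf_loss_db(2))  # ~3.0 dB: each doubling costs 3 dB
print(nom_gbf_loss_db(8))  # ~9.0 dB for eight open mics
```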
You might want to use the notch method on a monitor on stage, but clearly only because it affects the feedback loop while making no effective change for the listener. Each monitor can be independently adjusted, and hard limiting the monitor would really seal the deal.
Guitars with mics can be used with ISO cabs, plexi shields for drummers, etc., before you ever really need to notch the PA channels... and if you did, you might only need to notch the 120s at the bottom of the tower.
Thanks for such a great video! I’m just starting to watch it. What about when the person on stage is whispering?
Hey jthunderbass1, tell me more.
Hi Nathan,
Do you have a link where I could access this simulator?
Unfortunately, no. You might reach out to Jason directly or on his YT channel, but I think the last time I asked him he said it was not up.
Sensational!
So we can give headphones to the listeners and make the gain 200 dB. What a wonderful world!
Thank you.
Make a video about car audio measurements with a single microphone 🎤 using Smaart, and which settings to keep while taking a measurement in a car.
4 * 1.67 = 6.68
That's a typo: it should be 1.167 (the 7/6 in the previous line). The other lines work then.
@nathanlively Thanks a lot for this material, I got a few eye-opening things from it. Also, a big thanks to Mr Jason Romney for sharing this unique approach, and for sharing their whole book for free. Please keep it up, it's real value for sound nerds like me who are interested in how and why things work. You're awesome!