This deserves more views. Extremely helpful!
You talk about all the points that lectures and even books sometimes miss. Well done, sir. I'm waiting for a video on PA linearity and saturation, and their effects on ACLR, EVM, etc.
Thanks for your comment, and the suggested topics. I've added those to my "to do" list (although it's getting quite long, ...)
You are a born teacher! Thank you so much!
Glad it was helpful!
Thank you so much for the clear explanation (as always in your videos). Are Bit Error Rate and Bit Error Ratio the same? Thanks
I've never heard anyone use the term "Bit Error Ratio".
Thank you sir, your courses are very helpful. In a previous video about the relation between error probability and the Q function (which was very clear, by the way), the Q function form representing the probability of error is different from the one you use here for the BER. How do we get to the form related to the SNR?
This video might help: "What are SNR and Eb/No?" ruclips.net/video/bNYvXr6tzXQ/видео.html
No, I was wondering how to find the relationship between Q((a-mean)/sigma) and Q(sqrt(Ed/N)).
@@iain_explains And that formula differs between antipodal and non-antipodal signalling.
Not sure if this helps, but it might be worth watching this video: "How are erf(.), Q(.), and Gaussian Tails Related?" ruclips.net/video/6VBRoJj7aJQ/видео.html
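The identity covered in that video, Q(x) = (1/2) erfc(x/sqrt(2)), can be checked numerically with just the Python standard library (a quick sketch):

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x) = P(N > x) for N ~ N(0, 1),
    computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Spot-check against known values of the standard normal tail.
print(q_func(0.0))  # 0.5 (half the probability mass lies above the mean)
print(q_func(1.0))  # ~0.1587
```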
I'm looking forward to seeing the video on Trellis encoding.
Here's the link to the Trellis Coding video: ruclips.net/video/rnjy4_gXLAg/видео.html
I have a long-standing question: What is the relationship between SER, as a function of SNR, and Channel Capacity, again, as a function of SNR? Suppose the channel capacity in a given SNR0 is 10 Mbps, meaning in that SNR0 we can send 10 Mega bits every second "error-freely". However, if in that SNR0, the SER is 10^-5, it means we will have some error! I guess this "paradox" is related to channel coding, but I do not know how.
I guess this video is an answer to my question: ruclips.net/video/P0WY96WBUyA/видео.html
Great question. This is not a "paradox" though. If, at a certain SNR, the SER is 10^-5, then it means that you are either sending at a rate that is above the capacity, or you are using a modulation-and-coding scheme that is not "optimal". You can think of drawing a box around the transmitter, channel, and detector/receiver, and then modelling that as a single "overall channel" that is a binary-symmetric-channel (BSC), with cross-over bit-error probability given by the SER of 10^-5. Then you can calculate the capacity of that BSC. Perhaps this video will help: "What are Channel Capacity and Code Rate?" ruclips.net/video/P0WY96WBUyA/видео.html
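The "overall channel" argument above can be made concrete: a BSC with crossover probability p has capacity C = 1 - H_b(p) bits per channel use, where H_b is the binary entropy function. A quick sketch, using the SER of 10^-5 as the crossover probability:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits; H_b(0) = H_b(1) = 0 by convention."""
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

# With an SER of 1e-5 treated as the BSC crossover probability,
# the capacity is only slightly below 1 bit per channel use.
print(bsc_capacity(1e-5))  # ~0.9998
```

So the residual errors cost only a small fraction of the raw rate, which an outer code can recover.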
Great Video!
Glad you enjoyed it
Can you do one on how BER and SNR are related as well? Thanks.
Thanks for the suggestion. I've just uploaded a video on this topic. Check it out: ruclips.net/video/vtJ6mAy3xMc/видео.html
@@iain_explains Thanks a lot Iain, that video really helped.
I am a bit confused with the SER. Aren't P(N>1) and P(|N|>1) overlapping probabilities? If so, counting the overlap twice sounds incorrect to me. Can you please elaborate on that a bit?
The term P (N>1) relates to the symbols that are at the levels -3 and +3, and the term P(|N|>1) relates to the symbols that are at the levels -1 and +1. You need to calculate the probability of making a symbol error for each of the possible symbols, and then multiply by the probability of sending that signal, and then add them together. Since there is symmetry in this case, adding the four terms (each multiplied by 1/4) is the same as writing just the two terms I showed (where the multiple is 1/2).
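That symmetry argument gives SER = (1/2)P(N>1) + (1/2)P(|N|>1) = (3/2)Q(1/sigma). A Monte Carlo sketch to check it, assuming a 4-level constellation at {-3, -1, +1, +3} with nearest-level detection and Gaussian noise of standard deviation sigma (the value sigma = 0.5 is chosen just for illustration):

```python
import math
import random

def q_func(x):
    """Q(x) = P(N > x) for standard normal N."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def simulate_ser(sigma, n_trials=200_000, seed=1):
    """Estimate SER for 4-PAM levels {-3,-1,+1,+3} with nearest-level decisions."""
    rng = random.Random(seed)
    levels = [-3, -1, 1, 3]
    errors = 0
    for _ in range(n_trials):
        s = rng.choice(levels)                      # equally likely symbols
        r = s + rng.gauss(0.0, sigma)               # add Gaussian noise
        decided = min(levels, key=lambda lv: abs(r - lv))  # minimum-distance decision
        if decided != s:
            errors += 1
    return errors / n_trials

sigma = 0.5
analytic = 1.5 * q_func(1.0 / sigma)   # (1/2)Q(1/sigma) + (1/2)*2Q(1/sigma)
print(simulate_ser(sigma), analytic)   # the two estimates should agree closely
```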
@@iain_explains Thanks for the clarification. I think my mistake was that I assumed the same noise variance can be applied/considered for all symbols.
Well, the same noise variance _is_ considered for all symbols.
@@iain_explains Now I feel just stupid :D.... If we assume the noise variance is the same for all symbols, then we can just have one probability for |N|>1, regardless of the symbol. If the question is too dumb, feel free to ignore it.
Then BER and SER are not the same? When I try this in MATLAB, both give the same result, with an error count of 3. Suppose A = [1 0 0; 0 1 0] and B = [0 1 0; 1 1 0]. Both biterr and symerr give the same result. Here you explained they are different; how come?
Sorry, I'm not sure what your example is relating to. However, I can tell you that if the digital modulation is binary, then the BER and SER will be the same, since there is only one bit per symbol.
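The point that BER and SER coincide for binary modulation can be seen with a quick simulation (a sketch in plain Python rather than MATLAB's biterr/symerr, using 2-PAM with one bit per symbol):

```python
import random

def count_errors(n_symbols=10_000, sigma=1.0, seed=7):
    """2-PAM (BPSK): one bit per symbol, so every symbol error IS a bit error."""
    rng = random.Random(seed)
    bit_errors = symbol_errors = 0
    for _ in range(n_symbols):
        bit = rng.randint(0, 1)
        tx = 1 if bit else -1            # map bit to antipodal level
        rx = tx + rng.gauss(0.0, sigma)  # add Gaussian noise
        decided_bit = 1 if rx > 0 else 0
        if decided_bit != bit:
            bit_errors += 1
            symbol_errors += 1           # identical by construction: 1 bit/symbol
    return bit_errors, symbol_errors

b, s = count_errors()
print(b == s)  # True: with one bit per symbol, BER and SER coincide
```

With more than one bit per symbol (e.g. 4-PAM) the counts diverge, because a symbol error need not flip every bit in the symbol.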
P(1
Yes, but two bits are being transmitted per "symbol", and only one of them is in error in that situation (so that's why the probability of "symbol error" needs to be multiplied by 1/2 in order to get the probability of "bit" error).
@@iain_explains First: Thank you for your reply
Second: According to my understanding, P(1
I think you're not quite understanding. If a 00 was sent, and a 01 was received, this is because the noise was in the range 1 < N < 3 (the sent level is -3, and the received level fell in the decision region of 01, between -2 and 0).
Thank you
You explained the idea in an excellent way.
How is this 5?
I assume you mean the number 5 in the BER expression. That term corresponds to when a 00 is sent and a 10 is received. In this case, the "sent symbol" level is -3 and the "received symbol" level is greater than +2, so the noise must have been greater than 5.
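That probability can be evaluated directly: with the sent level at -3 and the decision region for 10 above +2, the error event is N > 5, i.e. P = Q(5/sigma). A numeric sketch (sigma = 1 is an assumed value, purely for illustration):

```python
import math

def q_func(x):
    """Q(x) = P(N > x) for standard normal N."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

sigma = 1.0               # assumed noise standard deviation, for illustration
# 00 is sent at level -3; deciding 10 requires the received level to exceed +2,
# i.e. -3 + N > 2, so the noise must exceed 5.
p_00_to_10 = q_func(5.0 / sigma)
print(p_00_to_10)         # a very small probability, ~2.9e-7
```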