This was an incredibly helpful video for a confusing topic. Thank you for this!
Glad it was helpful!
I really love your videos on signals and system. Great work
Glad you like them!
Very different clever way to describe capacity. Thank you professor
Glad you liked it!
Thank you so much for your dedicated lectures
My pleasure. Glad you are finding them helpful.
Iain, Excellent and clear explanation. Thank you for your work.
Glad it was helpful!
Thank you for the nice video. I have one question: why doesn't Shannon's channel capacity formula consider the modulation scheme?
The Capacity is the ultimate highest rate at which information (eg. data) can be sent without errors. If you impose restrictions, such as specifying a particular modulation scheme, then it won't be "the capacity". This video might help: "What is a Gaussian Codebook?" ruclips.net/video/Gx3rq5QERPw/видео.html
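To make that distinction concrete, here is a rough Python sketch (my own illustration, not from the video) that evaluates the unrestricted AWGN capacity C = W log2(1 + SNR) and compares it with the ceiling that one fixed modulation such as QPSK imposes; the bandwidth, symbol rate and SNR values are just assumptions for the example.

```python
import numpy as np

# Hedged illustration: unrestricted AWGN capacity vs. the hard ceiling of one
# fixed modulation scheme. W, the symbol rate and the SNR grid are assumed values.
W = 1e6                        # channel bandwidth in Hz (assumed)
snr_db = np.array([0, 10, 20, 30])
snr = 10 ** (snr_db / 10)      # linear SNR

capacity = W * np.log2(1 + snr)    # bits/s with no modulation restriction
qpsk_ceiling = 2 * W               # QPSK at roughly W symbols/s carries at most 2W bits/s

for db, c in zip(snr_db, capacity):
    print(f"SNR {db:2d} dB: capacity ~ {c/1e6:5.2f} Mbit/s, QPSK ceiling = {qpsk_ceiling/1e6:.1f} Mbit/s")
```

At high SNR the unrestricted capacity keeps growing with log2(1 + SNR), while the fixed-modulation rate saturates, which is why a specified modulation scheme cannot be called "the capacity".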
Really good explanation
Glad it was helpful!
What would be the channel capacity formula for a MISO system? Is it the number of antennas at the input multiplied by the basic SISO channel capacity, as follows: C = mt * bandwidth * log2(1 + SNR), where mt = the number of antennas at the input (1, 2, 3, ...)? Am I right?
Sometimes people use 1/2 before the log instead of W. Are there any implications of this?
The most common case where the "1/2" appears is for a system where you are transmitting samples of a continuous random process signal, X(t), sampled at the Nyquist rate of 2W samples per second (ie. twice the highest frequency component W of the random process), where the transmission channel has a bandwidth of W (matching the signal bandwidth) and additive white Gaussian noise (AWGN), and where the capacity is written in units of "bits per transmission" or "bits per transmitted sample". The general capacity formula (the one with the W out the front, that I wrote at the 14 min mark of the video) is in units of "bits per second". To convert to "bits per transmitted sample" you divide by 2W (since there are 2W samples per second), which leaves a "1/2" term out the front.
@iain_explains Oh, I did not know that it was even that deep! Thank you so much
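For anyone who wants to see that conversion numerically, here is a quick sketch (my own, with assumed values for W and SNR) checking that dividing the bits-per-second formula by the 2W samples per second leaves the familiar (1/2) log2(1 + SNR) bits per sample.

```python
import numpy as np

# Quick check of the bits/second -> bits/sample conversion described above.
# W and SNR are assumed example values.
W = 4e3        # bandwidth in Hz
snr = 100.0    # linear SNR

c_per_second = W * np.log2(1 + snr)      # C = W*log2(1+SNR), in bits per second
c_per_sample = c_per_second / (2 * W)    # divide by 2W samples per second

print(c_per_sample)                      # ~3.33 bits per transmitted sample
print(0.5 * np.log2(1 + snr))            # same value from (1/2)*log2(1+SNR)
```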
Very helpful, thank you, Iain.
You're welcome. Glad it was helpful!
Very helpful videos.. thank you so much...
Most welcome!
You sir are a saint
Thanks. That's high praise indeed! Glad you found the video helpful.
It was so helpful. Thank you so much
Glad it was helpful!
Firstly, thanks Iain for this (and the other) awesome content! The style is great for me at least, as I'm a visual learner. The current thing I am trying to understand is: with an AWGN channel and WGN codes, is the general idea the same - to have the largest 'Hamming' style distance between the codes? And then from that point, if we took long codes (and some FPGAs), what's preventing us from creating crazy n-order modulation schemes to claw back what we paid in the time domain? Cheers, Heath
Yes, for your first question. For the second question, the issue is delay. Most mobile/wireless standards define multiple modulation-order & code-rate pairs. If the SNR is high enough, then the modulation order can be very high. When there are symbol errors though (due to either low SNR or from using a modulation order that is too high) the code rate needs to be lower, since more redundancy is needed to overcome the errors. In general this means longer codewords are needed, which in general either requires more bandwidth, or more delay.
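As a rough illustration of that modulation-order / code-rate trade-off, here is a short sketch with made-up pairs (not taken from any actual standard) showing how many information bits per symbol each combination delivers.

```python
import math

# Made-up modulation-order / code-rate pairs, in the spirit of the MCS tables
# discussed above (not copied from any real standard).
pairs = [
    ("QPSK, rate 1/3", 4, 1/3),
    ("16-QAM, rate 1/2", 16, 1/2),
    ("64-QAM, rate 3/4", 64, 3/4),
    ("256-QAM, rate 5/6", 256, 5/6),
]

for name, M, rate in pairs:
    info_bits_per_symbol = math.log2(M) * rate   # coded bits per symbol, scaled by the code rate
    print(f"{name}: {info_bits_per_symbol:.2f} information bits per symbol")
```

The higher-order entries only pay off when the SNR supports them; otherwise the extra symbol errors force a lower code rate, which is exactly the delay and bandwidth cost described in the reply above.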
Thanks, I think the channel capacity for the BSC is 1 + p log2(p) + (1-p) log2(1-p)
@iain_explains The graph that you discuss is for the expression in the comment, not for the one written on the paper.
Darn it, yes, you're right. I must have had a mind blank when I wrote the equation. Thanks. I'll add a comment into the description.
Hi Iain. Many thanks for the great video. I have one question: in the 5G MCS table, MCS 10 corresponds to a target code rate of 340/1024. This means that for 340 data bits we will use 1024 coded bits. Why did the people who make the standard decide to go with 1024 (instead of 512 or 2048, for example)? Is there any mathematical reason behind this, except that it is 2^10?
It's really a trade-off that depends on many factors, including receiver sensitivity, transmit power level, distance from the access point (WiFi) or base station (5G), level of interference (5G) or chance of packet collision (WiFi), and channel coding rate and decoding power. There's certainly no fundamental "mathematical" reason for the choice.
Hello, I would like to add one more point here. The reason for choosing a 2^x factor is that it is easier to handle in fixed-point DSP. To store a code rate of 0.332 in a fixed-point processor, you need to make it an integer by multiplying the value by 2^x. And the reason for selecting 1024, in my view, is that it is the value closest to 1000. So, simply by looking at the value 340, you can see that the actual code rate is approximately 340/1000 ≈ 0.34, which is close to 340/1024 ≈ 0.332.
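To illustrate that fixed-point point, here is a small sketch (my own, with an assumed block length) showing that a power-of-two denominator like 1024 turns the rate calculation into an integer multiply and a bit shift.

```python
# The 340/1024 example in fixed point: a power-of-two denominator means the
# division is just a right shift. The 2048-bit coded block length is an assumed
# example, not a value from the standard.
DENOM_BITS = 10
DENOM = 1 << DENOM_BITS          # 1024

print(340 / DENOM)               # 0.33203125, i.e. roughly 340/1000 = 0.34

coded_block_bits = 2048          # assumed coded block size for the example
info_bits = (340 * coded_block_bits) >> DENOM_BITS   # multiply then shift, no divide
print(info_bits)                 # 680
```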
The same thing is taught by our teacher, Sumit.
Very helpful sir, but I feel your voice is somewhat low. Could you increase the volume?
Interesting. I just listened to it again, and the audio level is fine on my devices. Maybe you could increase the volume on your device.
What is the minimum distance?
Codewords can be viewed as vectors in multidimensional space (eg. in the most basic case, the number of dimensions equals the length of the codeword). Therefore a distance metric can be defined between any two codewords (eg. Euclidean, or Hamming). The "minimum distance" is the smallest distance amongst all the possible codeword pairs in the codebook.
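As a concrete (toy) example of what that means, here is a short sketch computing the Hamming distance between codewords and the minimum distance over a small made-up codebook.

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# Toy codebook, made up for illustration only.
codebook = ["00000", "01011", "10101", "11110"]

d_min = min(hamming_distance(c1, c2) for c1, c2 in combinations(codebook, 2))
print(d_min)   # 3: the smallest distance over all codeword pairs
```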
The channel capacity for the BSC is 1 + p log2(p) + (1-p) log2(1-p)
If p
Yes, you're right. Sorry about that. I must have had a mind blank when I wrote the equation. Thanks. I've added a comment into the description below the video.
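For reference, here is a quick sketch (my own) evaluating that BSC capacity expression, C(p) = 1 + p log2(p) + (1-p) log2(1-p), which is 1 minus the binary entropy of the crossover probability p.

```python
import numpy as np

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p, in bits per use."""
    if p in (0.0, 1.0):
        return 1.0                    # deterministic channel: no uncertainty
    return 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p = {p:.2f}: C = {bsc_capacity(p):.3f} bits per channel use")
```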
The minimum distance between 01 and 10 is 2; maybe you mean Gray coding?
I'm not sure what you mean exactly, sorry. I think you're asking whether I am talking about the Hamming Distance. If so, then the answer is, yes. The Hamming Distance between 01 and 10 is exactly 2 bits (both the first bit and the second bit are different, ie. there is a distance of 2 bits between the two vectors).
I think it's Gaussian because of the central limit theorem.
Well, no, that's not actually the case, sorry. The CLT shows that the limiting distribution will be Gaussian, when multiple random variables are added together, in certain cases. It doesn't make any claims about optimality of coding choices.
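If it helps, here is a quick simulation (my own) of what the CLT does say: sums of many independent non-Gaussian random variables end up looking Gaussian. It demonstrates nothing about whether a Gaussian codebook is optimal; that conclusion comes from the capacity argument itself.

```python
import numpy as np

# Sum many independent uniform random variables and compare the empirical moments
# with the Gaussian limit the CLT predicts (mean 0, variance n_terms/3).
rng = np.random.default_rng(0)
n_terms, n_trials = 50, 100_000
sums = rng.uniform(-1, 1, size=(n_trials, n_terms)).sum(axis=1)

print(f"sample mean ~ {sums.mean():.3f}, sample variance ~ {sums.var():.2f}, CLT prediction = {n_terms/3:.2f}")
```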