Thanks for the video!
I am a bit confused about the "number of samples" e.g. at 6:47 and later. From my understanding, Nyquist only tells us about minimum required sampling _rates_ to recover the original signal, but nothing about actual hard _numbers of samples_ (since the theorem assumes infinite-time signals)? Also, why would it tell us about the number of samples found in such an interval, shouldn't that depend on the sampling rate as well? And would t_c then not be more akin to a rate (as in t_c complex samples required per symbol to decode) instead of an absolute number of samples? You then further equate the number of samples with the number of symbols, but do we not need a large number of samples to recover one symbol through its modulated waveform? So why can we equate them?
Another thing I am confused about is channel hardening: Why is the quantity |g|^2 / M a sensible quantity to look at, specifically the division by M? Is there any physical significance to this? The channel itself is still described by |g| which has unit variance, is it not?
The channel capacity is achieved when transmitting infinite-time signals, so this is aligned with the sampling theorem. We use the sampling theorem "backwards" in the sense that we want to create a data-carrying signal that has bandwidth B by letting the data determine the signal samples. We take one data symbol and put it as a sample of the signal to be generated. When sampling at the Nyquist rate, we can choose the samples freely since any sequence of samples results in a signal with bandwidth B.
You are right that the Nyquist sampling rate gives the minimum number of samples per second needed to represent the signal, so one can sample it more quickly. Suppose we sample at twice the Nyquist rate; then we can only choose every other sample freely. The intermediate samples are deterministically determined by the ones that we choose, because if they don't have the right values, the bandwidth will become larger than B. So even if we oversample, we cannot squeeze more data into a desired bandwidth B.
You are right that practical signals are not bandlimited in the strict sense that the sampling theorem requires, so we need to use oversampling at the receiver. However, the transmitter still sends information at (roughly) the Nyquist rate. We often use root-raised cosine pulses when generating the signals instead of the sinc function, which results in sending fewer than B symbols per second.
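Here is a minimal numerical sketch of this "backwards" use of the sampling theorem (the bandwidth, number of symbols, and QPSK alphabet are just assumed example values, and ideal sinc pulses are used instead of root-raised cosine):

```python
import numpy as np

# Sketch: data symbols are placed as the Nyquist-rate samples of the transmit
# signal, so the sinc-interpolated signal is bandlimited to B by construction.
# B, N, and the QPSK alphabet below are assumed example values.
rng = np.random.default_rng(0)
B = 1e6                                   # assumed bandwidth in Hz
N = 64                                    # assumed number of data symbols
symbols = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

oversample = 8                            # receiver-side oversampling factor
t = np.arange(N * oversample) / (oversample * B)   # fine time grid
k = np.arange(N) / B                      # Nyquist-rate sample instants k/B
# s(t) = sum_k symbols[k] * sinc(B * (t - k/B))
s = np.array([np.sum(symbols * np.sinc(B * (ti - k))) for ti in t])

# At the Nyquist instants, the signal equals the data symbols exactly; the
# intermediate (oversampled) values are fixed by the interpolation and cannot
# carry extra data, as explained above.
print(np.allclose(s[::oversample], symbols))  # True
```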
The quantity |g|^2/M is somewhat made up to show an essential property. One cannot study the asymptotic behavior of |g|^2 because it diverges. The idea is that E{|g|^2} = M, possibly multiplied by a scaling factor. Hence, the quantity is the ratio between the channel gain and its average, which converges to 1 under channel hardening. So even if there are random variations in the channel, they are small compared to the average value.
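To see the hardening numerically, here is a small simulation sketch assuming i.i.d. Rayleigh fading, where g is the M-dimensional channel vector so that E{|g|^2} = M (the values of M and the number of realizations are arbitrary):

```python
import numpy as np

# Channel hardening sketch under assumed i.i.d. Rayleigh fading: g ~ CN(0, I_M),
# so E{|g|^2} = M and |g|^2 / M should concentrate around 1 as M grows.
rng = np.random.default_rng(0)
realizations = 10_000
for M in [1, 10, 100, 1000]:
    g = (rng.standard_normal((realizations, M)) +
         1j * rng.standard_normal((realizations, M))) / np.sqrt(2)
    ratio = np.sum(np.abs(g) ** 2, axis=1) / M    # |g|^2 / M per realization
    print(f"M={M:4d}: mean={ratio.mean():.3f}, std={ratio.std():.3f}")
    # The mean stays ~1 while the std shrinks roughly as 1/sqrt(M).
```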
What is the coherence time for a stationary mobile user? (@3:05) It cannot be infinity, right? I guess that there are other factors impacting T_c.
The answer depends on the wavelength, the propagation environment (what other objects are moving), and the definition of "stationary" (is it a human who holds the phone almost still, or is the phone truly static?). But maybe you can count on something like 0.1-1 s.
Thanks, with regard to this interval tau=T_c*B_c, isn't the number of symbols dependent on some parameters? The symbol duration depends on the SCS. The higher the SCS, the shorter the symbol duration. Here, I would obviously need to know what the SCS is to make a statement about the interval.
Is that because of a linear relation between the symbol duration and the bandwidth B?
Thank you.
There are two things to keep in mind:
1. The total bandwidth and subcarrier spacing, which are predefined by the system.
2. The actual channel coherence in time and frequency for a specific user channel.
To make the system work well in practice, we need to design it so that the coherence blocks presumed by the system (1) are smaller than or equal to the coherence blocks that you actually have for the specific channel (2).
tau=T_c*B_c is a theoretical measure of the coherence block size. One then needs a waveform that fits nicely into this block. Chapter 2 in the book Fundamentals of Massive MIMO has a short explanation of this with math.
The symbol duration for a single-carrier transmission is 1/B seconds, where B is the bandwidth in Hz. If you consider OFDM with S subcarriers, then the subcarrier spacing is B/S. This means that you take S symbols in the time domain and turn them into an "OFDM symbol". The effective symbol time becomes S/B seconds per OFDM symbol, but within one such OFDM symbol, you are actually sending S symbols on different subcarriers. So, in the long run, you always have B symbols per second.
One wants to adapt the subcarrier spacing so that one or a few subcarriers fit into each coherence block.
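As a quick numerical illustration of these relations (the coherence values and the LTE-like numbers below are assumed examples, not values from the lecture):

```python
# Coherence block size: tau = T_c * B_c (assumed example values below).
T_c = 1e-3        # assumed coherence time in seconds
B_c = 200e3       # assumed coherence bandwidth in Hz
tau = T_c * B_c
print(tau)        # 200 symbols fit into one coherence block

# OFDM relations from the reply above, with LTE-like example numbers:
B = 18e6          # occupied bandwidth in Hz (1200 subcarriers x 15 kHz)
S = 1200          # number of subcarriers
scs = B / S       # subcarrier spacing = 15 kHz
T_ofdm = S / B    # OFDM symbol duration ~ 66.7 us (cyclic prefix ignored)
print(scs, T_ofdm, S / T_ofdm)   # S symbols per OFDM symbol -> B symbols/second
```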
Thanks for sharing the video. Nice content. Will you be covering zero-forcing beamforming for uplink MU-MIMO? If not, can you share some reference that explains the impact of the users' channel correlation on uplink ZFBF and the SINR performance per user? I could not find these details in the Massive MIMO Networks book.
Hi, zero-forcing is covered in a new video called Lecture 9b (ruclips.net/video/GZTD8aveQ4o/видео.html). Zero-forcing isn’t practically useful since regularized zero-forcing performs better and has the same complexity, but it is convenient to analyze it for spatially uncorrelated channels since one can then derive insightful rate formulas. That is not possible for spatially correlated channels, which is why you cannot find that in textbooks.
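For intuition, here is a small sketch of uplink zero-forcing versus regularized zero-forcing receive combining; the dimensions, SNR, regularization, and the i.i.d. Rayleigh channel model are assumptions for illustration, not the notation from the book:

```python
import numpy as np

# Uplink receive combining sketch: M BS antennas, K single-antenna users,
# assumed i.i.d. Rayleigh channels (columns of H are the user channels).
rng = np.random.default_rng(0)
M, K, snr = 64, 8, 10.0
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Zero-forcing: nulls inter-user interference completely.
W_zf = H @ np.linalg.inv(H.conj().T @ H)
print(np.allclose(W_zf.conj().T @ H, np.eye(K)))   # True: no inter-user interference

# Regularized zero-forcing: trades some residual interference for less noise
# amplification (the K/snr regularization term here is an assumed choice).
W_rzf = H @ np.linalg.inv(H.conj().T @ H + (K / snr) * np.eye(K))
```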
The motivation that massive MIMO is only a possibility at the BS (because of the large number of antennas) sounds reasonable for lower frequencies. But now we see wireless devices at millimeter-wave frequencies with phased arrays of 30-60 elements that can easily fit into a phone. Does that mean we can do massive MIMO in UEs? Or is the limiting factor here that we need a separate RF chain per antenna element?
This is a good point. My colleague Stefano Buzzi calls the setup that you describe “doubly massive MIMO”, where the UE also has a large number of antennas. The UE antennas can be utilized for beamforming (stronger signal) and spatial multiplexing (resolving signals that arrive from different directions). It is only the beamforming option that can be achieved using a phased array (analog beamforming), and it mostly works in situations where the channel has a dominant angular direction, since phased arrays must use the same beam for the entire frequency band.
What if we would use digital beamforming with a separate RF chain per antenna element? Then we could achieve higher beamforming gains for channels without a dominant angular direction, and we can make use of spatial multiplexing. However, the number of strong signal directions between the BS and UE will determine how many signals can practically be multiplexed. We will for sure see digital beamforming implementations in UEs in the future. Here is a blog post about that: ma-mimo.ellintech.se/2020/11/14/digital-millimeter-beamforming-for-5g-terminals/
Thank you a lot, as always I have a lot of questions to ask 😃. I'm still confused about bandwidth and symbol duration. Let's assume that we have a 20 MHz channel in LTE which is divided into 1200 subcarriers (each 15 kHz wide). We know that due to multipath and intersymbol interference we can't make the symbol duration too short, but doesn't the subcarrier spacing itself limit us too? Can we make the symbol duration 50 microseconds, for example, or can't we make it less than 66 microseconds?
My second question is about the subcarrier spacing in 5G NR, which supports 240 kHz. If we use a 240 kHz subcarrier spacing, isn't it worse for mobility, since its symbol duration is much shorter than with 15 kHz? Then how does 5G promise to support faster mobility?
One more question, it's about massive MIMO this time, your favourite topic hahah. When we have a base station with massive MIMO, let's say 64x64, could it support beamforming and SU-MIMO and MU-MIMO simultaneously? Or do we have to configure it for one of these three applications of multiple antennas? (I mean, when there are fewer users in the cell, it gives more capacity to each of them through SU-MIMO (4x4), and when there is a user at the edge of the cell that needs data, it employs multiple antennas to make a narrower and stronger beam so that this cell-edge user gets a better SNR.)
Hi! This lecture describes the block fading model, which divides the bandwidth into pieces with independent fading realizations, so we can study them separately. I recommend you to read Chapter 2 of "Fundamentals of Massive MIMO" to learn the connection between the idealized block fading model and practical OFDM systems.
Question 2: The subcarrier spacing must be smaller than the coherence bandwidth, otherwise you will get intersymbol interference. However, the coherence bandwidth is wider than 240 kHz in indoor and short-range outdoor scenarios. The speed of mobility isn't related to the coherence bandwidth but to the coherence time. The purpose of having a wider subcarrier spacing is to shorten the symbol time duration, which reduces latency. So it is a solution for URLLC applications.
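For reference, the OFDM symbol duration is the inverse of the subcarrier spacing (cyclic prefix ignored), which is the trade-off behind the 240 kHz numerology:

```python
# OFDM symbol duration = 1 / subcarrier spacing (cyclic prefix ignored).
for scs_khz in [15, 30, 60, 120, 240]:      # 5G NR subcarrier spacings
    print(f"{scs_khz:3d} kHz -> {1e3 / scs_khz:.2f} us per OFDM symbol")
# 15 kHz -> 66.67 us, ..., 240 kHz -> 4.17 us
```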
Question 3: SU-MIMO can be included in MU-MIMO. Practical 64T64R systems support two layers per user, and multiplexing of eight users. Beamforming is utilized to transmit each layer. See this blog post: ma-mimo.ellintech.se/2020/10/02/reciprocity-based-massive-mimo-in-action/
@WirelessFuture Thank you, I think Q&A is the most effective way to learn, especially when the questions are answered by an expert. Thank you again, I really appreciate your useful answers.
Thanks a lot.