What is Entropy? and its relation to Compression
- Published: 16 Sep 2024
- Explains entropy in information theory and gives an example that shows its relationship to compression.
Related videos: (see: iaincollings.com)
• What are Channel Capacity and Code Rate?
• What is Water Filling for Communications Channels?
• What is a Gaussian Codebook?
• What is Fisher Information?
• How are Throughput, Bandwidth, and Data Rate Related?
Full categorised list of videos and PDF Summary Sheets: iaincollings.com
I can’t express how amazing and helpful the work you are doing is! Keep going man. May God lighten your way, satisfy you, and reward you immensely!
Thanks so much. I'm glad the videos have been helpful.
This dude is such a blessing lol, wish I'd found these years ago.
Whoa, I am reading my textbook for Communications Engineering, an EE course, and the book does a terrible job at explaining these concepts. You knocked it out of the park with that explanation, thanks!!!
I'm glad you liked it. Have you seen my web page? It's got a categorised listing of all the videos on the channel. I've got lots more that I'm sure will help with a range of other concepts you're reading about. One of my main motivations in making these videos is to help explain concepts that can be confusing in textbooks. iaincollings.com
OMG! How did people miss this fabulous Lecture?
Thank you Professor, you are really good at saving paper and keeping us hungry for in-depth knowledge.
I'm so glad you like the videos.
Incredibly helpful!
Glad you think so!
Excellent work!
I love your lectures. They're really helpful. My professor hardly bothers to explain; in our class he told us the method for performing Huffman coding without any context!
By the way, I noticed a slight inconsistency towards the end of the video, when you compared the improved coding efficiency from stringing together '⍺β': the 'β' in the previous case is half as likely as '⍺', so if '⍺' is always followed by 'β', both would have to be equally likely in the previous example.
You're right, but the two examples are not the same (and I didn't intend them to be the same) - which is why the latter example can be compressed more (which is the point I was making).
Thank you Professor. Your videos are incredibly helpful.
Glad you like them!
Thank you so much for this; it basically explains what my professor tried to do... but does it so much better.
Glad it helped!
Wow, this is actually teaching !!
Love you Sir, pls record all your knowledge 🙏 ❤
I'm so glad you like the videos.
Thank you. You teach better than my professor.
Happy to help.
Thanks from Norway!!
Glad you liked the video.
Complex concepts are explained by an amazing Professor
So nice of you to say. Thanks.
Thanx a lot
Most welcome
what i needed
You are gold.
Thanks. I'm glad you liked the video.
Thank you so much for the intuitive understanding!
Glad it was helpful!
Best explanation, and a good choice of example.
Glad you liked it!
Sir, I'm in love with your teaching! One doubt (not related to this video):
What role do poles and zeros play in making a system odd or even? Today I read that zeros at the origin make the signal odd. Sir, can you explain? It was given as:
s/((s+1)(s-1)) => odd, and (s+1)(s-1)/s => neither odd nor even (but the 2nd one violates X(s) = X(-s) => even;
X(s) = -X(-s) => odd).
Thank you
You're welcome
Good job 👍 Man, you are an amazing teacher!
Thanks. Glad you think so!
Amazing lecturer! Thanks a lot :)
You're very welcome.
Why is information an exponential function?
Also how does a system map symbols to bits ?
Good question. It is possible to define other "information measures". It doesn't have to be an exponential/logarithmic function, but that function is good because it fits the requirements. The "amount" of information in a message is related to the probability of that message (as discussed in the video).

There are natural boundary conditions: the measure must equal zero when a message always happens (probability = 1, ie. the message doesn't actually tell you anything you didn't already know, because it always happens, so you were already expecting it), and it must grow towards infinity as a message's probability approaches 0 (ie. those messages are extremely unexpected and surprising, so they are extremely informative when they actually happen). The function must also be continuous between those boundary conditions. The negative log function fits the bill.

As for your question about mapping symbols to bits, this video should help: "What is a Constellation Diagram?" ruclips.net/video/kfJeL4LQ43s/видео.html You might also be interested in this video: "What is Fisher Information?" ruclips.net/video/82molmnRCg0/видео.html
@iain_explains Thanks for the response... appreciate it!!!
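The boundary conditions described in this reply can be checked with a short numerical sketch (the function name here is illustrative, not from the video; `log2` gives information in bits):

```python
import math

def self_information(p: float) -> float:
    """Self-information (in bits) of a message with probability p: -log2(p)."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A certain message (p = 1) carries no information.
print(self_information(1.0))   # 0.0

# Rarer messages carry more information, growing without bound as p -> 0.
print(self_information(0.5))   # 1.0 bit
print(self_information(1e-9))  # roughly 29.9 bits
```

Note how the negative log also makes information additive: the information of two independent messages (probability p1*p2) is the sum of their individual informations.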
Great work. Keep going!!
Could this method of producing the codebook based on entropy values be applied in wireless communications? Because the codewords do not all have the same number of bits, overlaps in a sequence of symbols must be avoided, as in your examples.
How can such a thing be avoided in the presence of noise?
Yes, definitely. The codeword length is not the same thing as the frame length, or packet length. The compression encoder takes a sequence of letters/symbols/bits/words (whatever), and converts it into a sequence of 1's and 0's (made up of codewords). That sequence is then divided up into packets according to whatever communication protocol is being used. I plan to make a video soon on source and channel coding, so keep a look out for that.
Amazing, it's helpful!
Glad to hear that!
Great video!
Glad you enjoyed it
Why do Gamma and Delta have to have 3 bits? Couldn't you get away with alpha: 0, Beta:1, Gamma:10, Delta: 11? the average there would be 1.25 bits/symbol wouldn't it?
It needs to be uniquely decodable. With your suggested mapping, if you received a 1, you wouldn't have any way of knowing if it was from a Beta having been sent, or if you needed to wait for the next bit because either Gamma or Delta had been sent.
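The ambiguity described in this reply can be demonstrated directly: under the suggested mapping, two different symbol sequences produce identical bitstreams, while a prefix-free code (no codeword is a prefix of another) decodes uniquely. A small sketch, using the probabilities (1/2, 1/4, 1/8, 1/8) that are consistent with the 1.25 bits/symbol figure in the question:

```python
def encode(symbols, code):
    """Concatenate the codewords for a sequence of symbols."""
    return "".join(code[s] for s in symbols)

# Suggested (ambiguous) mapping: "1" is a prefix of "10" and "11".
ambiguous = {"alpha": "0", "beta": "1", "gamma": "10", "delta": "11"}
# "beta, alpha" and "gamma" encode to the same bits -- the receiver can't tell them apart.
print(encode(["beta", "alpha"], ambiguous))  # "10"
print(encode(["gamma"], ambiguous))          # "10"

# Prefix-free mapping with 3-bit codewords for gamma and delta: uniquely decodable.
prefix_free = {"alpha": "0", "beta": "10", "gamma": "110", "delta": "111"}

# Its average length under p = (1/2, 1/4, 1/8, 1/8) is 1.75 bits/symbol,
# which matches the entropy of this source.
p = {"alpha": 0.5, "beta": 0.25, "gamma": 0.125, "delta": 0.125}
avg = sum(p[s] * len(prefix_free[s]) for s in p)
print(avg)  # 1.75
```

The 1.25 bits/symbol of the ambiguous mapping is below the entropy, which is itself a sign that the code cannot be uniquely decodable.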
This is the place where you can learn complex concepts on just one sheet of paper.
Glad you like the format.