Timestamps:
0:00 Error-Correcting Codes (ECC)
1:23 Repetition Code
3:18 Hamming Distance and Minimum Hamming Distance
Sir, you are really a great teacher. 👍
I'm very interested in your digital electronics lectures.
Happy teachers' day sir. 🙏
Can you give your WhatsApp number?
Really great explanation brother. It helped me a lot to understand this hard concept. Thank you..
Happy teacher's day sir🎉🎊
Thanks man, the teacher in class didn't really explain it, and I was trying so hard to make myself understand but didn't succeed — and then I found your video.
Where do the d + 1 and 2d + 1 formulas come from?
Watching 6 hours before exams. Respect from Pakistan.
Minimum Hamming distance for error detection
To design a code that can detect d single-bit errors, the minimum Hamming distance for the set of codewords must be d + 1 (or more). That way, no combination of d single-bit errors can turn one valid codeword into another valid codeword.
Minimum Hamming distance for error correction
To design a code that can correct d single-bit errors, a minimum distance of 2d + 1 is required. That puts the valid codewords so far apart that even after errors in d of the bits, the received word is still closer to the original codeword than to any other valid codeword, so the receiver can determine what the correct starting codeword was.
In case you want to understand the formulas... this is from the University of Michigan.
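The two rules above can be checked with a small sketch (not from the video — the function names here are illustrative): compute the minimum Hamming distance of a codeword set, then read off the detection and correction capability from d_min ≥ d + 1 and d_min ≥ 2d + 1.

```python
def hamming_distance(a, b):
    """Number of bit positions where two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codewords):
    """Smallest Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(a, b)
               for i, a in enumerate(codewords)
               for b in codewords[i + 1:])

# 3-bit repetition code: only two valid codewords, distance 3 apart.
code = ["000", "111"]
d_min = minimum_distance(code)

detectable = d_min - 1          # from d_min >= d + 1   ->  d = d_min - 1
correctable = (d_min - 1) // 2  # from d_min >= 2d + 1  ->  d = (d_min - 1) // 2

print(d_min, detectable, correctable)  # 3 2 1
```

With d_min = 3, the code detects up to 2 single-bit errors and corrects up to 1, matching the reply below about the repetition code.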
Who reads a comment this long?
Very nice 👍👍
Nice video 👍👍👍
Please make a video on linear block code.
Very nice
Can you make a video on Hsiao code?
Thank you very much.
thank you
Thank you ❤
What kind of English is that? It's interesting.
indian
For d bits of error detection, the minimum Hamming Distance is d + 1. I don't really understand it. Take the example from the video:
0000
0001
0010
0011
...
The minimum Hamming distance is 1, since the number of differing bits between 0000 and 0001 is 1. However, the formula would give 5 (4 bits + 1)... did I miss something? Also, where does 2d + 1 as the minimum Hamming distance for d bits of error correction come from?
For example, if you want to detect 1 bit of error, then the coding scheme you choose for encoding should have a minimum Hamming distance of 2 between each pair of possible codes. On the other hand, for 1-bit error correction, the minimum Hamming distance between the codes after encoding should be 3 (2d + 1). For example, if you use the 3-bit repetition code for encoding, then 1 is encoded as 111 and 0 is encoded as 000. The Hamming distance between these two codewords is 3. That's why this coding scheme can detect up to 2 bits of error and correct up to 1 bit of error in the encoded code at the receiver. I hope this clears your doubt.
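The 3-bit repetition code from the reply above can be sketched in a few lines (function names are illustrative, not from the video): encoding repeats the data bit, and decoding takes a majority vote, which corrects any single flipped bit.

```python
def encode(bit):
    """Repeat the data bit three times: 0 -> '000', 1 -> '111'."""
    return str(bit) * 3

def decode(word):
    """Majority vote over the 3 received bits; corrects one flipped bit."""
    return 1 if word.count("1") >= 2 else 0

sent = encode(1)         # "111"
received = "101"         # channel flips the middle bit
print(decode(received))  # 1 -- the single error is corrected

# Two flipped bits exceed the correction capability (d = 1), but the
# receiver can still *detect* them: a word like "001" is not one of
# the two valid codewords "000" and "111".
```

This matches the distances: with a minimum Hamming distance of 3, the code corrects 1 error and detects up to 2.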
@@ALLABOUTELECTRONICS Very helpful! I quite understand it now!
Why does it sound robotic?
😿😿