Hamming & low density parity check codes

  • Published: Nov 19, 2018
  • Information Theory Society presents the key concepts needed to understand low-density parity-check codes (LDPC codes). It's a blend of repetition codes, parity-check bits and Hamming codes. This paper was highly influential and has over 9,000 citations.

Comments • 152

  • @user-ti7tu9bk3l
    @user-ti7tu9bk3l 5 months ago +6

    I will rate this video #1 in teaching concepts. There should be yearly YouTube awards for the best video in teaching, reporting... every genre!!! I am sure this video would find an award there.

  • @TypicalRararian757
    @TypicalRararian757 5 years ago +44

    Wow, what a great explanation! I have professors who try to explain this lecture after lecture using cluttered equations that hide the essence of the idea. Maths is important, but it is useless without a context or an understanding of the idea. Sometimes I wonder if they do this deliberately, to make us think that they are really clever for understanding such a long string of equations.

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +19

      Thanks for the feedback Bashcode. I think most people do it by accident because it's hard to be both clear and accurate. Most people are one or the other; rarely do you get a Feynman who can do both (I can't). This video is the result of a team of writers working to achieve the same result (clear and correct). At the beginning we didn't have any sort of 'narrative' for how to teach this in mind. Only after a few months of back-and-forths did we find a 'teachable sequence' which was learner-friendly and accurate, for a PhD thesis no less. A very fun, challenging and rewarding process.

    • @chris_1337
      @chris_1337 5 years ago +3

      @@ArtOfTheProblem so grateful for this channel and the channel's team! Also, that Robert Gallager had an awesome message.. bet you AI is going to do the same thing, resurrecting interesting solutions to problems that are going to be suddenly very important

    • @kingkaufee
      @kingkaufee 3 years ago

      ditto!

  • @_._enril_._
    @_._enril_._ 5 years ago +29

    Such a great, high quality video. Thank you

  • @hymao1
    @hymao1 5 years ago +63

    Probably a stupid question, but how does the receiving computer know which bit is tied to which parity bit if they are random?

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +73

      Good question, it's defined at the protocol level and so you can assume both sides know the pattern in advance.

    • @glaxmattbas
      @glaxmattbas 5 years ago +2

      does that mean the message header is sent without error correction codes?

    • @samuelfeder9764
      @samuelfeder9764 5 years ago +3

      @@ArtOfTheProblem It seems to me that an awful lot of bits would be required to communicate the random allocation between bits and parity bits, which would of course defeat the purpose, since those would need to be protected as well. How is the allocation communicated efficiently? Is it maybe the same random pattern repeated over and over?

    • @randomname3669
      @randomname3669 5 years ago +2

      That's what I'm wondering as well; surely there should be some sort of standardised mapping or a pattern to it.

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +13

      @@charaleet6494 Great questions. The simple answer is "the same random pattern is repeated over and over", as you say. The randomness is defined at the protocol level. They are also correct to note that it's not "so simple" as just using a random structure; today it's more subtle. This video only covers the key insight of using randomness (and small sets) in the creation of error correction codes.
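
      A minimal sketch of what "defined at the protocol level" can look like (Python; the function name, seed and sizes are invented for illustration): both sides derive the same pseudo-random bit-to-check assignment from a seed fixed by the protocol, so the mapping itself never has to be transmitted.

      import random

      def parity_check_sets(n_data_bits, n_checks, bits_per_check, seed=2018):
          """Fixed pseudo-random mapping of data bits to parity checks.

          Encoder and decoder call this with the same protocol-defined
          parameters and seed, so they agree without sending the mapping.
          """
          rng = random.Random(seed)       # deterministic: same seed -> same sets
          return [sorted(rng.sample(range(n_data_bits), bits_per_check))
                  for _ in range(n_checks)]

      # Sender and receiver independently compute the identical structure:
      sender_sets = parity_check_sets(n_data_bits=16, n_checks=8, bits_per_check=4)
      receiver_sets = parity_check_sets(n_data_bits=16, n_checks=8, bits_per_check=4)
      assert sender_sets == receiver_sets   # nothing about the mapping was sent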

  • @LnX53
    @LnX53 2 years ago +1

    that was a brilliant explanation, the reasoning of the naming and how you connected that part was perfect, thank you

  • @falpato
    @falpato 5 years ago +3

    You are truly gifted in explaining complicated things in a simple and interesting way, and I just want to thank you for sharing your knowledge in an entertaining manner. Your effort in making these educational videos on your channel is highly appreciated and I'm looking forward to more videos to watch. I just wish you could write a technical ebook about the contents of your channel since I really love your elegant style of teaching, getting to the bare essentials of a certain concept. Cheers and more power to your channel!

  • @NeilRieck
    @NeilRieck 5 years ago +1

    This video made me recall an ECC lecture at DEC (Digital Equipment Corp) during a maintenance course on the PDP-11/44. Prior to that, tape decks only had vertical parity (the ninth bit of every byte) and longitudinal parity, which was a byte appended to the end of each block of 512 or 1024 bytes. With this scheme, vertical and longitudinal errors would point directly to a single correctable error. Things improved when the longitudinal byte was replaced with a 16-bit ECC implemented as 2 bytes (many implementations were based upon CRC-11). Getting back to the PDP-11/44, every 32 bits of data were implemented with 39 bits of memory (every 8-bit byte had a parity bit; 3 additional bits were necessary to implement the Hamming code). With this scheme all single-bit errors were correctable, as well as many double-bit errors.
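
    A small sketch of the two-dimensional parity idea described above (Python; the helper names, example bytes and flipped bit are invented): the vertical parities flag which byte went bad, the longitudinal parity flags which bit position went bad, and the intersection locates the single flipped bit.

    def vertical_parities(block):
        """Even parity over each byte (the 'ninth bit' of every byte)."""
        return [bin(b).count("1") % 2 for b in block]

    def longitudinal_parity(block):
        """Byte whose bit i is the XOR of bit i across all bytes in the block."""
        lp = 0
        for b in block:
            lp ^= b
        return lp

    block = [0x4C, 0x44, 0x50, 0x43]            # data block as sent
    vp, lp = vertical_parities(block), longitudinal_parity(block)

    received = block[:]
    received[2] ^= 0x08                          # one bit flipped in transit (byte 2, bit 3)

    bad_byte = next(i for i, b in enumerate(received)
                    if bin(b).count("1") % 2 != vp[i])
    bad_bit = (longitudinal_parity(received) ^ lp).bit_length() - 1
    received[bad_byte] ^= 1 << bad_bit           # flip it back: error corrected
    assert received == block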

  • @alderiatefanclub1576
    @alderiatefanclub1576 3 years ago +1

    Thank you so much for this video, I had trouble understanding LDPC codes but now it's crystal clear.

  • @raresmircea
    @raresmircea 5 years ago +6

    Quality-wise I think AotP easily makes it into the top 10 educational channels. Every single upload is elegant, entertaining and informative... I miss those explanations done with strings and peas and bowls though :)

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +2

      thanks for your kind feedback Rares. These Information theory videos produced with IEEE have a slightly different style (animation heavy) compared to my regular episodes (which are mostly live action). I will continue both styles in the years ahead.

  • @goosew3266
    @goosew3266 5 years ago +4

    As someone who has done a mathematics and computer science degree, these videos are PERFECT to watch during lunch break at work. Thanks!

  • @CrucialMuzic
    @CrucialMuzic 5 years ago +2

    You know it's going to be a great day when *Art of the Problem* uploads a new video!
    Happy Thanksgiving, everyone!

  • @jayadevashok2070
    @jayadevashok2070 4 months ago +1

    You explained it so incredibly well!

  • @tim40gabby25
    @tim40gabby25 4 months ago

    Beautiful explanation.

  • @rastislavledaj3110
    @rastislavledaj3110 1 year ago

    Big thumb up! Very nice and logical explanation of LDPC. 👍

  • @jincyquones
    @jincyquones 5 years ago +2

    I learned a lot from this, thanks!

  • @Niarbeht
    @Niarbeht 3 years ago +2

    "Halfway between a modern cell phone, and a coffee machine"
    I died.

    • @tedchirvasiu
      @tedchirvasiu 3 years ago

      I think that's still an overestimation considering how powerful modern cell phones are.

  • @jean-marclugrin1902
    @jean-marclugrin1902 2 years ago

    Finally I understood the essence of LDPC codes. Thanks

  • @filimonmiki
    @filimonmiki 3 years ago +2

    Thanks for making this video! You made complicated and theoretical concepts easy to grasp.

  • @ahmedsanaullah4924
    @ahmedsanaullah4924 3 years ago

    Simple yet informative

  • @adrian7914
    @adrian7914 3 years ago

    Great explanatory video, thank you

  • @NoNTr1v1aL
    @NoNTr1v1aL 5 years ago +18

    Awesome video.

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +2

      Thanks for your feedback

    • @NoNTr1v1aL
      @NoNTr1v1aL 5 years ago

      @@ArtOfTheProblem 9:25 can you tell me where Peter Elias proved this?

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +1

      @@NoNTr1v1aL I believe this is the reference (1956): web.mit.edu/6.441/www/reading/hd2.pdf

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +1

      Also, a list of important papers by Elias is here (www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/elias-peter.pdf)

    • @NoNTr1v1aL
      @NoNTr1v1aL 5 years ago

      @@ArtOfTheProblem thank u very much.

  • @EmilFihlman
    @EmilFihlman 5 years ago +8

    This is a superb video!

  • @jaiminrathwa6431
    @jaiminrathwa6431 3 years ago

    best explanation ever on LDPC

  • @chalaurgessa4489
    @chalaurgessa4489 3 years ago

    Best explanation

  • @jeremymartin6563
    @jeremymartin6563 3 years ago

    What a great video! So intuitive! Very well done!!

  • @bibhandebiswas1599
    @bibhandebiswas1599 4 years ago +1

    Great video!

  • @sarakawas4060
    @sarakawas4060 2 years ago

    Excellent content. Thank you for the clean work

    • @ArtOfTheProblem
      @ArtOfTheProblem  2 years ago +1

      yes this one came out crisp, but it was a very hard one to make I recall.

  • @esuelle
    @esuelle 5 years ago

    The ending note was very inspiring.

  • @RawbyTheSheeth
    @RawbyTheSheeth 4 years ago

    Great visuals.
    I dig it.

  • @nbme-answers
    @nbme-answers 5 years ago

    Brit, as a non-computer scientist with aspirations to become one, you continually blow my mind with these videos. Remarkable tutes. So clear. Great examples & animations. Zero pretension. I hope Khan Academy is still featuring all your stuff. Will contribute to your Patreon soon.

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      Thanks for your continued support. Indeed you can find Episode 1 & 2 on KA

  • @MrQwe38
    @MrQwe38 5 years ago +1

    IT IS AN AWESOME VIDEO!! Everything is so clearly explained. Thank you so much!

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      thanks so much for the feedback, glad this was helpful

  • @andreastedile
    @andreastedile 5 years ago +1

    I subscribed back when you made the RSA video, superb channel 💪🏻

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +1

      Nice to know you are still around Andrea. Hope to see you again

  • @gupgupgupgup9790
    @gupgupgupgup9790 5 years ago +1

    Very well explained!

  • @pascalisnala8174
    @pascalisnala8174 5 years ago

    well explained! thanks

  • @codericj
    @codericj 3 years ago +4

    I think you should have focused more on switched bits in the beginning, instead of only unknown ones, as you can't correct a switched bit with one parity bit, because you don't know where it sits.
    Despite that, great video.
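
    A tiny illustration of that point (Python; the values are contrived): a single parity bit detects an odd number of flips but says nothing about position, whereas an erasure comes with its position and can be filled in.

    data = [1, 0, 1, 1]
    parity = sum(data) % 2                 # single even-parity bit from the sender

    received = [1, 0, 0, 1]                # bit 2 was flipped in transit
    print(sum(received) % 2 != parity)     # True: the flip is detected ...
    # ... but any single-bit flip that restores the parity looks equally
    # plausible, so one parity bit cannot locate the error.

    erased = [1, 0, None, 1]               # an erasure: position known, value unknown
    missing = (parity - sum(b for b in erased if b is not None)) % 2
    print(missing)                         # 1: a single erasure can be filled in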

  • @verystablegenius4720
    @verystablegenius4720 2 years ago

    What a fantastic explanation ...

  • @JackErickson17
    @JackErickson17 4 years ago +1

    This is really well-done!

  • @raghav5520
    @raghav5520 3 years ago

    Such a great video, guys. Good job!!

  • @haaskaiiitt
    @haaskaiiitt 10 months ago

    What an awesome video, thank you!

    • @ArtOfTheProblem
      @ArtOfTheProblem  6 months ago

      Took 2 years to finish this one, finally live. Would love your feedback: ruclips.net/video/OFS90-FX6pg/видео.html

  • @GWebcob
    @GWebcob 2 years ago

    Nice message :)

  • @RakeshBalakrishnan
    @RakeshBalakrishnan 3 years ago

    such a simple explanation !

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 years ago +1

      thank you, we worked hard on this to make it simple, only one out there :)

  • @LeoYengo
    @LeoYengo 3 years ago

    amazing explanation!

  • @dr.capt.kallamravindrababu9797
    @dr.capt.kallamravindrababu9797 3 years ago

    Excellent sir, thank you so much for video

  • @gianni8209
    @gianni8209 5 years ago

    This is a really good explanation 👍 definitely going into my bag of training for interns.

  • @alan2here
    @alan2here 5 years ago

    To get a feel for this, using this process, what proportion of the bits of a 1 megabit file can be randomly invalidated before I'm no longer at least 95% sure that the result is fault free?

  • @user-uk9yy1xu6r
    @user-uk9yy1xu6r 5 months ago +1

    A question here:
    Code and parity bits are sent on the same channel. What happens when the parity bits themselves get corrupted? Or is this somehow prevented?

  • @yonatanwoldeleul4411
    @yonatanwoldeleul4411 4 years ago +1

    Well done! subscribing right away :)

  • @StephaneBura
    @StephaneBura 5 years ago +4

    Stupendous!

  • @alan2here
    @alan2here 5 years ago +2

    I've been thinking along the lines of file-wide single checksums. But overlapping, randomly connected single-bit checksums work better.

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      Interesting, what are you working on?

    • @alan2here
      @alan2here 5 years ago +1

      I just mean my understanding before watching this video vs. after watching it.

  • @LongtanTube
    @LongtanTube 4 years ago

    Thank you very much for the video! Well explained! I would like to know whether LDPC codes are only used for the binary erasure channel. That is the impression I got from your video.

  • @mehdis.7404
    @mehdis.7404 2 years ago

    could randomness relate to sparsity of errors in the data?

  • @sonoshee422
    @sonoshee422 5 years ago

    Great video! What's the song at 11:40? sounds like Geinoh Yamashirogumi

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      All sounds are original compositions for AOP by Cameron Murray.

  • @hassangillani11
    @hassangillani11 4 years ago

    Great video, keep it up. Waiting for more :)

    • @ArtOfTheProblem
      @ArtOfTheProblem  4 years ago +1

      new video coming next week! and more after that

    • @hassangillani11
      @hassangillani11 4 years ago

      @@ArtOfTheProblem great. I clicked the 🔔 notifications button for the new videos.
      😊

  • @nate22621
    @nate22621 5 years ago

    Thank you

  • @garychap8384
    @garychap8384 3 years ago +2

    Interesting stuff.
    Unfortunately, it just raises more questions in my mind. I mean, if it relies on identifying erasures, then it's specifically suited to analogue channels as you'd find in the real world. But that makes me wonder...
    The level of 'erasure' must also exist on a continuum... so surely there have to be more layers to this scheme in practice. For example... let's say we have a set with three 'erasures' that are only slightly below the detection threshold _(so we have some small, non-zero level of confidence in the signal)_ compared to another set which has two 'hard' erasures _(where we have no confidence in the signal)_
    The theory says to try the set with the fewest erasures first...
    But it may make sense, in such scenarios, to try solving the more confident set first despite it actually having more erasures... as well as simultaneously trying to solve the low-erasure set first. In each case, the strategies will have knock-on effects... and the number of knock-on effects may yield some sort of final _"correction distance"_
    ... the final correction distance of each strategy could then perhaps allow us to favour one solution over another. Could this more probabilistic approach lead to a small but measurable improvement in detection/correction?
    I'm guessing it's complicated. It may depend on the type of noise and the modulation used.
    I'm not sure how to explain my thinking, except to say that, in the analogue world, not all erasures are created equal. So, treating them as equal may mean missing vital clues.
    I might code something up to play with this, simulating noisy messages over various channels... perhaps a QAM channel, an FM channel and a logic-level channel... to see if there is a measurable benefit to 'weighting' erasures under various types of noise - or whether doing so just hinders correction.
    Well, I guess there goes MY week... : /

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 years ago

      Please post what you find. My guess is that it helps.

  • @Jouzou87
    @Jouzou87 5 years ago

    How do you deal with conflicting parity bits?

  • @lakshmimanasamanasa5052
    @lakshmimanasamanasa5052 4 years ago

    How many bits can an LDPC code detect and correct, in general?

  • @shyrwinsteelsia
    @shyrwinsteelsia 5 years ago +2

    yey! new video!! =)

  • @jondonnelly4831
    @jondonnelly4831 4 years ago +1

    I'm doing an Open University course; in it, LDPC and "low density" were described as the encoding matrix having a tiny number of 1s compared to 0s, like a ratio of 100:1. Is this accurate? It doesn't make sense to me why that would be the case! I much prefer the explanation of low density = each bit connected to fewer parity bits. This way larger codes can be decoded faster. I can share my course text if that is useful.
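
    The two descriptions point at the same sparse matrix: a 1 at row j, column i means bit i takes part in check j, so "a tiny number of 1s compared to 0s" and "each bit connected to only a few parity checks" are the same statement read column by column. A rough sketch (Python; the function name and sizes are chosen only for illustration):

    import random

    def regular_ldpc_H(n_bits, n_checks, ones_per_column, seed=0):
        """Sparse parity-check matrix: each bit (column) joins only a few checks."""
        rng = random.Random(seed)
        H = [[0] * n_bits for _ in range(n_checks)]
        for col in range(n_bits):
            for row in rng.sample(range(n_checks), ones_per_column):
                H[row][col] = 1
        return H

    H = regular_ldpc_H(n_bits=1000, n_checks=500, ones_per_column=3)
    ones = sum(map(sum, H))
    print(ones, 1000 * 500, ones / (1000 * 500))   # 3000 500000 0.006

    For these sizes the 0s outnumber the 1s by roughly 165 to 1, which is the kind of ratio the course text seems to be pointing at.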

  • @animowany111
    @animowany111 5 years ago +9

    Aw, I missed the premiere. I got the notification 18 minutes late.

    • @Wruczek
      @Wruczek 5 years ago

      Same, I just got notified

    • @MrSushant3
      @MrSushant3 5 years ago

      #me2bruh

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +5

      Sorry I just figured out how to use this feature. It was really fun! Sorry about the late notice, I will give 24 hours next time. Plus we have another video in less than 2 weeks!

    • @Architector_4
      @Architector_4 5 years ago +2

      +Art of the Problem
      Why even do a premiere though? It's like dangling a carrot on a stick for the subscribers, and if you happen to catch the time when it's premiering you either have to settle for not being able to speed the video up, or pause it while everyone else in the chat spoils what happens later in the video, by its very nature. Or close it and wait for the premiere to finish to finally be able to watch it normally.
      I just don't get any reason for it!

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      @@Architector_4 Good points. I thought it would be good for A. early notification of when a video is coming (as long as you keep it < 24 hours) especially when there are sometimes months between videos. B. chance for live chat (it was fun today) -- I'm still exploring the idea and will only do it if subscribers enjoy the options.

  • @Cih2001
    @Cih2001 5 years ago

    How come at 11:52 there are 4 black boxes but only 3 parity bits?

  • @YogeshPersonalChannel
    @YogeshPersonalChannel 5 years ago +1

    If the parity bits are also in multiple sets, then wouldn't there be cases where the same parity bit needs to be 1 as per one set and 0 as per another? What happens in those cases? And if such a case cannot arise, why is that?
    Please clarify.

    • @MasterTeeee
      @MasterTeeee 5 years ago +1

      The random allocation of the parity-bit mappings won't allow this to happen, basically. A parity bit will only ever be correct for the data and parity bit sets it's linked to... if that makes sense?
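
      A minimal sketch of why no conflict shows up in the erasure setting (Python; the check sets and codeword are invented): the sender chooses the parity bits so that every check comes out even at the same time, and during decoding any check with exactly one erased position pins that position to the single value that restores even parity, which is the value that was actually sent.

      # Each check lists the positions it covers; a valid codeword has even
      # parity over every check (the sender computes the parity bits so that
      # all checks pass simultaneously).
      checks = [[0, 1, 4], [1, 2, 5], [0, 2, 3, 5]]
      codeword = [1, 0, 1, 1, 1, 1]
      assert all(sum(codeword[i] for i in c) % 2 == 0 for c in checks)

      received = [1, None, 1, None, 1, 1]     # two erasures, positions known

      # Peeling: repeatedly solve any check that has exactly one erased position.
      changed = True
      while changed:
          changed = False
          for c in checks:
              unknown = [i for i in c if received[i] is None]
              if len(unknown) == 1:
                  i = unknown[0]
                  received[i] = sum(received[j] for j in c if j != i) % 2
                  changed = True

      assert received == codeword             # every filled-in bit is consistent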

  • @TheZ10Z
    @TheZ10Z 3 years ago

    How could I correct a 64-number (1 or 0) code that has been generated randomly, if I don't have a parity check?

  • @garychap8384
    @garychap8384 3 years ago

    So... about erasures! I'm probably wrong, but it seems to me that an erasure _(as considered here)_ is different from an error, in that you know when an erasure exists.
    If the sending channel is analogue, and the signal is digital ... then perhaps you're ideally looking for a string of +1 and -1 symbols. With channel noise you might consider some threshold such that anything over .75 will be a +1 and anything under -0.75 to be a -1 ... and anything closer to zero to be a faded or 'erased' symbol...
    But what about noise that introduces or flips symbols in a bold way? Now we're not just filling in known gaps in our knowledge about the message, but we're trying to identify, locate and correct otherwise confidently obtained _(but nevertheless, misleading)_ symbols.
    Is this covered in the "erasure" strategy... or does an erasure strategy rely on the existence of some underlying transport strategy for identifying weak symbols - and then restrict itself to resolving those 'low-confidence' symbols?
    I note that in Hamming codes you can clearly identify a single, perfectly flipped symbol. I'm just wondering if the same is true when discussing erasure - as the video seems to concern itself with the correction of red question marks rather than locating false positives : )
    I'm probably just missing something obvious. Anyone?

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 years ago

      Great catch. Indeed they are different (erasures carry location information), and that simplified this explanation quite a bit...

  • @wiiu7640
    @wiiu7640 2 years ago

    I know this is the current standard and all, but it seems quite counterintuitive to use all these checks with this many parity bits. What if you could compress the data into a few bits with a lot more parity checks, instead of using a lot of uncompressed data and parity checks? This would increase reliability and size at the same time. From what I understand, this simply has a ton of low-density bits with a ton of parity checks, which means more bits of data are needed to check all of them. I suppose you would need a fast compression algorithm that could do this, but would that not be the solution for transmitting more data at a reliable rate?

  • @daniell.6463
    @daniell.6463 3 years ago

    Dr. Gallager seems like a cool dude.

  • @toranoshiryou
    @toranoshiryou 5 months ago

    Maybe I missed it, but you never explain how the receiver determines which bits have been erased. Hamming codes only provide error correction for a single bit per code word because the receiver cannot know which bits have been erased without knowing the message.

  • @congminhvu3928
    @congminhvu3928 3 years ago

    Can you tell me what tools you used to make this video?

  • @qdwang-main
    @qdwang-main 5 years ago +2

    But we don't actually know 1?1??1. What we received is 111101, and your first recovery step is impossible. In your first step, 1?1 can easily be solved to 101, but the actual number we get is 111. We don't know if the answer is 011 or 101 or 110.

  • @thomasbuckler3977
    @thomasbuckler3977 5 years ago +1

    Who does this music? I love it!

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +1

      Cameron Murray makes all music for AOP videos (cameronmichaelmurray.bandcamp.com/)

  • @gblargg
    @gblargg 1 year ago

    1:18 Repetition code. That's how the old TI 99/4A saved things on cassette. Every packet was repeated twice. Thus saving took almost twice as long as necessary. Minutes. Ugh.

  • @bigsluggy
    @bigsluggy 2 years ago +1

    There I was, bouncing around trying to understand P = NP, and finally found a video that wasn't confusing AF... yours.
    Now here I am learning all sorts of information theory because I apparently find it very interesting! Lol.

  • @shadigamal9726
    @shadigamal9726 1 year ago

    A worldie and a half!

  • @tomschneider9460
    @tomschneider9460 5 years ago +2

    In this video he talks about an "erase" state (the '?'). How does the receiver know there was an "erase" state as opposed to a flipped bit? For signals that are 0 or 1 volt, isn't the signal always going through circuits that force the value to be either 0 or 1? Is there a detector at the receiver that calls "erase" if the value is (say) between 0.4 and 0.6 volts? If not, then the method based on "erasure" would fail. As far as I know, there is no "erase" state, there are only bit flips. If there really are no 'erase' detection circuits, then the entire video is wrong. It should be redone using bit flips.

    • @andormenczer9864
      @andormenczer9864 5 years ago

      This video doesn't make much sense to be honest. The channel output (the message received by the decoder) is a sequence of values between 0.0 and 1.0, where the i-th value is the probability that the i-th bit in the original message is 1.
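
      For what it's worth, detectors of roughly that kind do exist in practice: demodulators can output a confidence for each symbol (soft decisions) and mark low-confidence symbols as erasures rather than guessing. A toy sketch (Python; the function name, the 0.75 threshold and the sample values are arbitrary):

      def slice_with_erasures(samples, threshold=0.75):
          """Map noisy +/-1 samples to 1, 0, or '?' (erasure) by confidence."""
          out = []
          for s in samples:
              if s >= threshold:
                  out.append(1)
              elif s <= -threshold:
                  out.append(0)
              else:
                  out.append("?")   # too close to zero: mark erased, don't guess
          return out

      # transmitted +1 -1 +1 +1 -1, with noise and one badly faded symbol:
      print(slice_with_erasures([0.94, -0.88, 0.12, 0.81, -1.05]))
      # -> [1, 0, '?', 1, 0]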

  • @joey66436
    @joey66436 2 years ago +1

    if only it could detect corruption inside the bank as well.. would also be useful to check government, but often more than one bit will be corrupt..

  • @Crystalheard
    @Crystalheard 2 years ago

    A basic concept is somewhat wrong in this vid: a corrupted bit is not shown as a question mark at the receiver, it is just inverted, i.e. a binary 1 is turned into a 0. For a maximum-likelihood receiver, a decision maker is designed to determine the received values as 0s or 1s before the error detection/correction algorithm is applied. So, 1 parity bit cannot correct a single-bit error. As an example: if the sent message is 11001(1), where the last bit is the parity bit, and the received message is 10001(1), the receiver only knows that the received message is wrong but cannot tell which bit is wrong.

  • @lynmarness2471
    @lynmarness2471 5 years ago

    - God has Joined the Server

  • @proweiqi
    @proweiqi 5 years ago

    Repetition is a perfect code. The problem is code rate only...

  • @cbrpnk
    @cbrpnk 5 years ago +1

    Can you make a video a week? k thx by.

  • @planktonfun1
    @planktonfun1 5 years ago

    What if the parity checks have errors themselves? nvm

  • @hafty9975
    @hafty9975 5 years ago +1

    :)))

  • @OxidoPEZON
    @OxidoPEZON 5 years ago

    soooo... we end up with 50% code rate again

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      Using a simple (small) example yes. The nice part is LDPC has a variable code rate which can depend on the erasure rate on a given channel. So whether the code rate needs to be 90% or 10%, it can match what is needed. Other models have a 'fixed' code rate which isn't ideal. Remember it's the speed which is important too, not just the code rate. This method is fast & flexible.

    • @OxidoPEZON
      @OxidoPEZON 5 years ago

      @@ArtOfTheProblem That's pretty neat, thank you for the clarification

    • @PhysicsHelps
      @PhysicsHelps 5 years ago

      @@OxidoPEZON It's also not vulnerable to the case where the same bit is erased in the repetition.
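
      A back-of-the-envelope sketch of that flexibility (Python; the helper name, numbers and the 5% margin are invented for illustration): over an erasure channel with erasure probability p the best achievable code rate is 1 - p, and a code with n bits and m independent parity checks has rate roughly (n - m)/n, so the number of checks can be dialed to match the channel.

      def checks_needed(n_bits, erasure_prob, margin=0.05):
          """Roughly how many parity checks to aim for rate (1 - p - margin)."""
          target_rate = max(0.0, 1.0 - erasure_prob - margin)
          return round(n_bits * (1.0 - target_rate))

      for p in (0.1, 0.3, 0.5):
          m = checks_needed(1000, p)
          print(f"p={p}: ~{m} checks, code rate ~{(1000 - m) / 1000:.2f}")
      # p=0.1: ~150 checks, code rate ~0.85
      # p=0.3: ~350 checks, code rate ~0.65
      # p=0.5: ~550 checks, code rate ~0.45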

  • @anonymousvevo8697
    @anonymousvevo8697 1 year ago

    explanations are good, the music is horrible sorry

  • @mizumizumizu0
    @mizumizumizu0 11 months ago

    Highly informative, yet very easy to understand. Thank you for this video! The visual elements really help linking the examples to the theoretical notions behind them.

  • @amirnazari6427
    @amirnazari6427 5 years ago

    Great video!