Arithmetic With... Continued Fractions??

  • Published: Sep 29, 2024
  • Arithmetic! On continued fractions! It's possible, but not well known or widely used in practice. This video explores the basics of this underappreciated area of math.
    This is my submission for SoME2 ( • Summer of Math Exposit... )
    SOURCES & FURTHER READING:
    Continued Fraction Arithmetic (1977)
    Speculatively Redundant Continued Logarithm Representation (2010)
    Finite Precision Number Systems and Arithmetic (2010)
    Continued Fractions for High-Speed and High-Accuracy Computer Arithmetic (1983)
    Hardware Implementation of Continued Logarithm Arithmetic (2006)
    Exact Arithmetic on the Stern-Brocot Tree (2003)
    Continued Logarithms And Associated Continued Fractions (2016)
    Generalized Continued Logarithms and Related Continued Fractions (2017)
    High-Precision Arithmetic and Mathematical Physics (2015)

Comments • 49

  • @chennebicken372
    @chennebicken372 2 years ago +40

    I got really lost even in the first few minutes. I know I'm not the measure of everything, but I would suggest explaining more what every step means and what you want to do, so one can follow.
    So: 1. More talking about the words and mathematical entities you throw in.
    2. Maybe a structure with a few headlines, something like „And now let's explore how we can do arithmetic on them".
    To be concrete: I understand matrix multiplication. But just having an x and a y out of nowhere, transforming them in an unexplained way, and then claiming this corresponds to a matrix multiplication? A natural thing to ask is „What are we doing here, what do we want to achieve?" If the answer is „Determining the actual fraction from the continued-fraction representation", then I'm quite happy that I got what you were saying. But I can't just assume what's happening next. So (not so pro) tip: please guide us through the topic and through the video, and say out loud what you are doing and especially why you are doing it. That can be 1 or 2 short sentences for every „scene".
    I hope you had fun creating this content, and I hope I could help by telling you the things that helped me follow along.

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago +4

      Thanks for the feedback! I was a little uncertain how to motivate some of the ideas (the matrix multiplication bit in particular, because the main use of that is to demonstrate that the ingestion step is both fast and reversible). Prefacing each exercise with what the point of it is seems like a good way to do that; I'll definitely do that in future.
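For readers who got stuck at the same spot: ingesting a continued-fraction term t is exactly multiplication by the 2x2 matrix [[t, 1], [1, 0]] (the matrix form of x = t + 1/x'), and each such matrix has determinant -1, which is why the step is reversible. A minimal sketch of that idea (not the video's code; `mat_mul` and `ingest_all` are names invented here):

```python
def mat_mul(m, n):
    """Product of two 2x2 matrices given as nested tuples."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def ingest_all(terms):
    """Fold each continued-fraction term t into a 2x2 state by
    multiplying with [[t, 1], [1, 0]], the matrix form of x = t + 1/x'."""
    m = ((1, 0), (0, 1))  # identity: nothing ingested yet
    for t in terms:
        m = mat_mul(m, ((t, 1), (1, 0)))
    return m

# Ingesting [3; 7, 15, 1]: the first column gives the convergent 355/113,
# the famous pi approximation.
m = ingest_all([3, 7, 15, 1])
num, den = m[0][0], m[1][0]
```

Since each term matrix has determinant -1, the running product is always invertible over the integers, so ingesting a term never loses information.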

    • @santerisatama5409
      @santerisatama5409 1 year ago

      @@allystreetcarsthings8207 Giving abstract ideas flesh by concrete examples is very important. Unary representations of Stern-Brocot type structures are as concrete as concrete gets (which is why it's very foundational and so important for proof theory), and we should be also able to eli5 this stuff.

  • @identityelement7729
    @identityelement7729 2 years ago +30

    It's really refreshing, not really understanding a 20 min long math video on yt :)

  • @tonyallen4265
    @tonyallen4265 6 months ago +3

    I understand just a fraction of a fraction of a fraction of a fraction of a fraction of this.

  • @mostly_mental
    @mostly_mental 2 years ago +16

    This is really cool. I'd never seen continued logarithms before, but they seem like a powerful tool.

  • @vector8310
    @vector8310 8 months ago +1

    I'm all in favor of videos on math, especially in this age of obscurantism and publicly acceptable math phobia, but this video does very little to shed light on applications of continued fractions.
    All you do is rush through a list of results. Zero motivation, no worked examples.
    This kind of video is counterproductive. I suggest you run your videos past friends or associates for constructive criticism before posting.

  • @plesleron
    @plesleron 2 years ago +21

    Very interesting subject! I ended up a little lost on the ingest/egest formulas and their relationship with matrix multiplication, and it's currently 3AM, so I'll try giving this another watch when I'm a bit more awake. Otherwise, I think you spoke very clearly in spite of the mic quality, which I greatly appreciate.
    I'm looking forward to more videos!

  • @maxreenoch1661
    @maxreenoch1661 2 years ago +5

    Is this Bill Gosper the same guy who discovered the Gosper glider gun in Game of Life?

  • @tricanico
    @tricanico 2 years ago +9

    This is an awesome topic! Thanks for sharing.
    However I must say, the video feels rushed.
    I know about continued fractions and still you managed to lose me very quickly. Can't imagine how it's like for those who didn't know about them.
    You use terms you don't define like it's nothing.
    I must say again, great topic and thanks for sharing. And it's never too late for a second edition! haha. Cheers!

  • @lame_lexem
    @lame_lexem 2 years ago +4

    yeah.. i should revisit this later (you lost me when the cube appeared), definitely a good video!

  • @xj8713
    @xj8713 2 years ago +2

    I love dense videos, I paused and proved 11 times through the video.

  • @FareSkwareGamesFSG
    @FareSkwareGamesFSG 2 years ago +6

    Is there a mistake in the second continued fraction's representation at 2:04?

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago +2

      Oh yeah I don't know how I didn't notice that. The actual continued fraction for 4/pi is 1 + (1/(3 + 4/(5 + 9/(7 + 16/...)))), so the representation on the right is correct.

    • @hopeg97
      @hopeg97 2 years ago +4

      @@allystreetcarsthings8207 The continued fraction on the left also converges to 4/π, it's just a different one from the one on the right

    • @hopeg97
      @hopeg97 2 years ago +3

      @@allystreetcarsthings8207 Also, the representation on the right at 2:04 (and at 5:40) has the squares of the odds for the numerators, where it should have all the squares.
      Using a Matlab-esque notation here,
      4/π = [1, 2, 2, 2, ..., 2; _, 1², 3², 5², ..., (2n+1)²]
      and
      4/π = [1, 3, 5, 7, ..., (2n+1); _, 1², 2², 3², ..., n²]
      are both true (according to the Wikipedia page on Generalized continued fractions)
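Both expansions quoted above are easy to check numerically. The sketch below (my illustration, not code from the video; `eval_gcf` is a name invented here) evaluates a generalized continued fraction b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)) from the bottom up and confirms that both forms approach 4/π, the odd-denominator one far faster:

```python
import math

def eval_gcf(b, a, depth):
    """Evaluate b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)) truncated at `depth`."""
    x = b(depth)
    for n in range(depth, 0, -1):
        x = b(n - 1) + a(n) / x
    return x

# Brouncker's form: 4/pi = 1 + 1^2/(2 + 3^2/(2 + 5^2/(2 + ...)))
# (converges slowly -- its convergents match the Leibniz series)
brouncker = eval_gcf(lambda n: 1 if n == 0 else 2,
                     lambda n: (2 * n - 1) ** 2, 5000)

# Odd-denominator form: 4/pi = 1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...)))
# (converges geometrically, so a few dozen terms suffice)
odd_denoms = eval_gcf(lambda n: 2 * n + 1, lambda n: n * n, 50)
```

Running this, `brouncker` lands within about 1e-4 of 4/π even after 5000 terms, while `odd_denoms` is accurate to machine precision after 50, which matches the comment's point that these are two genuinely different expansions of the same number.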

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago +3

      @@hopeg97 Oh that's unfortunate, I think I mixed those two up in my head when writing them down. Thanks for noticing, I should have caught that :P

  • @tejing2001
    @tejing2001 2 years ago +3

    I was aware that you could compute with continued fraction representations through an ingest/egest approach like this, but I had never encountered the continued logarithms. I'm going to have to play with those a bit.
    One suggestion: Keep more data on the screen at a time, especially when you're demonstrating something. The sequence at 12:04, for example, would be much easier to understand if the entire sequence was displayed at once, so you could refer back to previous values and compare to understand what's happening.

    • @tejing2001
      @tejing2001 2 years ago +1

      Also, I just noticed there's a mistake in the codomain for "zero underscore": the upper end of the first interval should be open, since -2 cannot actually be produced by this transformation given the domain.

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago

      @@tejing2001 Ope good catch.
      Also yeah that probably would have been a better way to animate it - didn't occur to me to do that for some reason.

  • @canaDavid1
    @canaDavid1 2 years ago +3

    This is very cool!
    One thing I might have missed: how would one convert these [continued logarithms] to binary and/or decimal for output?

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago +2

      Oh yeah, I never actually went over that; I probably should have. Here's my explanation:
      You do the same y = (ax + b)/(cx + d), but instead of outputting continued logarithm/fraction terms to y, whenever a/c and b/d have integer parts with the same magnitude and leading digit/bit, multiply b & d by the base and that digit/bit, and output that digit/bit to y. This can also be done on the fly! Although I haven't tested it, so I'm not 100% certain it's correct or practical, I believe this is how you'd do it.
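To make the idea concrete, here is one standard way to do the on-the-fly conversion, in the spirit of the reply above but using agreement of the integer parts at the two endpoints of the tail's range as the output test. This is a sketch I wrote for this rewrite, not the video's or the commenter's code; it assumes a positive value given as a simple continued fraction:

```python
def cf_digits(terms, ndigits):
    """Emit the integer part followed by decimal digits of the value of
    the simple continued fraction `terms`, Gosper-style: hold the state
    y = (a*x + b)/(c*x + d), where x is the unread tail of the CF, and
    interleave ingesting CF terms with egesting decimal digits.
    Sketch only: assumes a positive value."""
    a, b, c, d = 1, 0, 0, 1          # identity: y = x
    it = iter(terms)
    out = []
    while len(out) < ndigits:
        # The unread tail satisfies x >= 1, so y lies between its
        # values at the endpoints x = 1 and x -> infinity.
        if c != 0 and c + d != 0 and a // c == (a + b) // (c + d):
            q = a // c               # next digit is pinned down
            out.append(q)
            # Egest q: replace y by 10 * (y - q).
            a, b = 10 * (a - q * c), 10 * (b - q * d)
        else:
            t = next(it, None)
            if t is None:            # input exhausted: tail x -> infinity
                b, d = a, c          # collapse both endpoints to a/c
            else:
                # Ingest t: substitute x = t + 1/x'.
                a, b, c, d = a * t + b, a, c * t + d, c
    return out

# [3; 7, 15, 1, 292] is the convergent 103993/33102 of pi:
digits = cf_digits([3, 7, 15, 1, 292], 6)   # [3, 1, 4, 1, 5, 9]
```

The first emitted value is the whole integer part (which can exceed 9); everything after it is one decimal digit per step. The same skeleton works for continued logarithms if the ingestion substitution is swapped for the continued-log ones.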

    • @canaDavid1
      @canaDavid1 2 years ago

      @@allystreetcarsthings8207 Does this also work for outputting decimals?

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago

      @@canaDavid1 I think it should, though again I haven't actually tested it.

  • @becker8192
    @becker8192 1 year ago

    Enjoyable video, but the audio sounds muffled. Potato-quality.

  • @mikewatman5445
    @mikewatman5445 2 years ago +3

    1:02 6:48 8:17 8:56 14:07
    C U B E

  • @fuuryuuSKK
    @fuuryuuSKK 2 years ago +3

    The "only three bits to specify a term" made me think about how one might use a Huffman tree to encode the number using, on average, even fewer bits.
    If the tree is maximally unbalanced, the longest codepoints in a set of eight would have length 7 (example codepoints might be 0, 10, 110, 1110, 11110, 111110, 1111110, 1111111). So we can represent the tree using just three bits for the length of each codepoint, with the tree structure left implicit, since it doesn't actually matter *which* codepoint represents an element. We just go through the at most 7x3 = 21 bits (the eighth codepoint length is redundant too, since at most one spot remains in the tree for the eighth element to fill with its codepoint; similarly, if the tree is filled by any subset of the first six elements, the subsequent zeroes can be omitted), and if an element is present at all (its codepoint has length > 0), assign the smallest available codepoint to it. Thus a very long representation could be stored far more densely, with a bit of added overhead for decoding.
    This all hinges on the fact that usually you would likely not have all eight symbols in your representation, or at least not in approximately equal amounts, and thus would benefit from the space-saving properties of Huffman encoding. If, however, a number would not save the 22 bits necessary to offset the space the "tree" list of seven bit-triplets occupies, you *can* just store it unencoded.
    I do recognize that this almost entirely misses the point of the video, but the video itself assumes prior knowledge of continued fractions, which I mostly lack.
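For what it's worth, the core of this scheme is easy to prototype. The sketch below is my illustration of the comment's idea, not code from the video (`huffman_code` and the sample `stream` are invented here): it builds a per-number Huffman code from the term frequencies of one representation and checks that a skewed distribution indeed drops below the fixed 3 bits per term:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol stream
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, unique tiebreak id, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

# A term stream dominated by one symbol compresses well below 3 bits/term:
stream = [0] * 40 + [1] * 10 + [2] * 5 + [3] * 2 + [4] * 2 + [5] * 1
code = huffman_code(stream)
bits = sum(len(code[s]) for s in stream)     # total encoded length
```

For this skewed example the encoded stream needs well under the 3 * 60 = 180 bits of the fixed encoding, which is the payoff the comment describes; for a near-uniform stream the gain shrinks and the tree-description overhead can make plain 3-bit terms the better choice.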

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  2 years ago

      Interesting idea - although in practice I'd think the amount of space needed to store the representation wouldn't be the limiting factor in computation. Since the arithmetic is pipelined, you'd be sending many small packets of terms, so this approach wouldn't make sense for that. Also, though I haven't actually tested it, at least in the examples Brabec gave, all the terms were used significantly; it was just a matter of chance which ones came up more often for a given representation, even the speculative ones, so I don't know that this would actually be more efficient. Perhaps this would make more sense for storing really, really long representations, but even so it'd need to be lazily reconverted into the regular terms to do arithmetic.

    • @fuuryuuSKK
      @fuuryuuSKK 2 years ago +1

      @@allystreetcarsthings8207 Again, this would be a way to condense the size of a specific saved representation, which is why you build the tree for each number's particular distribution. You also get a runtime that shouldn't be in a worse O() than the linear read/write of the standard version, because you just read the next bit and progress along a state machine of "give this term to the next lazy eval and go to the tree root" and "this is a prefix, continue at this node for the next bit". That means linear runtime, though as linear goes, "read a bit, either return the term or repeat with the next bit" is worse than "read 3 bits, look up". However, saving a number in this Huffman-encoded form does sneak in a subtle logarithmic component in building the tree, with the aggregated counts of each term's frequency. I wonder if this could be countered with a different method where you record the term frequencies in order, plus the relative differences between them; the differences would then be discarded afterwards, and just the order kept for building the tree. That would mean terms close to others in frequency of occurrence give faster runtime, and the absolute number of occurrences wouldn't actually matter for the tree, just their relative positions. Except I am a derp and it does matter, because multiple terms in a subtree add their weights, so you'd need to account for that twice. Dangit.
      But if the tree encoding I detailed originally turns out to be "011011011011011011011", you can just use the regular encoding. This method of saving what is essentially just a fancy octal number is optimized for numbers whose terms have remarkably different frequencies, which you need to store very densely but rarely access.
      It occurs to me that this could be expanded to other bases too: usually you'd just need a single "is compressed" flag bit, then log_2(b)+1 bits for the base (using the next largest base of the form 2^n, represented prefix-free as n=0:='0', n+1:='1'+n), then the n*2^n bits of tree representation, and then the compressed number. This does require the encoded number to save 1+n+n*2^n bits, though, because otherwise it's more efficient to just use the plain encoding.

    • @canaDavid1
      @canaDavid1 2 years ago +3

      This could work for long-term storage, but would be inconvenient for short-term use, as the number of reads per input would not necessarily be the same, significantly slowing down the potential for things like look-ahead.
      In the most pipelined case, where each processing step sends the next 'bit' over a wire connected directly to the next component, it'd be very impractical.

    • @fuuryuuSKK
      @fuuryuuSKK 2 years ago

      @@canaDavid1 Very true, I did intend for this compression to be used in a context where, for some reason, you want to store these numbers (or any other octal numbers, really; it extends fairly easily to other bases too) in a way that prioritizes compactness over ease of access.

  • @johnchessant3012
    @johnchessant3012 2 years ago +3

    very cool

  • @tanchienhao
    @tanchienhao 2 years ago +2

    Interesting topic!

  • @dalembertianblitz
    @dalembertianblitz 1 year ago

    You wouldn't happen to have a link to a PDF of either the paper by Gosper or the one by Brabec, would you? The latter seems to require a university login, but I'd love the opportunity to read them both...

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  1 year ago +1

      Gosper (as a txt; there's no pdf version): www.tweedledum.com/rwg/cfup.htm
      Brabec's paper is unfortunately paywalled, but SciHub exists for a reason :)

    • @dalembertianblitz
      @dalembertianblitz 1 year ago

      @@allystreetcarsthings8207 Thanks so much. I was absolutely fascinated by this video. I had to pause every minute or so to take in the equations but it was illuminating on so many levels.

  • @NUCLEICUS
    @NUCLEICUS 1 year ago +1

    Very interesting work. I just want to know what software you are using for the animation.
    Thank you

    • @allystreetcarsthings8207
      @allystreetcarsthings8207  1 year ago

      I used manim... probably unnecessary; it made creating the video a lot more work than it had to be. In the future I would reserve manim for special animations.

  • @denelson83
    @denelson83 1 month ago

    2:20 - Möbius transformation?

  • @animeshjain8932
    @animeshjain8932 2 years ago +1

    Audio could have been way better in an otherwise great video

  • @monoman4083
    @monoman4083 2 years ago +2

    complicated...

  • @elkinmontoya9640
    @elkinmontoya9640 2 years ago

    Random thought: given the expression for egestion, y = (ax + b)/(cx + d), and its relationship with matrices and quadratic forms, does this have any relation to automorphic functions?

  • @symbolspangaea
    @symbolspangaea 2 years ago +1

    Thank you!

  • @TheKivifreak
    @TheKivifreak 2 years ago

    Super nice! I think you could implement continued fractions with codata and coinduction in Haskell 🤔

  • @kasugaryuichi9767
    @kasugaryuichi9767 2 years ago +1