What Is Big O Notation?

  • Published: Jan 15, 2025

Comments • 359

  • @DanGM123
    @DanGM123 4 years ago +2143

    i love how 3blue1brown has created a new genre

    • @vgarzareyna
      @vgarzareyna 4 years ago +112

      I’ve seen a lot of videos similar to 3b1b’s lately and I love them

    • @evanward5045
      @evanward5045 4 years ago +21

      You commented exactly what I and many others were going to comment

    • @S1lentG4mer
      @S1lentG4mer 4 years ago +11

      I thought this was 3b1b

    • @chrisvinciguerra4128
      @chrisvinciguerra4128 4 years ago +51

      Well he did make the Python library that this video used for its graphics so yeah that’s a reason why it’s similar

    • @markgross9582
      @markgross9582 4 years ago +2

      Yeah. The animation styles seem similar too, so they may be using his library

  • @tiff4839
    @tiff4839 4 years ago +496

    Soo good. You broke down a commonly misunderstood topic in an intuitive and engaging way. I love it!

    • @Reducible
      @Reducible  4 years ago +28

      Thanks Tiffany! I appreciate it!

  • @revowolf7413
    @revowolf7413 4 years ago +160

    I can't understand why YouTube isn't recommending these videos. So elegantly explained, and the animations, OMG!

    • @sifsif2725
      @sifsif2725 4 years ago +4

      It just recommended this to me xD

    • @astphaire
      @astphaire 3 years ago +1

      I mean his enunciation isn’t great

    • @Warpadable
      @Warpadable 2 years ago +2

      Well, everyone watching this had it recommended... you included. Otherwise, how did you find it?
      I agree it is awesome though.

  • @robertpoole9258
    @robertpoole9258 4 years ago +176

    A 3Blue1Brown for computer science. Awesome. This might end up being my favourite channel.

  • @klaik30
    @klaik30 3 years ago +105

    Wow, I actually think 3B1B has created a revolution in how we should think of structuring educational videos. I'm willing to bet that in the near future his visualization program and video structure are going to be used by MANY more people and hopefully even inspire some teachers to use his methods in class.
    Amazing work @Reducible! You should think about doing an "Essence of Computer Science" in the future!

  • @saulbeck3398
    @saulbeck3398 4 years ago +10

    I came to this video with high hopes because you used Manim, yet I was blown away by how detailed and thorough you were. I paused the video so many times, and not only was I told what I was thinking, I was told it was normal to think and ponder on it. I am actually blown away by how you were able to teach something so simple, a basic need for computer scientists, in a way that makes me wonder how tf I didn't think of that!

  • @henryroc1969
    @henryroc1969 4 years ago +44

    Easily some of the best computer science videos on youtube and definitely better than my CS professors :). Thank you for putting in all this work!

  • @TheEndermanMob
    @TheEndermanMob 4 years ago +35

    I've seen many channels using manim. Besides its creator, this is the most original and really good one.
    I'm trying to run my own channel where I explain my area with other students, and I'm having trouble programming the animations, so I need to say thanks for this beautiful work of yours.

  • @caloz.3656
    @caloz.3656 3 years ago +4

    this is the one video that finally settled my confusion with asymptotic notations. THANK YOU SO MUCH. THIS IS INSANELY GOOD AND HIGH-QUALITY!!

  • @mehtubbhai9709
    @mehtubbhai9709 3 years ago +2

    Best explanation of Big O I have come across!

  • @ishikuultra3637
    @ishikuultra3637 3 years ago +2

    This is the best explanation I've ever seen.

  • @strandedinanisland457
    @strandedinanisland457 3 years ago +4

    This is so easy to understand... roughly 2 years ago I tried to gather this information on computational complexity from a big book and ended up understanding very little.

  • @shubhamsingh-xw3tf
    @shubhamsingh-xw3tf 3 years ago +1

    Definitely not forgetting Big O notation for a really long time! Excellent video sir! Thanks

  • @playerscience
    @playerscience 3 years ago +2

    This is hands down the best explanation of big O notation!!!
    Instantly subscribed to your channel.
    👌👌👌😘😘
    BTW your voice is just like 3Blue1Brown's!

  • @garr_inc
    @garr_inc 4 years ago +3

    Never was into computer science, but it feels good to finally understand what the notation actually means after encountering it in my higher-end math classes from time to time. Thank you!

  • @nolanfaught6974
    @nolanfaught6974 3 years ago +132

    A professor once asked the big O of an algorithm, to which I replied "O(n!^(n!))." He had to admit that I was right, then he rephrased the question as "what is the smallest big-O of the algorithm?" Never forget that a big-O is not necessarily the best bound on an algorithm, just one of many bounds

    • @12-343
      @12-343 3 years ago +44

      If the answer to the question is really O(n!^n!), I think whoever wrote it is having a very rough time

    • @Magnogen
      @Magnogen 3 years ago +49

      @@12-343 it's probably my code tbh

    • @wolfranck7038
      @wolfranck7038 2 years ago +4

      If I'm not mistaken, the "smallest big O" is called big theta!

    • @klobiforpresident2254
      @klobiforpresident2254 2 years ago +5

      @@wolfranck7038
      Not quite.
      There are O and Omega. Big O is *an* upper bound. Often we mean the smallest (known) upper bound. Similarly Ω is *a* lower bound. Often we mean the largest (known) lower bound.
      When O and Ω are identical then that is called Θ.

    • @wolfranck7038
      @wolfranck7038 2 years ago +2

      @@klobiforpresident2254 Yeah, but if you think about it, the "smallest" big O will be of the same order as the function concerned, thus also being big omega, and so it will be big theta.
      Because smaller than that would be only big omega, and bigger than that would be only big O.
      (It's clearly not a rigorous explanation, but I can't do much better in a YT comment.)
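The distinction this thread is circling can be written out precisely. These are the standard textbook definitions (not quoted from the video), which match klobiforpresident2254's description:

```latex
% f is asymptotically bounded above by g, up to a constant factor:
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \; f(n) \le c \cdot g(n) \quad \forall n \ge n_0
% f is asymptotically bounded below by g:
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \; f(n) \ge c \cdot g(n) \quad \forall n \ge n_0
% A tight bound: bounded above and below by g:
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

Under these definitions, answering "O(n!^(n!))" really is technically correct for almost any algorithm; Θ is what captures the "smallest big O" the professor was after.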

  • @alexplastow9496
    @alexplastow9496 1 year ago

    Thanks for the time you put into this video; being able to reduce complex things to something I could explain to a high school student is thrilling in a nerdy way.

  • @codehorse8843
    @codehorse8843 4 years ago +3

    Thanks these are lifesavers for my current course.

  • @sharanphadke4954
    @sharanphadke4954 4 years ago +3

    Just put in some thinking pi creatures and this literally becomes a 3Blue1Brown video!!! Really great channel

    • @playerscience
      @playerscience 3 years ago +1

      Exactly!
      He is the 3Blue1Brown of computer science. 😎🔥🔥🔥

  • @dmitriyogureckiy8292
    @dmitriyogureckiy8292 1 year ago

    the best video about O notation, you bring all together, even lim, cool

  • @olanrewajubabalola2322
    @olanrewajubabalola2322 3 years ago

    BEST VIDIEO ON BIG O NOTATION!!! every other video out there just leaves me more confused. thank you.

  • @chrislam1341
    @chrislam1341 3 years ago

    by far it is the best explanation after years of search.

  • @skyslasher6267
    @skyslasher6267 3 years ago

    as a junior in cs right now, in my opinion, this is a must watch for all people trying to get a degree in programming

  • @discreet_boson
    @discreet_boson 4 years ago +4

    It would be an understatement to say this channel is underrated

  • @1DInciner
    @1DInciner 3 years ago +3

    I was expecting several animations of other big O examples, like the exponential one.
    It would be very interesting to see them in comparison.

  • @hiroyukikuwana3105
    @hiroyukikuwana3105 4 years ago +1

    Hope your channel keeps growing!

  • @ClearerThanMud
    @ClearerThanMud 2 years ago +1

    Clearest, most complete Big-O video I've seen. Kudos.
    It looks like you used Grant's manim package to create this. I have been thinking of learning manim so I can create a video giving some insight into why radix sort is so fast. Work doesn't leave me much time for such pleasures, though; you do such a good job with this that, if you are interested, I can privately explain the insight so you could make the video if it appeals to you.

  • @HienNguyenHMN
    @HienNguyenHMN 4 years ago +24

    @14:55 "we can actually use this 'symme-tree' to help us"

  • @lucha6262
    @lucha6262 4 years ago +2

    What a great video! Thanks so much!

  • @NovaWarrior77
    @NovaWarrior77 4 years ago +4

    ABSOLUTELY AWESOME I-

  • @suzalwakhley5329
    @suzalwakhley5329 2 years ago

    A very simple and detailed explanation. Thank you very much

  • @kudzem
    @kudzem 2 years ago +2

    O(2^N) : "Yo, I'm so complex"
    O(N!) : "What was that, punk?"

  • @anwarulbashirshuaib5673
    @anwarulbashirshuaib5673 4 years ago +3

    How come I noticed this masterpiece so late!? Definitely subscribing!

  • @AshutoshKumar-de8wn
    @AshutoshKumar-de8wn 3 years ago

    I love your videos for accuracy and clear content. Please upload videos on different algorithms and shortest paths.

  • @absolutezero6190
    @absolutezero6190 4 years ago +16

    I noticed a small error in your video. You typed two quotation marks in LaTeX at 9:59. They are both ending quotation marks. The way to fix this is to use this markup:
    ``this will be surrounded with proper quotation marks when rendered in \LaTeX’’
    Notice that I used two single quotes at the end (‘’) not a double quote (“).

    • @johnmeyers8542
      @johnmeyers8542 2 years ago +1

      Good catch. Although you missed 'defintiion' spelled with the lesser-used two 'i's. Neither latex nor other rubberised products will help with that one.
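For reference, a minimal LaTeX snippet illustrating the fix described in this thread (the surrounding document is illustrative):

```latex
\documentclass{article}
\begin{document}
``properly quoted text''  % two backticks open, two apostrophes close
"straight double quotes"  % renders as two closing marks, as seen in the video
\end{document}
```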

  • @givrally
    @givrally 3 years ago +3

    Now I want big O of big O notation, to group like growth rates together : polynomials, exponentials, and so on.

  • @kaleabfetene6258
    @kaleabfetene6258 3 years ago +1

    Such a great video, thank you so much. I honestly don't have enough words to express how much I appreciate it.

  • @hisyamzayd
    @hisyamzayd 3 years ago +1

    Love how you find big O with the graph explanations... thank you 😀😀

  • @weirongwu4964
    @weirongwu4964 4 years ago +3

    This masterpiece omg thank you so much!

  • @Bloody_Mary_
    @Bloody_Mary_ 3 years ago

    Beautiful presentation accompanied with fantastic music of enlightenment!

  • @jayantverma6196
    @jayantverma6196 4 years ago +2

    You are amazing, please upload more videos.

  • @BeeshoStudying
    @BeeshoStudying 2 years ago

    you have the best explanation. thanks for the video

  • @dehilihind2916
    @dehilihind2916 2 years ago

    Thanks for your efforts! I just discovered your channel and it's INCREDIBLY helpful!

  • @hotpushupguy4203
    @hotpushupguy4203 3 years ago +1

    These are so wonderful - thank you!

  • @junior3082
    @junior3082 1 month ago

    Brilliant video. Thank you.

  • @marpin6162
    @marpin6162 4 years ago +178

    who came from 3blue1brown’s post?

  • @isaacfernandez2243
    @isaacfernandez2243 2 years ago

    Wow, this was incredibly good. Thank you.

  • @isabellelindblad2835
    @isabellelindblad2835 3 years ago +4

    This was amaaaazing

  • @samuelatienzo4627
    @samuelatienzo4627 3 years ago

    Jeez, I learned more here in 15 mins than in a few weeks of my statistics and numerical methods class...

  • @КузьменкоІгор-к6ы
    @КузьменкоІгор-к6ы 4 years ago +1

    At 15:53, the table reads 1->1, 2->2, 3->4, 4->8, 5->16; at 16:15, if n=k, Count = 2^(k-1) = 2^k/2

  • @erv993
    @erv993 4 years ago +1

    Thanks so much! You have invaluable content!

  • @ronaldboulder308
    @ronaldboulder308 3 years ago +3

    I remember this topic being explained in algorithm courses very similarly.

  • @FRANKFIFORM
    @FRANKFIFORM 2 years ago

    Great video!! I've always had doubts about this topic.

  • @joaquingutierrez3072
    @joaquingutierrez3072 4 years ago +1

    I loved this video. Thank you !

  • @tucan1309
    @tucan1309 4 years ago

    I didn't come from 3b1b but I'm surprised at the quality of these videos

  • @rjmbowie
    @rjmbowie 4 years ago

    Amazing video! Love the use of manim.

  • @ccdavis94303
    @ccdavis94303 4 years ago +12

    O(?) is deep and important in understanding processes in general, not just computing. Social structures scale in scary ways. Some stuff works great at dining room table scale (N ~ 6), is still functional at the scale of a monastery (N ~ 100), but when it scales to small countries (N ~ 20MM), let alone large countries (N ~ 300MM) or huge countries (N ~ 1 billion), it hits the fan. Central planning, for example.
    The Soviets set up an office to make sure everything was fairly priced relative to other stuff. Easy, right? But there were N ~ 25MM items in the economy. So relative fairness is N^2, but everything is fair compared to itself, so N^2 - N. Further, comparison is sort of symmetric, so (N^2 - N)/2. Big improvement, but when N = 25MM, O(fair) ~ 312 trillion. Poor apparatchiks. (Tom Sowell cited this case.)
    { If a is fair relative to b and b is fair relative to c, is a fair relative to c? How much would this improve things? }

    • @jursamaj
      @jursamaj 2 years ago

      That argument is obviously fallacious. You don't need to compare the price of every type of watch to every type of car, every type of bread, etc. The watches only need to be compared amongst themselves, and to some baseline.
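The arithmetic in the parent comment checks out; a quick sketch (N is the item count quoted above):

```python
# Unordered pairwise comparisons among N items:
# all ordered pairs (N^2), minus self-comparisons (N), halved by symmetry.
N = 25_000_000
comparisons = (N * N - N) // 2
print(comparisons)  # 312_499_987_500_000 -- roughly 312 trillion
```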

  • @eamonnsiocain6454
    @eamonnsiocain6454 1 year ago

    Well presented! Thank you.

  • @manamsetty2664
    @manamsetty2664 3 years ago +1

    The Man. The Myth. The Teacher.

  • @snoopy1alpha
    @snoopy1alpha 2 years ago

    Very well explained! I guess I will quote this video in my next complexity discussion at work :-D

  • @timhansen5468
    @timhansen5468 5 months ago

    Thank you for saving my discrete math grade

  • @Xxnightwalk1
    @Xxnightwalk1 3 years ago

    Really instructive, as always
    Keep up the amazing work ;)

  • @danser_theplayer01
    @danser_theplayer01 1 year ago +1

    You missed an O(√n) class, which can also be written as O(n^0.5). There are algorithms where, in the worst-case scenario, the algorithm will take at most √n "operations" to finish, for any given n; it doesn't matter if it's a thousand n or 95 billion billion n.
    (Technically there's also one with √(n)/2, but they're in the same scaling class.)
    Since nobody seems to be mentioning it anywhere, I've decided to call it the "rooted" running time class.
    And the use case isn't that uncommon; it's meant to substitute for a 2d array to trade off lookup time against common insertions/deletions.
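A well-known member of this class is trial-division primality testing, which only checks divisors up to √n (a sketch, not an example from the video):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: at most about sqrt(n) loop iterations, so O(sqrt(n))."""
    if n < 2:
        return False
    # Any divisor larger than sqrt(n) pairs with one smaller than sqrt(n),
    # so checking up to isqrt(n) is enough.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```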

  • @_maus
    @_maus 2 years ago

    Thank you so much and I really appreciate the video.

  • @nomi98
    @nomi98 2 years ago +1

    Glad my teachers taught me well lol.

  • @jaumm84
    @jaumm84 4 years ago

    Thanks! I finally understood it!

  • @hamzaich7034
    @hamzaich7034 4 years ago +2

    Maaaaan this is amazing ♥️♥️♥️

  • @the_cheese_cultist
    @the_cheese_cultist 2 years ago +3

    you probably should've added O(sqrt n) and O(n!) to your "common running times" list
    since they do appear quite often as well

  • @berkay453
    @berkay453 2 months ago

    It's a great and very clear explanation, thx bro

  • @tal3541
    @tal3541 4 years ago +6

    There's an inaccuracy at 11:30. If f = O(g), it doesn't necessarily mean that the limit of g(n)/f(n) exists. For example, take g(n) = n and f(n) alternating between n and 2n (for evens and odds). The mathematically correct way to put it is that the liminf of g(n)/f(n) is equal to some C which is greater than 0.

    • @willnewman9783
      @willnewman9783 3 years ago

      Good point, but I still think this is wrong, at least according to his definition.
      By what he says, n should be O(n^2), because n0.

  • @joaofrancisco8864
    @joaofrancisco8864 4 years ago +2

    So good!

  • @JonathanMandrake
    @JonathanMandrake 3 years ago +2

    As a mathematician, I also have to say that it is easier to find the number of operations if you state exactly what the program does and condense all the parts that don't really matter to the runtime. For example, for this algorithm:
    define f(n):
    f(1)=1
    f(n)=f(n-1)+f(n-1)
    we have 1+2+4+8+...+2^(n-1) = 2^n - 1 calls, thus O(2^n) calls, as well as around one addition per call (exactly 2^(n-1) - 1, which is O(2^n)).
    One simple step makes this a linear problem:
    define f(n):
    f(1)=1
    f(n)=2*f(n-1)
    Then we have n-1 multiplications (O(n)) and n calls of f (also O(n)).
    But at this point, we can see that there is a much simpler definition:
    define f(n):
    f(n)=2^(n-1)
    Here we have n-2 multiplications and 1 call of f, thus O(n) multiplications and O(1) calls of the function, making this algorithm much better. And we also have to consider what range of numbers we want to use. It may be great to use a highly complex multiplication algorithm with an extremely good runtime O(f(n)), but if we only use numbers less than 10^10, it might still be much faster to use the algorithm with the worse O(f(n)) runtime, simply because n is not large enough in this practical application.

    • @geradoko
      @geradoko 2 years ago

      I'm not sure, but I think you missed the point. The subject of the video is to explain what O(f(n)) means, not how we can build a better algorithm for a given problem. In this sense, the example is very well chosen, since it is very clear that it's not the best solution to the problem.
      But there is still one thing in your answer which reminded me of times when processors could only add and multiply (in fact they could only shift bytewise :), and that is that an algorithm of class O(n) which calls an exponential function might not be faster than an algorithm of class O(2^n) which uses only addition. But I don't think these questions are still important for coders nowadays.

    • @JonathanMandrake
      @JonathanMandrake 2 years ago +1

      @@geradoko I do understand the point quite well actually, but I was trying to note ways to find out what the actual order of runtime is (i.e. condensing unnecessary code, which also helps with cleaning up and improving the code) and why runtime isn't everything, because often enough the n aren't large enough.

    • @zapazap
      @zapazap 2 years ago

      Which illustrates that (by definition) complexity here describes not the problem but the algorithm used to solve it.

    • @JonathanMandrake
      @JonathanMandrake 2 years ago

      @@zapazap Well, yes and no. If there is an algorithm which has optimal order of operations, then we can say that that problem also has that order. For example, if calculating the n-th term of some specific recursive series is at its best O(n), then we can say that that series has a computational effort of O(n)

    • @zapazap
      @zapazap 2 years ago +1

      @@JonathanMandrake Yes, you are correct. The very question of P=NP involves problem classes and not merely particular algorithms. Good call.
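The three definitions discussed in this thread can be written out as runnable Python (a sketch of the commenter's pseudocode; function names are mine):

```python
def f_slow(n):
    # f(1) = 1, f(n) = f(n-1) + f(n-1): O(2^n) calls.
    if n == 1:
        return 1
    return f_slow(n - 1) + f_slow(n - 1)

def f_linear(n):
    # f(1) = 1, f(n) = 2 * f(n-1): O(n) calls.
    if n == 1:
        return 1
    return 2 * f_linear(n - 1)

def f_closed(n):
    # Closed form f(n) = 2^(n-1): O(1) calls.
    return 2 ** (n - 1)
```

All three compute the same function (for example, each gives 16 for n = 5), but the number of calls scales very differently.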

  • @herlusz
    @herlusz 4 years ago +1

    The second definition is quite understandable

  • @Pewpewpew230
    @Pewpewpew230 2 years ago

    Great work, thanks!

  • @mychannel-te5ke
    @mychannel-te5ke 4 years ago +14

    11:45 It's not really true that there must be such a limit. It may not exist. For example, g(n) / f(n) may be 1 for even n's and 2 for odd n's. So there's no limit in this case.
    Even more: for f(n) = n and g(n) = n^2 it's true that n = O(n^2), but g(n) / f(n) = n has no limit.

    • @diegocfq
      @diegocfq 4 years ago +2

      Yeah, it's supposed to be lim sup instead of just lim, and an absolute value operator should be applied to the numerator of that fraction.

    • @DavidPysnik
      @DavidPysnik 3 years ago +1

      How would lim sup help Сергей Обритаев's example? If f(n) = n and g(n) = n^2, the limit as n goes to infinity of g(n)/f(n) would not exist because it goes to infinity, so the lim sup of this expression would also not exist. Even with lim sup in the definition shown at 11:45, it doesn't allow n = O(n^2) as the original definition does, so Сергей Обритаев's criticism does not seem fixed.

    • @johnmcleodvii
      @johnmcleodvii 2 years ago +1

      O notation does not require that the function have a strict limit in the sense of calculus. If odd inputs give one growth in n and even inputs give some other growth in n, the O notation would be the faster-growing one. So in the example where odd numbers had linear growth and even numbers had n² growth, the O notation would be O(n²).
      Off the top of my head, I can think of no algorithms that have significantly different behaviors for even and odd n.

  • @sbsyr5555
    @sbsyr5555 3 years ago

    Nicely explained...

  • @shimavalipour5992
    @shimavalipour5992 3 years ago

    u explained it sooooo goooood, thanks

  • @matthewcrunk4165
    @matthewcrunk4165 3 years ago

    Oh, I thought this was 3blue1brown but with an odd accent.
    Great video; might I recommend making your visual style a tiny bit more distinct?
    Honestly, it's great to have something that makes your channel stand out a bit.
    I think you even picked the same font, unless I'm mistaken.

  • @Adomas_B
    @Adomas_B 4 years ago +15

    I keep reading the text in 3blue1brown's voice

  • @Starwort
    @Starwort 3 years ago

    That last example emphasises the importance of caching - caching would reduce the runtime to O(n) worst-case and O(1) best-case (of course, it could also be reduced to O(n) simply by multiplying the recursive result by 2)
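The caching idea above, sketched with Python's built-in memoization decorator (applied to the recursive definition discussed in the comments, not the video's exact code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # f(1) = 1, f(n) = f(n-1) + f(n-1). With the cache, each distinct n
    # is computed only once, so a top-level call does O(n) work, not O(2^n);
    # repeated calls with a cached n are O(1).
    if n == 1:
        return 1
    return f(n - 1) + f(n - 1)
```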

  • @anasasim3856
    @anasasim3856 3 years ago

    I want to give you a hug bro!

  • @mikesolo7993
    @mikesolo7993 3 years ago

    I bring up BigO at work and people's eyes glaze over. I've stopped trying to explain, but damn with this video I think anyone could understand it!

  • @diegovasquezrevilla
    @diegovasquezrevilla 4 years ago

    Great video, keep up the good work

  • @georgeharrisonOK
    @georgeharrisonOK 4 years ago

    Good video, keep up the good work!!

  • @CheckmatesSpeedruns
    @CheckmatesSpeedruns 3 years ago +6

    For the problem at the start, there is a better solution:
    for a in range(n + 1):
        for b in range(n - a + 1):
            c = n - a - b
            print(a, b, c)
    Efficiency: O(n²)
    It is still faster than Alice's solution, because instead of (n+1)² operations, this one takes (n+1)(n+2)/2, or binomial(n+2, 2), operations.

  • @Texa8
    @Texa8 3 months ago +1

    10:00 is still not clear to me. What is g(n), and why do we even need this weird definition...

  • @nadonadia2521
    @nadonadia2521 2 years ago

    Great video, I love the topics and the way they have been presented; my teacher doesn't have your skills. Please make more videos on algorithms and programming ideas. Thank you, I have subscribed to the channel.

  • @abdelbassetlabbi852
    @abdelbassetlabbi852 4 years ago

    Incredible! Just continue bro

  • @cross_roadz
    @cross_roadz 4 years ago +1

    If only I saw this before my discrete mathematics exam

  • @brianherring3776
    @brianherring3776 4 years ago +1

    Well done! What tool(s) are you using to create this content?

    • @askii2004
      @askii2004 4 years ago

      It's linked in the description! It utilizes manim, which was developed by Grant Sanderson (3Blue1Brown).

  • @legendgames128
    @legendgames128 2 years ago

    12:20 I argue that you can cram extra items between constant and logarithmic, by considering the inverse of the Ackermann function. n=3 would equal the cube root of 3. n=2 would be 1 (2/2) and n=1 would be 0 (1-1)

  • @mileslyfe5239
    @mileslyfe5239 4 years ago +3

    Is it “O”kay to think about “big O” as the end behavior of the function which represents the number of operations needed for an algorithm to execute?

    • @johnmcleodvii
      @johnmcleodvii 2 years ago

      It's a rough approximation of the number of operations to execute relative to the "size" of the input.
      What the size of the input is depends on the algorithm. In some cases it is the value of a number, in others it is the count of items. For factorization it is the number to be factored. For sorting, it is the count of items.

  • @darkelwin02
    @darkelwin02 4 years ago

    Hey, good video explaining it. I think including the factorial class would have been useful. While you don't see those outside graph problems often, when making an algorithm on your own, recognizing that you should stay away from that class is practical.

  • @HamzaQayyum
    @HamzaQayyum 1 year ago

    Error at 11:37. It should be lim[n->infinity] of f(n)/g(n), not g(n)/f(n).

  • @dwolrdcojp
    @dwolrdcojp 3 years ago

    I love this channel

  • @kakizakichannel
    @kakizakichannel 2 years ago

    It's showtime!

  • @doBobro
    @doBobro 4 years ago +3

    You've got strange timings for O(n^2) and O(n^3). For Python code there is a clear n^2 and n^3 relation:
    For twice the data, count_bob's time goes up x8:
    165us count_bob(10)
    1.02ms count_bob(20)
    6.91ms count_bob(40)
    51ms count_bob(80)
    For twice the data, count_alice's time goes up x4:
    18us count_alice(10)
    60.4us count_alice(20)
    221us count_alice(40)
    848us count_alice(80)

    • @indian_otaku2388
      @indian_otaku2388 2 years ago

      Isn't it machine dependent?

    • @doBobro
      @doBobro 2 years ago

      @@indian_otaku2388 To some degree, yes, but there is a clear error in the measurements.
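Scaling measurements like those above can be reproduced with `timeit`; `count_bob` and `count_alice` here are hypothetical reconstructions of the two solutions from the video (triple loop vs. double loop), not its exact code:

```python
import timeit

def count_bob(n):
    # Try every (a, b, c) triple and test the sum -- O(n^3).
    count = 0
    for a in range(n + 1):
        for b in range(n + 1):
            for c in range(n + 1):
                if a + b + c == n:
                    count += 1
    return count

def count_alice(n):
    # Fix a and b; c = n - a - b is then forced -- O(n^2).
    count = 0
    for a in range(n + 1):
        for b in range(n + 1 - a):
            count += 1
    return count

for n in (10, 20, 40, 80):
    t = timeit.timeit(lambda: count_bob(n), number=10)
    print(f"count_bob({n:>2}): {t:.5f}s")  # time should grow roughly x8 per doubling of n
```

On an O(n³) algorithm, doubling n multiplies the work by 2³ = 8; on an O(n²) one, by 2² = 4, which is exactly the pattern in the timings quoted above.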

  • @moonst6020
    @moonst6020 3 years ago

    I am here to cry over my upcoming tests🙂

  • @MuradBeybalaev
    @MuradBeybalaev 4 years ago +3

    Your passing explanation of machine dependence is incorrect.
    It's not about a supercomputer being faster. That wouldn't affect the plot shape.
    It's that different machines can have varying instruction execution costs AND we generally speak of multitasking execution environments that can dynamically tailor performance mid-execution.

  • @nicoles_handle
    @nicoles_handle 4 years ago +3

    3:47 can anyone explain why he says we gotta wait hours just for 1 data point? what data point is he referring to?

    • @Reducible
      @Reducible  4 years ago +11

      What I meant there is just an overall critique of the idea of using actual time in seconds as way to measure efficiency of an algorithm. If you wanted to get a sense for comparing algorithms based on time it takes to run for various inputs, for large inputs, you would literally have to wait hours to get a "data point" for comparison -- e.g in the Bob vs Alice situation for n = 10^6 we had to wait more than half an hour to get the answer for how long it ran. There are other algorithms with even worse running times where it may take several hours for it to finish running. In practice, this just doesn't make much sense to do and that's the point I was trying to make in that comment. Apologies for any confusion in that section.

    • @nicoles_handle
      @nicoles_handle 4 years ago +4

      @@Reducible no don't be sorry i'm just relatively new to all this, thanks a lot!!

    • @johnmcleodvii
      @johnmcleodvii 2 years ago

      For example, if the algorithm is O(n!), anything with n > 15 or so is going to take a long time. You don't ever want to feed an O(n!) algorithm anything as large as 60, since you would never finish. The traveling salesman problem (think delivery truck routing) is one problem where exact brute-force solutions are O(n!). However, rumor has it that UPS has come up with a clever workaround. Draw circles on the map that include exactly 5 nodes. Draw circles on the map that include exactly 5 of those circles. Draw circles that include exactly 5 of the larger circles. These largest circles are about right for a day's route. Route each of the smallest circles, then route the middle circles. While it may not be perfectly optimal, it's good enough, and it can be calculated quickly. The perfect route for 125 stops would take the rest of the life of the universe...