i love how 3blue1brown has created a new genre
I’ve seen a lot of videos similar to 3b1b’s lately and I love them
You commented exactly what I and many others were going to comment
I thought this was 3b1b
Well he did make the Python library that this video used for its graphics so yeah that’s a reason why it’s similar
Yeah. The animation styles seem similar too, so they may be using his library
Soo good. You broke down a commonly misunderstood topic in an intuitive and engaging way. I love it!
Thanks Tiffany! I appreciate it!
I can't understand how YouTube isn't recommending these videos. So elegantly explained, and the animations, OMG!
It just recommended this to me xD
I mean his enunciation isn’t great
Well, everyone watching this had it recommended... you included. Otherwise how did you find it?
I agree it is awesome though.
A 3Blue1Brown for computer science. Awesome. This might end up being my favourite channel.
Wow, I actually think 3B1B has created a revolution in how we should think of structuring educational videos. I'm willing to bet that in the near future his visualization program and video structure are going to be used by MANY more people and hopefully even inspire some teachers to use his methods in class.
Amazing work @Reducible! You should think about doing an "Essence of Computer Science" in the future!
I came to this video with high hopes since you used Manim, yet I was still blown away by how detailed and thorough you were. I paused the video so many times, and not only was I told what I was thinking, I was told it was normal to stop and ponder it. I am genuinely blown away by how you were able to teach something so simple and so fundamental for computer scientists in a way that makes me wonder how tf I didn't think of that!
Easily some of the best computer science videos on youtube and definitely better than my CS professors :). Thank you for putting in all this work!
I've seen many channels using manim. Besides its creator's, this is the most original and best one.
I'm trying to run my own channel where I explain my field to other students, and I'm having trouble programming the animations, so I need to say thanks for this beautiful work of yours.
this is the one video that finally settled my confusion with asymptotic notations. THANK YOU SO MUCH. THIS IS INSANELY GOOD AND HIGH-QUALITY!!
Best explanation of Big O I have come across!
This is the best explanation I've ever seen.
This is so easy to understand... roughly 2 years ago I tried to gather this information on computational complexity from a big book and ended up understanding very little.
Definitely not forgetting Big O notation for a really long time! Excellent video sir! Thanks
This is hands down the best explanation of big O notation!!!
Instantly subscribed to your channel.
👌👌👌😘😘
BTW your voice is just like 3blue 1brown!
Never was into computer science, but it feels good to finally understand what the notation actually means after encountering it in my higher-end math classes from time to time. Thank you!
A professor once asked the big O of an algorithm, to which I replied "O(n!^(n!))." He had to admit that I was right, then he rephrased the question as "what is the smallest big-O of the algorithm?" Never forget that a big-O is not necessarily the best bound on an algorithm, just one of many bounds
If the answer to the question is really O(n!^n!), I think whoever wrote it is having a very rough time
@@12-343 it's probably my code tbh
If I'm not mistaken, the "smallest big O" is called big theta !
@@wolfranck7038
Not quite.
There are O and Omega. Big O is *an* upper bound. Often we mean the smallest (known) upper bound. Similarly Ω is *a* lower bound. Often we mean the largest (known) lower bound.
When O and Ω are identical then that is called Θ.
@@klobiforpresident2254 yeah but if you think about it, the "smallest" big O will be of the same order as the function concerned, thus also being big omega, and it will be in big theta.
Because anything smaller than that would be only big omega and anything bigger than that would be only big O.
(It's clearly not a rigorous explanation but I can't do much better in a YT comment)
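For readers following this thread, the standard textbook definitions behind the O / Ω / Θ distinction (the usual formal statements, not taken from the video) are:

f(n) = O(g(n)) \iff \exists\, C > 0,\ n_0 :\ f(n) \le C \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 :\ f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))

So the "smallest" big O in the sense above, when it exists, is exactly the Θ class of the function.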
Thanks for the time you put into this video, being able to reduce complex things to something I could explain to a highschool student is thrilling in a nerdy way
Thanks these are lifesavers for my current course.
just put some thinking pi's and this literally becomes 3blue 1brown video!!! Really great channel
Exactly!
He is the 3blue 1brown of Computer science. 😎🔥🔥🔥
the best video about O notation, you bring all together, even lim, cool
BEST VIDEO ON BIG O NOTATION!!! every other video out there just leaves me more confused. thank you.
by far it is the best explanation after years of search.
as a junior in cs right now, in my opinion, this is a must watch for all people trying to get a degree in programming
It would be an understatement to say this channel is underrated
Was expecting several animations about other big O examples, like the exponential one.
It is very interesting to see them in comparison.
Hope your channel keeps growing!
Clearest, most complete Big-O video I've seen. Kudos.
It looks like you used Grant's manim package to create this. I have been thinking of learning manim so I can create a video giving some insight into why the radix sort is so fast. Work doesn't leave me much time for such pleasures, though; you do such a good job with this, if you are interested I can privately explain the insight so you could make the video if it appeals to you.
@14:55 "we can actually use this 'symme-tree' to help us"
What a great video! Thanks so much!
ABSOLUTELY AWESOME I-
A very simple and detailed explanation. Thank you very much
O(2^N) : "Yo, I'm so complex"
O(N!) : "What was that, punk?"
How come I noticed this masterpiece so late!? Definitely subscribing!
I love your videos for accuracy and clear content. Please upload videos on different algorithms and shortest paths.
I noticed a small error in your video. You typed two quotation marks in LaTeX at 9:59. They are both ending quotation marks. The way to fix this is to use this markup:
``this will be surrounded with proper quotation marks when rendered in \LaTeX''
Notice that I used two single quotes at the end ('') not a double quote (").
Good catch. Although you missed 'defintiion' spelled with the lesser-used two 'i's. Neither latex nor other rubberised products will help with that one.
Now I want big O of big O notation, to group like growth rates together : polynomials, exponentials, and so on.
Such a great video, thank you so much. I honestly don't have enough words to thank you, I appreciate it.
Love the graph explanations of how to find big O... thank you 😀😀
This masterpiece omg thank you so much!
Beautiful presentation accompanied with fantastic music of enlightenment!
You are amazing, please upload more videos.
you have the best explanation. thanks for the video
thanks for your efforts , I just discovered your channel and it's INCREDIBLY helpful !
These are so wonderful - thank you !
Brilliant video. Thank you.
who came from 3blue1brown’s post?
Me
Which one?
Me
Wow, this was incredibly good. Thank you.
This was amaaaazing
Jeez, I learned more here in 15 mins than in a few weeks of my statistics and numerical methods class...
At 15:53 the table shows 1->1, 2->2, 3->4, 4->8, 5->16; at 16:15, if n = k, Count = 2^(k-1) = 2^k / 2
Thanks so much! You have invaluable content!
I remember this topic being explained in algorithm courses very similarly.
Great video!! I’ve always have doubts about this topic.
I loved this video. Thank you !
i didn't come from 3b1b but i'm surprised at the quality of these videos
Amazing video! Love the use of manim.
O(?) is deep and important in understanding processes in general, not just computing. Social structures scale in scary ways. Some stuff works great at dining room table scale (N~6), is still functional at the scale of a monastery (N ~ 100) but when they scale to small countries ( N ~ 20MM) let alone large countries ( N ~ 300 MM ) or huge countries ( N ~ 1 Billion) it hits the fan. Central planning for example.
The Soviets set up an office to make sure everything was fairly priced relative to other stuff. Easy, right? But there were N ~ 25 MM items in the economy. So relative fairness is N^2 comparisons, but everything is fair compared to itself, so N^2 - N. Further, comparison is symmetric, so (N^2 - N)/2. A big improvement, but when N = 25 MM that's still ~312 trillion comparisons. Poor apparatchiks. (Tom Sowell cited this case)
{ if a is fair relative to b and b is fair relative to c, is a fair relative to c? How much would this improve things?}
That argument is obviously fallacious. You don't need to compare the price of every type of watch to every type of car, every type of bread, etc. The watches only need to be compared amongst themselves, and to some baseline.
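For what it's worth, the arithmetic in the original comment does check out, whatever one thinks of the premise; a quick sketch in Python (the 25 million figure is the comment's own estimate):

N = 25_000_000               # items in the economy, per the comment above
pairs = (N * N - N) // 2     # unordered pairs to compare for "relative fairness"
print(pairs)                 # 312499987500000, i.e. roughly 312 trillion comparisons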
Well presented! Thank you.
The Man.The Myth.The Teacher
Very well explained! I guess I will quote this video in my next complexity discussion at work :-D
Thank you for saving my discrete math grade
Really instructive, as always
Keep up the amazing work ;)
You missed an O(√n) class, which can also be written as O(n^0.5). There are algorithms where, in the worst case, the algorithm takes at most about √n "operations" to finish for any given n; it doesn't matter if n is a thousand or 95 billion billion.
(Technically there's also one with √(n)/2, but they're in the same scaling class.)
Since nobody seems to mention it anywhere, I've decided to call it the "rooted" running time class.
And the use case isn't that uncommon: it can substitute for a 2D array, trading off lookup time for cheaper insertions/deletions.
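One common illustration of this class (my own example, not something from the video) is trial-division primality testing, which needs at most about √n divisions:

import math

def is_prime(n):
    # Trial division: any composite n has a divisor no larger than sqrt(n),
    # so checking candidates up to isqrt(n) is O(sqrt(n)) divisions at worst.
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(7919), is_prime(7918))  # True False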
Thank you so much and I really appreciate the video.
Glad my teachers taught me well lol.
Thanks! I finally understood it!
Maaaaan this is amazing ♥️♥️♥️
you probably should've added O(sqrt n) and O(n!) to your "common running times" list
since they do appear quite often as well
it's a great and very clear explanation, thx bro
There's an inaccuracy at 11:30. If f = O(g) it doesn't necessarily mean that the limit of g(n)/f(n) exists. For example, take g(n) = n and f(n) alternating between n and 2n (for evens and odds). The mathematically correct way to put it is that the liminf of g(n)/f(n) is equal to some C greater than 0.
Good point, but I still think this is wrong, at least according to his definition.
By what he says, n should be O(n^2), because n grows no faster than n^2 beyond some n0.
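To spell out the correction above in standard notation (the usual analysis fact, not the video's on-screen definition): when the plain limit need not exist, the characterization has to use limsup/liminf. For positive f and g,

f(n) = O(g(n)) \iff \limsup_{n \to \infty} \frac{f(n)}{g(n)} < \infty \iff \liminf_{n \to \infty} \frac{g(n)}{f(n)} > 0

The alternating example above satisfies this: f(n)/g(n) bounces between 1 and 2, so the limsup is 2 and f = O(g) even though the limit never settles.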
So good!
As a mathematician, I also have to say that it is easier to find the number of operations made if you try to state exactly what the program does and condense all the parts that don't really matter to the runtime. For example, for this algorithm:
def f(n):
    if n == 1:
        return 1
    return f(n - 1) + f(n - 1)
Thus, we have 1+2+4+8+...+2^(n-1)=2^n -1 calls, thus O(2^n) calls, as well as around one addition per call, exactly 2^(n-1) -1, being O(2^n).
One simple step makes this a linear problem:
def f(n):
    if n == 1:
        return 1
    return 2 * f(n - 1)
Then, we have n-1 multiplications (being O(n)) and n calls of f (also O(n)).
But at this point, we can see, that this is a much simpler definition:
def f(n):
    return 2 ** n
Here, we have n-1 multiplications and 1 call of f, thus O(n) multiplications and O(1) calls of the function, making this algorithm much better. We also have to consider what range of numbers we want to use: it may be great to use a highly complex multiplication algorithm with an extremely good runtime O(f(n)), but if we only use numbers less than 10^10, it might still be much faster to use the algorithm with the worse O(f(n)) runtime, simply because n is not large enough in this practical application.
I'm not sure but I think you missed the point. The subject of the video is to explain what O(f(n)) means, not how we can build a better algorithm for a given problem. In this sense, the example is very well-chosen since it is very clear that it's not the best solution to the problem.
But there is still one thing in your answer that reminded me of times when processors could only add and multiply (in fact they could only shift bytewise :), and that is that an algorithm of class O(n) which calls an exponential function might not be faster than an algorithm of class O(2^n) which uses only addition. But I don't think these questions are still important for coders nowadays.
@@geradoko I do understand the point quite well actually, but I was trying to note ways to find out what the actual order of runtime is (i.e., condensing unnecessary code, which also helps with cleaning up and improving the code) and why runtime isn't everything, because often enough the n aren't large enough.
Which illustrates that (by definition) complexity here describes not the problem but the algorithm used to solve it.
@@zapazap Well, yes and no. If there is an algorithm which has optimal order of operations, then we can say that that problem also has that order. For example, if calculating the n-th term of some specific recursive series is at its best O(n), then we can say that that series has a computational effort of O(n)
@@JonathanMandrake Yes, you are correct. The very question of P=NP involves problem classes and not merely particular algorithms. Good call.
The second definition is quite understandable
Great work, thanks!
11:45 It's not really true that there should be such a limit. It may not exist. For example, g(n)/f(n) may be 1 for even n's and 2 for odd n's. So there's no limit in this case.
Moreover, for f(n) = n and g(n) = n^2 it's true that n = O(n^2), but g(n)/f(n) = n has no limit.
Yeah, it's supposed to be lim sup instead of just lim and an absolute value operator should be applied on the numerator of that fraction.
How would lim sup help Сергей Обритаев's example? If f(n) = n and g(n) = n^2, the limit as n goes to infinity of g(n)/f(n) does not exist because it goes to infinity, so the lim sup of this expression also does not exist. Even with lim sup in that definition shown at 11:45, it doesn't allow n = O(n^2) as the original definition does, so Сергей Обритаев's criticism does not seem fixed.
O notation does not require that the function has a strict limit in the sense that calculus does. If odd inputs give one growth in n and even inputs give some other growth in n, the O notation would be for the faster-growing one. So in the example where odd numbers had linear growth and even numbers had n² growth, the O notation would be O(n²).
Off the top of my head, I can think of no algorithms that have significantly different behaviors for even and odd n.
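A tiny, purely hypothetical sketch of that point: a function that does linear work on odd inputs and quadratic work on even inputs is still O(n²), because the bound has to cover the worse branch.

def parity_work(n):
    # Hypothetical toy: O(n) work for odd n, O(n^2) work for even n.
    # Any single big-O bound must cover the worst branch, so this is O(n^2).
    steps = n if n % 2 == 1 else n * n
    total = 0
    for _ in range(steps):
        total += 1
    return total

print(parity_work(7), parity_work(8))  # 7 64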
Nicely explained...
u explained it sooooo goooood, thanks
Oh I thought this was 3blue1brown but with an odd accent.
Great video. Might I recommend making your visual style a tiny bit more distinct?
Honestly it's great to have a thing that makes your channel stand out a bit.
I think you even picked the same font, unless I'm mistaken.
I keep reading the text in 3blue1brown's voice
ikr!!
That last example emphasises the importance of caching - caching would reduce the runtime to O(n) worst-case and O(1) best-case (of course, it could also be reduced to O(n) simply by multiplying the recursive result by 2)
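A minimal sketch of that caching idea, assuming the video's last example is the doubly-recursive function described elsewhere in these comments (f(1) = 1, f(n) = f(n-1) + f(n-1)); with memoization each f(k) is computed once, so the call count drops from O(2^n) to O(n):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # Same recurrence as the exponential example, but results are cached,
    # so the recursion tree collapses into a single chain of n calls.
    if n == 1:
        return 1
    return f(n - 1) + f(n - 1)

print(f(50))  # 562949953421312, i.e. 2**49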
I want to give you a hug bro!
I bring up BigO at work and people's eyes glaze over. I've stopped trying to explain, but damn with this video I think anyone could understand it!
Great video keep up the good work
Good video, keep up the good work!!
For the problem at the start, there is a better solution:
for a in range(n + 1):
    for b in range(n - a + 1):
        c = n - a - b
        print(a, b, c)
Efficiency: O(n²)
It is still faster than Alice's solution, because instead of (n+1)² operations, this one takes (n+1)(n+2)/2, or binomial(n+2, 2), operations.
10:00 is still not clear to me. What is g(n) and why do we even need this weird definition..........
Great video, I love the topics and the way they have been presented; my teachers don't have your skills. Please make more videos on running algorithms and programming ideas. Thank you, I have subscribed to the channel.
incredible! just continue bro
If only I saw this before my discrete mathematics exam
Well done! What tool(s) are you using to create this content?
It's linked in the description! It utilizes manim, which was developed by Grant Sanderson (3Blue1Brown).
12:20 I argue that you can cram extra items between constant and logarithmic, by considering the inverse of the Ackermann function. n=3 would equal the cube root of 3. n=2 would be 1 (2/2) and n=1 would be 0 (1-1)
Is it “O”kay to think about “big O” as the end behavior of the function which represents the number of operations needed for an algorithm to execute?
It's a rough approximation of the number of operations to execute relative to the "size" of the input.
What the size of the input is depends on the algorithm. In some cases it is the value of a number, in others it is the count of items. For factorization it is the number to be factored. For sorting, it is the count of items.
Hey, good video explaining it. I think including the factorial class would have been useful. While you don't see those outside graphs often, when making an algorithm on your own, recognizing that class and staying away from it is practical.
Error at 11:37. It should be lim[n->infinity] of f(n)/g(n), not g(n)/f(n).
I love this channel
It's showtime!
You've got strange timings for O(n^2) and O(n^3). For Python code there is a clear n^2 vs n^3 relation:
For twice the data, count_bob's time scales by about x8:
165us count_bob(10)
1.02ms count_bob(20)
6.91ms count_bob(40)
51ms count_bob(80)
For twice the data, count_alice's time scales by about x4:
18us count_alice(10)
60.4us count_alice(20)
221us count_alice(40)
848us count_alice(80)
Isn't it machine dependent?
@@indian_otaku2388 To some degree, yes, but there is a clear error in the measurements.
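For anyone who wants to reproduce measurements like the ones above, here is a rough sketch; count_alice and count_bob below are my own guesses at the video's double-loop and triple-loop solutions, not its actual code.

import timeit

def count_alice(n):
    # Presumed O(n^2) version: loop over a and b, derive c.
    total = 0
    for a in range(n + 1):
        for b in range(n + 1):
            if n - a - b >= 0:
                total += 1
    return total

def count_bob(n):
    # Presumed O(n^3) version: loop over all of a, b and c.
    total = 0
    for a in range(n + 1):
        for b in range(n + 1):
            for c in range(n + 1):
                if a + b + c == n:
                    total += 1
    return total

for n in (10, 20, 40, 80):
    t = timeit.timeit(lambda: count_bob(n), number=5) / 5
    print(f"count_bob({n}): {t:.6f} s")  # each doubling of n should take roughly 8x longer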
I am here to cry over my upcoming tests🙂
Your passing explanation of machine dependence is incorrect.
It's not about a supercomputer being faster. That wouldn't affect the plot shape.
It's that different machines can have varying instruction execution costs AND we generally speak of multitasking execution environments that can dynamically tailor performance mid-execution.
3:47 can anyone explain why he says we gotta wait hours just for 1 data point? what data point is he referring to?
What I meant there is just an overall critique of the idea of using actual time in seconds as a way to measure the efficiency of an algorithm. If you wanted to compare algorithms based on the time they take to run for various inputs, for large inputs you would literally have to wait hours to get a "data point" for comparison -- e.g. in the Bob vs Alice situation for n = 10^6 we had to wait more than half an hour to get the answer for how long it ran. There are other algorithms with even worse running times where it may take several hours to finish running. In practice, this just doesn't make much sense to do, and that's the point I was trying to make in that comment. Apologies for any confusion in that section.
@@Reducible no don't be sorry i'm just relatively new to all this, thanks a lot!!
For example, if the algorithm is O(n!), anything with n > 15 or so is going to take a long time. You don't ever want to feed an O(n!) algorithm anything as large as 60, since you would never finish.
The traveling salesman problem (think delivery truck routing) is one problem where the brute-force exact solution is O(n!), and even the best known exact algorithms are still exponential. However, rumor has it that UPS has come up with a clever workaround: draw circles on the map that include exactly 5 nodes, draw circles that include exactly 5 of those circles, then draw circles that include exactly 5 of the larger circles. These largest circles are about right for a day's route. Route each of the smallest circles, then route the middle circles. While it may not be perfectly optimal, it's good enough and it can be calculated quickly. The perfect route for 125 stops would take the rest of the life of the universe...
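A quick back-of-the-envelope check on those scale claims; the one-route-per-nanosecond checking rate below is just an assumed ballpark.

import math

routes_per_second = 1e9  # assumed rate: one candidate route checked per nanosecond
for stops in (15, 60, 125):
    seconds = math.factorial(stops) / routes_per_second
    print(f"{stops} stops: ~{seconds:.3e} seconds of brute force")
# 15 stops is about 22 minutes; 60 stops already dwarfs the age of the
# universe (~4e17 seconds), and 125 stops is astronomically beyond that.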