Guys, if you liked this video & want many more such tech educational videos on this channel then please support me by subscribing to this channel & also share it with your friends too ✌
Please make a tutorial on Visual Basic.
I guess I'm randomly asking, but does anybody know a tool to get back into an Instagram account?
I stupidly lost my password. I would love any help you can offer.
How can we imagine the value of g(n) to be n, n^2, and so on?
Never stop making videos. This legit prepared me for my exam 100 times better than my professor did. I got an A on the exam because of you. Thank you so much!
That's amazing to know, Jacob ✌️ Super happy for you and your amazing results too. It would be a great help if you could share our channel & videos with your friends too 😊
Yes, that's true, I can't understand anything from my professor either. This guy is pure gold; I learned almost everything from him. Never stop uploading, man, you are gifted! Thank you for everything.
@@georgikarastoychev1241 How can I become a software engineer after 12th commerce?
It's been 3 years and this saved my life.
I have not even watched the video yet and I already know this is the best video I have ever seen. I legitimately screamed with joy when I realized this was a Simple Snippets video.
the most thoroughly and easily explained tutorial I have ever seen. Thank you a bunch!
7mins into the video, I understood Big Oh better. Well played.
You are amazing!
Straight to the point!
Nice editing!
I truly appreciate it :)
Thanks buddy 🤟 glad you liked it 😊
And you look cute :p
Thank you very much. After trying hard, with no success, to grasp what my lecturer explained, your explanation finally got through to me. Keep up the good work!
I'm glad I watched this after several videos. Thank you so much
You are 3x better at explaining this than my college professor at ASU. You should be making the absurd tuition money she does
Hehehe, what's the full form of ASU? Which institute is this?
I wish I made that kind of money, but surely in time I will earn a lot too. Right now my only goal is to provide high-quality education to everyone 😇
@@SimpleSnippets Most likely Arizona State University, I feel the same way
I am also a student of discrete mathematics at ASU who is finally getting a clear explanation. Thank You!
👍
That's great to know Fredrick 😊
Thank you, teacher. We stand with you.
Watching from Kenya, Africa. I'm already a teacher now because of this tutorial.
your explanation is the best!!! Thank you a lot!
Thank you so much for this! Honestly saving my exams by explaining it so clearly I finally understand :')
Thank you for the video, I finally understand the concept because of you, thank you again !
Hey bro, you saved my master's course. Your explanation is awesome. God bless you; regards from Mexico!
Great to hear!
Thank you so much . I really appreciate your works
Glad you like them!
Thank you so much for this video!! Thanks to you, in 30 minutes I understood perfectly what my professor didn't explain properly in 10 hours :))
Glad it helped!
This is a life saver man, thank you!!
Well explained
Thanks, this will surely help me out in my midterm
I love it - my professor should learn from you
Thank you, man. Really helped me.
Quite exemplary and to the point.
Thanks for your work.
Thank you, excellent explanation
Thanks bro, really, you are a legend.
best video ever found ❤
This covers theory quite well unlike other videos
In Big O notation, what is the constant c? You took c as 5 in the example, so how do we pick it and what is its role? I'm not understanding 😶
Excellent explanation.
Very good lecture
Your explanation is excelent!
Hi, just a little suggestion: it's better to say f(n) is O(g(n)), or f(n) belongs to O(g(n)), instead of saying f(n) = O(g(n)).
Have you ever heard of the dialect of English that comes from India? Indian English, bro 😁 I am serious.
As always amazing video and very nice explanation. Thank you so much!
Glad you liked it!
Thanks for this video bro...
Most welcome, Shreyas. Please do share the videos & our channel with your friends too. That's the biggest help and support you can give back to this channel! 😇
Best explanation... you are amazing 😃
Thank you Vaishnavi 😊
Carry on, bro... your explanation was awesome.
I'm even using the tutorial to prepare for an exam this morning, and it's so helpful.
I have a doubt. If we need to find the closest fit to the best-case time like you said, then shouldn't Big-Omega(n) have the constant 2 instead of 1?
2n < 2n+3 always.
So 2n is a closer fit than 1n. Please tell me if I'm wrong, with a reason.
With Big Omega you don't actually need to write which constant you use; whether it's 2n or 2000n, it's still just Ω(n). You just have to find any constant c such that f(n) stays at or above c*g(n) after n0, and you're set.
@@sangodan3031 You're right mate. Thanks.
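To make the point above concrete, here is a quick numeric sanity check (a Python sketch of my own; `is_omega_witness` is just an illustrative helper, not something from the video):

```python
# For Omega(n), any constant c works as long as c*n <= 2n + 3 holds
# from some n0 onward; choosing c = 1 vs c = 2 changes nothing.
def is_omega_witness(c, n0, f, g, n_max=10_000):
    """Finite sanity check that c*g(n) <= f(n) for all n in [n0, n_max)."""
    return all(c * g(n) <= f(n) for n in range(n0, n_max))

f = lambda n: 2 * n + 3
g = lambda n: n

print(is_omega_witness(1, 1, f, g))  # True
print(is_omega_witness(2, 1, f, g))  # True: 2n <= 2n + 3 always
print(is_omega_witness(3, 1, f, g))  # False: 3n > 2n + 3 once n > 3
```

Both c = 1 and c = 2 are valid witnesses, which is exactly why the written constant doesn't matter.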
superb explanation
You are amazing. Thank you soooo much.
At 6:10, why did we consider c = 5, but when n was squared at 11:20 we considered c = 1?
Excellent video...
I was waiting for this type of lecture 🤩🤩🤩🤩
I like your DS lectures. Now I will complete your DS course, thank you. I am a final-year student.
Glad you liked it! Please support me by sharing the videos and our channel with your friends too. That's the biggest help and support you can provide 😇
@@SimpleSnippets Do you upload your videos on EdYoda learning?
@@amrutachavan7686 Yes, I have uploaded some videos on the EdYoda platform 😊
I don't understand where these constant values are taken from. How do you determine if c should be 1 or 2 or whatever? Do you just pick a random number, or is there some logic behind it?
You just drop the constants because what matters is the type of growth happening, not how many times it happens. So O(2n), O(3n), O(4n), etc. can have their constants dropped to O(n), because they're all the same type of growth regardless (linear).
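To see where a particular constant like c = 5 comes from, here's a small check of my own (the helper name is mine; the f(n) = 2n+3 and c = 5 values are the ones used in the video):

```python
# f(n) = 2n + 3 is O(n): we need SOME c and n0 with f(n) <= c*n for n >= n0.
# Many (c, n0) pairs work; c = 5 with n0 = 1 is just one convenient pick.
f = lambda n: 2 * n + 3

def witnesses_big_o(c, n0, n_max=10_000):
    """Finite check that f(n) <= c*n for all n in [n0, n_max)."""
    return all(f(n) <= c * n for n in range(n0, n_max))

print(witnesses_big_o(5, 1))  # True: 2n + 3 <= 5n for every n >= 1
print(witnesses_big_o(3, 3))  # True: 2n + 3 <= 3n once n >= 3
print(witnesses_big_o(2, 1))  # False: 2n + 3 is never <= 2n
```

So the constant is not random: any c that makes the inequality hold from some n0 onward is valid, and different choices of c just shift n0.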
Thanks for the explanation, nice video!
literally youtube king
It was amazing. Thank you.
Thank you very much. You are a hero!
Incredibly helpful video ~ thank you
Brother, where can I solve DSA questions? Because GeeksforGeeks and InterviewBit have only solutions, but no video explanations.
Nice explanation, but cases (best, worst and average) and asymptotic notations are two independent concepts; for example, the best case of linear search can also be written as O(1).
Nice explanation Sir
Thanks a lot for such an amazing explanation :)
You rock! Thank you for sharing your knowledge
Best video on notations 💪
Thank you! Note: small error on the Big Theta slide, its description says "Big Omega".
Is there a reason why you chose 2n+3 for f(n)?
6:10 In this example, what if we consider c = 2 and n = 2? Do we need to disregard the term without an n coefficient for it to work?
THANK YOU VERY MUCH SIR
Finally understood it. Thank you so much.
Best explanation ever ❤️❤️❤️
You are a life saver bro
Sir, is f(n) a different algorithm for the same problem, since you took different equations for f(n) and g(n)? Is f(n) like a reference that we compare with g(n) to find the best, worst and average cases?
Don't get confused. To clarify: if f(n) = n^3+n^2+1, then g(n) is some derived portion of f(n) which dominates your algorithm's growth. Therefore, here g(n) can be n^3, i.e. g(n) = n^3, or g(n) = n^3+n, or g(n) = n^3+5, etc. Both f(n) and g(n) belong to the same algorithm.
@@Kucchuu I had the same problem, but I can't understand where the g(n) comes from. Can you explain? You say "derived portion"; what is a derived portion?
when f(n) = 2n + 3
Big Omega is Ω(n)
Big Theta is θ(n)
But for the linear search algorithm, f(n) would also look like f(n) = a*n + b, where a and b are some constants.
Then why is Big Omega Ω(1) in this case?
I have a doubt:
can we get various pairs of c and n0 which satisfy f(n) = O(g(n))? i.e. for f(n)
Within the first 100 seconds this video explained Big-O better than my $200 textbook and my professor… combined.
Haha, thank you for this feedback. It would be great if you could transfer those 200 dollars to me 🤣
Just kidding. Don't need donations. I'm happy that this video helped you 😊
Thank you very much.
So when I saw the example I paused the video and got
Big O=n
Big Ω=1
Big θ = (n+1)/2 (because the average of n and 1 is (n+1)/2)
I get why we get rid of the /2 for big θ, as it becomes negligible, so could the same be said of the +1?
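That intuition checks out numerically. A quick sketch of my own (not from the video) that averages the comparison count of a linear search over every possible key position:

```python
# Average-case linear search: if the key is equally likely to sit at each
# of the n positions, the mean comparison count is (1 + 2 + ... + n)/n = (n+1)/2.
def comparisons(arr, key):
    """Number of comparisons a linear search makes to find `key`."""
    for count, x in enumerate(arr, start=1):
        if x == key:
            return count
    return len(arr)

n = 99
arr = list(range(n))
avg = sum(comparisons(arr, k) for k in arr) / n
print(avg)          # 50.0
print((n + 1) / 2)  # 50.0: matches exactly
```

And yes, just like the /2, the +1 is dropped asymptotically, which is how (n+1)/2 collapses to Θ(n).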
Why is there a curve in the f(n) line?
I don't understand one thing in this equation, 2n+3.
OMG, the same doubt flashed into my mind as soon as he explained it. Can someone please explain this?
@@ganashree8342 Yeah, I understand Big O and the others; for an easy f(n) it is easy, but the problem for me comes when f(n) is more complex. I have a lot of issues finding c1, c2, and n0.
Fantastic... please upload more videos for clearing up concepts.
Thanks a lot 🙏 Sir. Can you show some questions on this topic?
Do you have implementations for these concepts? Thank you for your help. It is very clear and simple, and way better than my university's teaching.
Why do we take the g(n) closest to f(n) as the best case and the worst case? Shouldn't it be the farthest one, so that for the best case, taking Omega(1) would be the fastest, taking less time compared to Omega(n)?
Excellent job, man!
So does Big Theta always match Big O?
But does that mean all notations just need testing through inputs in order to fulfill their conditions?
Hello, for Big Omega:
why don't we choose g(n) as 1? Since 1 is always less than any f(n), the time complexity would be Omega(1) for any program.
Maybe because it is not closest to f(n). If a linear g(n) = n is closer to f(n), then we choose it instead of 1.
I could be wrong, but I think that's the answer.
really good explanation!
sir
Glad you liked it! Please support me by sharing the videos and our channel with your friends too. That's the biggest help and support you can provide 😇
Thanks, man, for making such awesome content.
How do you determine c in any algorithm?
Decent tutorial! Thank you
Glad it was helpful!
Keep up with the good work, thanks.
Very well done
Thank you very much!
Excellent video
Thank you very much!
thanks a lot, you are the best 😍
For linear search, can't you also have Omega(n)? When you do the algorithm analysis you will end up with some linear equation such as f(n) = 3n + 2; we can show it's Omega(n) by doing
3n+2 >= c*n; take c = 1 and n0 = 1, and it works. The example you gave was theoretically correct; the best case should be Omega(1). But mathematically, following the definitions, is it also true to say it's Omega(n)? That makes the bound tighter as well.
He said that there can be many best as well as worst cases. Though there can be many upper and lower bounds, we have to take the closest one; that's why it is Omega(1). Hope you understood.
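A minimal instrumented linear search (my own sketch, not code from the video) makes both bounds concrete:

```python
# Counting comparisons shows the Omega(1) / O(n) bounds of linear search:
# a key at index 0 costs 1 comparison; a key at the end costs n.
def linear_search(arr, key):
    comparisons = 0
    for i, x in enumerate(arr):
        comparisons += 1
        if x == key:
            return i, comparisons
    return -1, comparisons

arr = list(range(100))
print(linear_search(arr, 0))   # (0, 1): best case, constant work
print(linear_search(arr, 99))  # (99, 100): worst case, n comparisons
```

The lower bound has to hold for every input, which is why the tightest valid lower bound across all cases is Omega(1), even though the worst-case run alone is linear.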
Why are the videos not available in 240p?
keep up the great work!!
Thanks so much man
Hi bro, at 20:22, in the first line of the definition, I think it should say "Big Theta notation".
Thanks for this video, and thanks even more for this playlist, dude... :)
I can tell this is a good teaching video. I just wish the accent were a little less heavy for those of us who are not Indian. It's very difficult to understand, and it's frustrating because I know how intelligent Indians are but can't seem to find videos where they don't speak with such heavy accents. Oh, and I'm also Asian, btw, before anyone wants to pull the race card.
Hi Simple, great explanation. Would you kindly provide an example, say from ML algorithms, where it is better to use Big Theta relative to Big O and Big Omega? Thanks.
Could anyone help me with this one? I understand Big O and the others, but my problem is finding c1, c2, and n0 for complex functions. Example: n^3/1000 - 100n^2 - 100n + 3. I need to express that one in terms of Theta notation.
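One way to attack such functions numerically (a sketch with my own choice of constants; other (c1, c2, n0) triples are just as valid): keep the leading term's coefficient for the upper bound, halve it for the lower bound, and search for the n0 where the lower bound starts to hold.

```python
# Claim: f(n) = n^3/1000 - 100n^2 - 100n + 3 is Theta(n^3).
# Try c2 = 1/1000 (upper bound) and c1 = 1/2000 (lower bound), then hunt for n0.
def f(n):
    return n**3 / 1000 - 100 * n**2 - 100 * n + 3

c1, c2 = 1 / 2000, 1 / 1000

# Upper bound holds for all n >= 1, since -100n^2 - 100n + 3 <= 0 there.
assert all(f(n) <= c2 * n**3 for n in range(1, 10_000))

# Lower bound: find the first n >= 1 where c1*n^3 <= f(n) starts holding.
n0 = next(n for n in range(1, 10**6) if c1 * n**3 <= f(n))
print(n0)  # 200001: past this point the n^3/1000 term dominates the rest
```

Beyond n0 the inequality keeps holding because the difference f(n) - c1*n^3 is an increasing cubic there, so (c1, c2, n0) = (1/2000, 1/1000, 200001) is one valid witness triple for Θ(n^3).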
Thanks for the help, brother.
Equating O(.) notation with worst-case, Ω(.) notation with best-case and Θ(.) with average-case is incorrect. The O/Ω/Θ notations and "caseness" (worst/best/average) are independent concepts.
It's a common misconception and I see nobody has pointed it out yet in the comments so I will explain why it's wrong.
Let's start by saying that your mathematical definitions of the O/Ω/Θ notations are generally correct.
I would maybe only highlight that these notations are not exclusive to computer science or algorithms; they simply describe the upper/lower/tight asymptotic bounds on the growth rate of a given function.
Ok, so the first minor inaccuracy: when at 13:51 you found out that f(n) is O(n) and f(n) is O(n^2), you said that "when we try to find the O(.) notation we have to find the closest one which matches the f(n)". Well, no, we don't have to. We have shown that both O(n) and O(n^2) satisfy the mathematical definition, and thus both are true. The reason we prefer O(n) to O(n^2) is just that it gives us more information (it's a tighter bound).
Now the big problem. At 24:10 you decided to analyse the time complexity of the linear search algorithm.
So now, it's true that it's Ω(1) and it's O(n); however, it's NOT Θ(n). There is actually no function g(n) such that the overall time complexity is Θ(g(n)).
That is because Ω(1) is indeed the tightest lower bound (for example, it's not Ω(log(n))) and O(n) is the tightest upper bound (for example, it's not O(log(n))). So you can see there is no single g(n) which satisfies the condition c1*g(n) <= f(n) <= c2*g(n) across all inputs. We can, however, use Θ(.) for each case separately. In the best case we have: Ω(1) and O(1) => Θ(1)
In the average case we have:
Ω(n) and O(n) => Θ(n) // here we could say it's n/2, but we omit the constants
So it's worst case Θ(n), best case Θ(1) and average case Θ(n). See that I used Θ(.) notation for each worst/best/average case. The benefit of using Θ(.) for all cases is that it shows the tight bound. That is, for example, when we say it's worst case Θ(n), it means that it is not worst case Θ(1) and it is not worst case Θ(n^2).
When we use O(.) notation to describe the worst case, we can indeed say that it's O(n), but it's also true that it's O(n^2).
So using Θ(.) gives us more information (it "forces" us to give the tight bound).
This means that we should generally use Θ(.) notation, as it gives us the most information. The problem, however, is that if we want to look at the general case of the algorithm, the Θ(.) simply might not exist. So in that circumstance the best we can do is say that in the general case this algorithm is O(n) and Ω(1).
The only algorithms for which we can describe the general-case complexity using Θ(.) notation are the ones for which the worst case Θ(.) is the same as the best case Θ(.). For example, the problem of finding the minimum value in an n-element array is worst case Θ(n), best case Θ(n) and average case Θ(n). So we can say that this algorithm has (in general) Θ(n) time complexity.
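The minimum-finding example can be checked empirically; a small sketch of my own confirming the comparison count is identical for every input of the same size:

```python
# Finding the minimum of n elements always costs exactly n - 1 comparisons,
# regardless of input order, so best = worst = average, and Theta(n) exists.
def find_min(arr):
    comparisons = 0
    best = arr[0]
    for x in arr[1:]:
        comparisons += 1
        if x < best:
            best = x
    return best, comparisons

for arr in ([1, 2, 3, 4], [4, 3, 2, 1], [2, 4, 1, 3]):
    print(find_min(arr))  # (1, 3) every time: 4 elements, 3 comparisons
```

Contrast this with linear search, where the comparison count varies between 1 and n depending on the input, which is exactly why no single Θ(.) exists for its general case.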
Thanks, it helped a lot👍
great video!!!
You are great!
Great content. Easy to follow and to the point. Wonderful!