The simplest algorithm is of course miraclesort: Is it sorted?
Yes - done
No - wait n seconds, then check again
Diet Bogo sort i see
If you wait long enough, a cosmic ray will at least make the algorithm think it's done
@@diribigal Most models of including don't model cosmic rays as being possible.
@@gyroninjamodder I'm not sure what you mean by "of including". Did you mean "of computation"?
@@diribigal yes, models of computation
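Rendered faithfully in Python (the function name is made up), the loop simply never terminates unless the miracle - or the cosmic ray - actually arrives:

import time

def miracle_sort(a):
    while any(a[i] > a[i+1] for i in range(len(a) - 1)):
        time.sleep(1)    # wait, then check again; O(miracle) time
    return a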
Be careful not to make a sorting algorithm too fast. Years ago a friend worked for a timesharing service (consider it caveman cloud computing) and wondered why his new program took so long. He discovered that the system sort routine was stupidly slow. So he fixed it to be fast. A few days later his managers yelled, "WHAT DID YOU DO??" It turned out that his sort speedup also made many customer batch jobs run so fast that their CPU bills were suddenly WAY lower, and the timesharing company's revenues were significantly dropping. "PUT IT BACK THE OLD WAY" they said. So he did.
Management: "Efficiency? We don't want that around here!"😂😊👍
Additionally, the corollary to this is that suddenly they get 100x as many customers, and then yell at him, again, to "Speed it up, we ran out of time to process our customers! Too many of them are leaving!" 😊😂👍
The free market doing its thing again.
@@genghiskhan6688 They are stupid and greedy; they could accommodate way more customers without adding more physical resources, thus generating more revenue at a lower cost.
god i hate the specific capitalism brainworm that such managers have. It's no better than fraud.
"...for a sufficiently perverse definition of the word simple" GOLD
🤣
Came to post just that; you are top comment. Take my like!
Try sleep sort.
Start a thread for each element in the array that sleeps for that many ms and prints the number when it wakes. The output will be sorted.
Yes. Came here to say this. Sleep sort is the true simplest sorting algorithm.*
* for a sufficiently perverse definition of the word simple
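A minimal Python sketch of the sleep sort described above (the function name is made up); close-together values or scheduler jitter can still misorder the output, and negative values break it entirely:

import threading, time

def sleep_sort(values):
    result = []
    def emit(v):
        time.sleep(v * 0.01)             # sleep proportionally to the value
        result.append(v)                 # list.append is atomic under the GIL
    threads = [threading.Thread(target=emit, args=(v,)) for v in values]
    for t in threads: t.start()
    for t in threads: t.join()
    return result

print(sleep_sort([5, 9, 2, 11, 8]))      # usually [2, 5, 8, 9, 11]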
Another simple sorting algorithm, but works only for ints.
int ar1[5] = {5, 9, 2, 11, 8};
int ar2[10000] = {0};
for (int i = 0; i < 5; i++) ar2[ar1[i]]++;      /* tally each value (0..9999 only) */
for (int v = 0; v < 10000; v++)
    while (ar2[v]--) printf("%d ", v);          /* emit values in sorted order */
@@vyrsh0 bruh lmao
@@vyrsh0 You reinvented counting sort. Or radix sort, which has a log factor in its time complexity; but if the base of the log is equal to or higher than the range of the array's values, it becomes linear (at the cost of memory proportional to the value range, as in this case).
this is funny :)
This is a horrific n^2 monstrosity
It's sufficiently perverse for my tastes
At least it's better than Stooge sort
@@timanderson5717 An early version of the script featured an alignment chart of sorting algorithms that included stooge sort :)
Technically not worse than the initial algorithm. The factor changes, but it's O(n²) compared to O(n²), what's doable on one is doable on the other.
The exact number of operations is more than selection sort (or even bubble sort and linear insertion sort for that matter) but all four of them scale by O(n²).
Symmetry is always a good candidate for simplicity, even though it might make things less efficient.
"The only drawback is that it sorts in reverse order...:" and the horrendous O(n^2) runtime (as mentioned in other comments)
Indeed :) We should have mentioned that our quest is guided by beauty, not speed...
Clearly efficiency was not the point of the video. Just enjoy the ride.
Sometimes, the time spent to program an algorithm is orders of magnitude greater than the amount of time you'd ever wait on it to finish. In such cases, having something incredibly unsophisticated and easy to remember can save you time. I doubt that something as common as sorting could be such a case, but IF both implementing a well-scaling algorithm and using a library function would take too much of your time, then this is the solution for you (or bubble sort).
The simplest sorting algorithm is calling the standard library sort function.
This is what fully understanding the assignment looks like.
And what algorithm does the library function use?
Answer: Usually Quicksort or some variation of Heapsort.
Sometimes, it's better to write your own algorithm, because the set-up for using the library function can involve dozens of lines of code and helper (comparison) functions, when all you need is to compare one or two members of structs/objects within an array.
Most likely introsort or timsort, which are many algorithms merged into one.
@@DavidRTribble actual answer…
It uses whatever the team of engineers was paid billions of dollars to develop and optimise and it could change at any time. So it doesn’t matter what algorithm it uses… it works… and it is quick.
😂
std::sort is kinda efficient)
This is sometimes called "the postman's sort". If you were a postman walking a route to deliver mail, when you first pick up today's mail you would take a piece of mail and put it in your carry bag. Then the next piece you would put 'where it belongs' in relation to the first piece on your route. Then each piece you pick up from the bin, you put in the 'correct position' as you place it in your bag.
It is simple, you only have to handle each piece of mail once. BUT, you have to somehow be able to 'insert' each new piece in the correct spot. With computer memory, that often means you have to shift other items up/down in your array to make room for it. (if you're using linked lists, 'insertion' is relatively fast/easy).
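A minimal Python sketch of the array version the comment describes: each new "piece of mail" is dropped into place after shifting bigger items up to make room.

def insertion_sort(a):
    for i in range(1, len(a)):
        item = a[i]                      # the next piece of mail
        j = i
        while j > 0 and a[j-1] > item:
            a[j] = a[j-1]                # shift items up to make room
            j -= 1
        a[j] = item                      # drop it in the correct spot
    return a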
Most human sorting is fast because semantically people work with an index rather than binary comparisons. For text the index is usually the alphabet. For a postman the index is the route map. The bag can be modeled by a preallocated index map, with a logical index structure but contiguous memory allocation.
I think one usefully "perverse definition of simple" here would be to sort in the smallest number of instructions. Sorting small buffers with minimal instruction memory could be useful for some IoT applications. Consider the "state machines" on the RP2040 where you only have space to store 32 instructions. n^2 is still going to be a smallish number of microseconds for a 16 byte buffer.
But at least to my knowledge, they can only decrement and compare equality, so they can not practically be used for sorting.
I discovered this by mistake in high school many years ago (more that 20y). I realized that is a mistake in the code but after I saw that even so it sorts a vector. I saved the code like this for further research, my colleagues and my teachers didn't know why it still sorts. Now I know
It's a neat sorting algorithm in terms of its looks, but it's not efficient at all.
It could be more efficient for niche applications where the data set isn't fixed. Data elements constantly being added, removed, or changed. Online databases, etc.
The redundant instructions in this "simple" N-squared ugliness would actually be useful in this circumstance.
@@pwnmeisterage definitely not, if you want constantly sorted data with updates just use a map or a set which both allow for fast operations
I find the Theta(n^2) bubble sort the simplest.
For j from 1 to n:
    for i from 1 to n-1:
        if arr[i] > arr[i+1]:
            swap(arr[i], arr[i+1])
Famous Bubble sort you've never heard of.
This sorting algorithm is called "a student wrote bubble sort and made a common mistake, but it still works".
LMAO
In the world of algorithms, “simple” algorithms are often inefficient. Technically, the simplest would be bogosort:
1. Shuffle the list.
2. Is it sorted?
Yes → end
No → go back to 1
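For the record, a minimal Python sketch of that recipe (the function name is made up); the expected number of shuffles is on the order of n! for distinct elements:

import random

def bogosort(a):
    # step 2: is it sorted? if not, step 1: shuffle and try again
    while any(a[i] > a[i+1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a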
That has a random number generator tho, which is not very simple
Yeah, I think the simplest is something like “search through to find the lowest value, and put it at the start of another list. Repeat”
@@lucar6897 You’re describing selection sort. Which according to polylog, is more complicated than his simplesort, based on his definition of “simple.”
The word "shuffle" hides a surprisingly large amount of complexity there.
I will propose this sort as the simplest, provided you're running on actual hardware. It's even in-place, but I make no guarantees as to the average or best case time complexity:
1. Hope for bit errors resulting in the list sorting itself.
What about gulagsort? It ought to be the simplest
Selection sort is an improvement over bubble sort that also has two loops and one comparison in the inner loop. The second one is ... weird. Of course the number of operations is higher, but of the same order (n² versus (n² - n)/2). Yes, it is simple, and I can't imagine a simpler way. Interesting, because we always think about how to make algorithms faster and more efficient, but rarely look for the simplest one.
MIT algorithms course taught me to use quicksort but do a shuffle first. Reason being? An almost sorted list can reach n squared time with quick sort.
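A quick Python sketch of that advice (names invented for illustration); shuffling first makes the already-sorted O(n²) worst case vanishingly unlikely:

import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[0]    # first-element pivot: safe only because we shuffled
    less = [x for x in a[1:] if x < pivot]
    rest = [x for x in a[1:] if x >= pivot]
    return quicksort(less) + [pivot] + quicksort(rest)

def randomized_sort(a):
    random.shuffle(a)    # defeat pre-sorted (or adversarial) inputs
    return quicksort(a)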
this is the first sorting algorithm i thought of in hacker rank.
then it gave me the warning "operation was too slow. get outta here and optimize your code" 💀
Sleep sort would like to have a word with you.
The simplest sorting algorithm in terms of both coding and time complexity is obviously qbSort:
EntropyGenerator.Permute(array);
for (int i = 0; i < array.length - 1; i++)
    { if (array[i] > array[i+1]) this.universe.terminate(); }
Can you even call that 4 lines? It's more like 3
And of course, the complexity is only O(n).
I have in fact heard of this before. It is listed on Wikipedia as the I Can't Believe It Can Sort.
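For anyone reading the comments without watching the video, here is the double loop everyone is reacting to, sketched in Python in its ascending form (the function name is invented; per the comments above, the video's variant flips the comparison and sorts in reverse):

def cant_believe_it_can_sort(a):
    n = len(a)
    for i in range(n):
        for j in range(n):      # note: j runs over ALL indices, not just j > i
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
    return a

print(cant_believe_it_can_sort([5, 9, 2, 11, 8]))    # [2, 5, 8, 9, 11]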
Why start at 1 tho? Starving African computers could have eaten that wasted bit...
omg that's awful, but funny 😂
It even works in my custom scripting language I wrote
Our teacher called this “bubble sort” and asked it at our final
That's not bubble sort, tho. Bubble sort works by swapping adjacent values only.
I never expected simplest algorithm to be so complex.
I love the fact that selection sort can be made optimal by finding the min-value index in O(log n) time using a min-heap structure.
So you would still have the linear part (the outer loop), but the inner one would be replaced with a call to find the min value in a min-heap. That lookup is by itself theoretically constant, since the min value is always at the root of the tree, but it becomes logarithmic due to the heapify step that rebuilds the heap after the root is extracted.
In the end you change the time complexity from O(n^2) to O(n log n), which is indeed proved to be optimal for a comparison-based sorting algorithm.
What you get is the algorithm known as Heap Sort.
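A minimal sketch of that upgrade using Python's standard heapq module (a binary min-heap): heapify builds the heap in O(n) and each pop re-heapifies in O(log n), so the whole thing is O(n log n).

import heapq

def heap_sort(a):
    heapq.heapify(a)    # rearrange the list into a min-heap, O(n)
    return [heapq.heappop(a) for _ in range(len(a))]    # n pops, O(log n) each

print(heap_sort([5, 9, 2, 11, 8]))    # [2, 5, 8, 9, 11]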
It is proved to be optimal for algorithms that compare the values of the array being sorted. Look up counting sort, for example, which has an even better time complexity because it takes a different approach.
That's exactly the algorithm i came up with before learning anything about sorting algorithms
This is why I don't write my own algorithms.
Me: "This looks pretty, I'm going to run with it."
My CPU: *lets the magic smoke out*
I feel ya. CPUs I do well with; I only ever got the magic smoke out using a sledge (sledgehammers fix ANYTHING, btw - and if it breaks, it needed replacing anyway).
My critical weak point is memory usage.
‘200MB memory to calculate the 6 steps to craft item X in a game - why do you need 200mb in a single string when each step just combines 2 items?’
Simple solutions may hold elegance… and fall to pieces if you insist on any realism.
You're lying. CPUs don't do that
I actually wrote multiple sorting algorithms myself and many of them easily beat std::sort by an order of magnitude. The best, "magyarsort", takes only 8.5x as long to sort 100 million integers as a memcpy of them, for example - which is extremely fast if you ask me.
And my sort is already faster than std::sort from around 64 elements (and starts to be the same speed around 20). So it basically beats the library sort from C++ in every use case. It also beats ska_sort by around a 2x speedup on random data - but uses more memory. There is a variant which uses less memory and still beats ska_sort and ska_copy sort.
Wait I think we have a bug here because naturally arrays are zero indexed meaning i and j should start at '0' not '1', :)
Most programming languages used by mathematicians (e.g. Fortran, Julia, R, Matlab, Maple, Mathematica, Maxima) index from 1. If you're labelling by order, rather than counting memory offsets, then it's natural to start from 1.
SleepSort is simpler and has an O(1) complexity. Have N threads each wait for its elements value. Threads will terminate in order, resulting in a sorted list.
now prove its correctness for all x ∈ ℝ 😈
It works. The truly nasty part is that the hidden constant K is worse than N log N, but K gets simplified away in the O notation.
Arr[2] = -1
i don't get how it's simpler than i = 1 to n { j = i to n {swap state}}, it's only slower
It takes twice as many steps as the original sort, so it'll only be half as fast.
Also, it sorts counterintuitively, so it is not "simple" in that regard. I'd rather call it "perverse sort".
I heard of this as selection sort and not selectsort. One of the first sorting algorithms we encountered alongside bubble sort and insertion sort.
I've got a Big O notation sensor in my mind's computational-complexity module: whenever I see a loop inside another, what I'm actually seeing is a ridiculously slow quadratic algorithm, and I'll always pass if I can...
is this then a less powerful form of (double optimised) bubblesort which saves two conditionals but never early outs? I searched for “simple sort” but got results for selection sort which seems to be the same algorithm
Can you make a video about sleep sort? That thing is so stupid, it is kinda smart.
This is mind opening for undergrads like me
Come on man only one video? It’s was amazing we need and more
"You've never heard of"
Proceeds to show the algorithm we all thought of when we first started
I didn't. Mine was like bubble sort except with twice as many operations that did nothing. I was 9.
Of course I know simple sort, selection sort, and insertion sort (they were part of the algorithms and data structures lecture at my university back in the 90s). However, the video is nicely done and the visualization helps to understand how each respective algorithm works. There are other YouTubers who focus on the pure illustration of different sorting algorithms, but as soon as I mention channel names here, my comment gets deleted. So feel free to search for them, too 🙂
It feels a lot like Bubble Sort.
Can u explain how radix sort works
Never heard of? 1990 CS undergrad. This was a few weeks into the first semester of freshman year.
Is this shell sort or bubble sort? Two very simple algorithms for sorting large lists, either would be fairly quick on a small selection of numbers, but which of the two shown in the video would be more efficient to use, if, for instance, there were more than 100 entries to be sorted?
Insertion and selection sort are similarly fast. The general recommendation is to call the sort function from the standard library since whoever wrote it is a bigger efficiency nerd than you.
An optimized selection sort and insertion are similar, though the insertion will still be slightly faster. Bubble is much slower, and shell sort much faster. Here's the output of a program I wrote in C for fun, for speed testing multiple sorting algorithms. For reference, this was on a Ryzen 7950X, but the times remain relative across different systems. (+Opt = Optimized Algorithm, and NR = Non-Recursive):
Sort method: Size: 1024 4096 16384 65536
=========================================================================
Bubble sort: 0.001314 secs 0.020288 secs 0.327573 secs 5.379057 secs
Bubble +Opt: 0.000959 secs 0.013028 secs 0.204979 secs 4.454882 secs
Selection: 0.000683 secs 0.010469 secs 0.166774 secs 2.666797 secs
Selection +Opt: 0.000421 secs 0.006290 secs 0.098621 secs 1.562943 secs
Insertion: 0.000278 secs 0.004402 secs 0.072242 secs 1.150316 secs
Shell sort: 0.000073 secs 0.000464 secs 0.003553 secs 0.021817 secs
Quicksort NR: 0.000034 secs 0.000153 secs 0.000691 secs 0.003058 secs
C Quick sort: 0.000053 secs 0.000206 secs 0.000930 secs 0.004307 secs
Beautiful video I like it ❤️
I’d say the simplest sorting algorithm is just sort(a).
well, yes, but (at least in JS) it doesn't work with numbers since it sorts them as strings, character by character. by this rule, 100 < 8 < 95
@@the_neto06 To add on to what you said, [-2, -1, 0, -0, 2, 11].sort() evaluates to [-1, -2, 0, -0, 11, 2]
array.sort((a, b)=>(a-b))
This has the added benefit of being able to sort a list of objects by their properties.
objectArray.sort((a, b) => (a.x - b.x))
I think I used this before, but under the name bubblesort.
So bubble sort? The first sort algorithm taught usually? :)
They got us on the second half, not gonna lie.
You deserve a subscribe man
Very nice. Strange I've never seen it before.
Gnomesort is nice, it's about this size and is the only sort I can find that lets you keep track of the parity.
Urgh two loops? Soo complicated! When I need a simple sorting algorithm I always use Gnome Sort, that only needs one loop!
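For comparison, a minimal Python gnome sort (the function name is made up), which really does get by with a single loop:

def gnome_sort(a):
    i = 0
    while i < len(a):                    # the single loop
        if i == 0 or a[i-1] <= a[i]:
            i += 1                       # in order: step forward
        else:
            a[i-1], a[i] = a[i], a[i-1]  # out of order: swap and step back
            i -= 1
    return a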
I'm very curious what language is used in this video, or is it just a pseudo-language? It's using BASIC-style For loops and a ':' as a continuation or line ending, but [] brackets for array indexing, which aren't used in BASIC. The formatting also looks Pythonic, but not with those BASIC-style for loops. I even asked ChatGPT about it, but its only guesses were QBasic, FreeBASIC, and VB, all of which use standard BASIC array() brackets for indexing, which nearly broke that poor AI when I tried to explain that.
bro Bubble Sort MK2 be looking dope
Leaves me wondering how you’d rate radix sort for simplicity. It may be more complex, but for a little complexity it’s silly efficient at sorting huge sets of data, doing it without ever comparing elements.
IBM punch card sorters used radix sort.
@@tonyennis1787 I’ve never had the luxury to play with punch cards at any level… their using radix sort actually makes a ton of sense - especially if all they had was an ability to sort one card at a time into a left or right output bin. You might need to sort a stack of cards log-n base 2 times, but it’d be bloody simple compared to manually collating the cards, especially for large sets
@@Relkond if memory serves, there were 10 output bins, one for each digit.
@@tonyennis1787 I'll bet there were different models with different numbers of output trays, depending on budget.
You'd need about log_trays(10) passes per decimal digit to sort. My single exposure to card reader lore is the tale of Friar-tuck/Robinhood - malware used to demonstrate to Xerox a security vulnerability in a server they sold, which Xerox was slow to patch. The demo exploit made clear to Xerox why not patching the vulnerability was not a viable option… The demo exploit was installed (without authorization) on a Xerox-owned server.
‘The Xerox card reader had two output stackers; it could be instructed to stack into A, stack into B, or stack into A unless a card was unreadable, in which case the bad card was placed into stacker B. One of the patches installed by the ghosts added some code to the card-reader driver... after reading a card, it would flip over to the opposite stacker. As a result, card decks would divide themselves in half when they were read, leaving the operator to recollate them manually.’ - The Jargon File - Hacker Lore.
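For reference, a minimal Python sketch of the LSD radix sort under discussion, binning by one decimal digit per pass the way a 10-bin card sorter would; it assumes non-negative integers with at most `digits` decimal digits:

def radix_sort(values, digits=4):
    for d in range(digits):              # one pass per decimal digit, least significant first
        bins = [[] for _ in range(10)]   # the 10 output trays
        for v in values:
            bins[(v // 10**d) % 10].append(v)
        values = [v for b in bins for v in b]   # recollect the trays in order
    return values

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))    # [2, 24, 45, 66, 75, 90, 170, 802]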
This video was so satisfying
✨ The wobbly dobbly bubble sort ✨
Will the time complexity be the same?
Simple in terms of lines of code, but still O(n²) complexity. To get it lower, you probably need a binary tree rather than an array.
insertion sort is potentially slightly faster, considering its worst case is O(n²) and its best case is O(n) plus one swap
ima be real with u
this is what i thought bubble sort was
If you don't need stability in sorting, then I have developed a new algorithm for sorting which is pretty simple.
basically making a bad sort even worse, and it's still debatable whether it's the simplest; index sort, like, doesn't even need comparisons
yea definitely I've never heard about bubble sort haha
Dumb question (I followed an entry level gpu programming class a couple of years ago, but completely forgot even basic concepts and terminology): could it be faster on gpu than any other O(n^2) sorting algorithm because each thread operates on the same amount of data and there are no/fewer stalls this way?
The idea is that it's about the number of calculations. There are n elements in the list, and you have to iterate through each of them twice.
It's conceptually a bit above basic programming courses.
@@gn0my in a sequential computation model I agree, but in a massively parallel model such as gpu the idea is "load a bunch of data, do computations in parallel, wait everyone to finish and gather data", which implies that wasting computation may sometimes be faster because synchronization requires less time. I was wondering if this is one of those cases or not
@@aluminatestrontium7298 Can't parallelise this because the steps have to be done in order
I guess this is just bubble sort, Isn't it?
No.
Don't all arrays start from 0? Why does this one start from 1?
That's what I thought too, and I had to look it up when I saw this; evidently there are some languages that start at 1. Someone left a comment on Stack Overflow about how counting from 0 is a thing started by the C language, but languages before C tended to start at 1. Supposedly this is a list of languages that start counting at 1:
ALGOL 68
APL
AWK
CFML
COBOL
Fortran
FoxPro
Julia
Lingo
Lua
Mathematica
Matlab
PL/I
RPG
R
Ring
Sass
Smalltalk
Wolfram Language
XPath/ XQuery
@@BanjoCatfish add Maple and Maxima to that list (both languages used primarily by mathematicians).
I've made this algorithm on my own in a logic programming class. I was the only one who was able to do it.
Good representation of the algo. Add more videos pls. Tnx.
The double for-loop reminds me a bit of the Floyd-Warshall algorithm.
I remember coming across this paper sometime last year... however, while companies are making tons of money from this algorithm, von Neumann is rolling in his grave :)
I think my favorite has to be bogobogo sort. You recursively sort each sublist of size k from an input n until the k+1th element is bigger than the max of the sublist. It has to be n!^n or something
But but but, sorting a sublist doesn't change the maximum of that sublist ...
@@landsgevaer No, but keep in mind that you do it recursively. You start with a list of size 1, which is always sorted. Then you check if the next element is bigger than that of the size 1 list. If it is, then the sublist of size 2 is sorted, otherwise, you shuffle the sublist of size 2 and start over (in this case, until the 1st element is smaller than the 2nd, which has 50% odds). Then, do the same for the sublist of size 3, etc...
@@mismis3153 I found a link. Turns out it is way more convoluted.
You shuffle the original array, make a copy of the array, then bogobogosort the copy excluding the last element, next if the last element of the copy by chance is the largest, then you check if the (now sorted) copy is the same as the original, otherwise you shuffle the entire copy and start over; if the copy wasn't the same, you reshuffle the original and start over.
@@landsgevaer not exactly. You got the spirit but you need to read it more carefully
@@mismis3153 I read it more than once. Apart from my description being brief, where is the error then? I've actually reasoned that my interpretation leads to the same order of complexity, so I am curious where I misspoke or misunderstood, if anywhere ...
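A rough Python sketch of the convoluted version described in this thread: recursively bogobogosort the copy minus its last element, accept only if that last element tops the sorted prefix and the copy matches the original, reshuffling otherwise. It terminates only probabilistically and is hopeless beyond a handful of elements.

import random

def bogobogosort(a):
    while True:
        copy = list(a)
        while True:
            if len(copy) > 1:
                copy[:-1] = bogobogosort(copy[:-1])   # sort all but the last element
            if len(copy) < 2 or copy[-1] >= copy[-2]:
                break                                  # last element is the max: copy sorted
            random.shuffle(copy)                       # otherwise shuffle the copy, retry
        if copy == a:
            return copy                                # original was already in sorted order
        random.shuffle(a)                              # otherwise reshuffle original, start over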
I always use this algorithm, unless I really need something to be super fast.
A lot of people in the comments mixing particular variations of the bubble sort with particular variations of the selection sort
... also called bubble-sort, cause the elements go up like bubbles ...
What is this 1-based indexing
that is N²
it would be better if it were less than N lg N
Studied this sorting method in 12th grade. Bubble sort: good for a small range of numbers, but it will kill the CPU time if pushed to a large range....
This is not bubble sort.
This is hardly an algorithm that no programmer has ever heard of. I learned this in my second year of college in 1972. I would think that by now every student with a degree in computer science has also learned it; otherwise our education system is going backwards in time. My copy of "The Art of Computer Programming, Vol. 3: Sorting and Searching" by Donald E. Knuth (1973) is not handy, but I am pretty sure that this algorithm is well covered. Anyone with any interest in sorting data should have read this book by now. For a sufficiently large table in a virtual storage environment this algorithm would perform poorly, but it certainly is simple and it works. Programmers do not get paid big bucks to write cute code that is simple. I wish you would at least mention performance. You could also mention that duplicate rows may not work as expected. It is now 49 years since Knuth wrote his excellent series of books about programming. I did not see that you have added any value to his work, other than that your animated graphics are excellent. We could not do that in 1973; what few dumb terminals we had were not in color.
Same complexity O(n^2) as InsertSort and SelectSort.
I smell something sort of BUBBLY
i literally thought of this myself too lol
# my choice
from itertools import permutations

def perm_sort(a):
    for p in permutations(a):
        if all(p[i] <= p[i+1] for i in range(len(p) - 1)):
            return list(p)
"Less is worse." 😂
huh. I thought that was called shell sort.
This is an awesome video, I love it
can you make more videos about algorithms explained in the simplest ways??? please
i really like your style of explanation
Simplest?
The _broken_ quicksort, done on linked lists
quicksort : Ord a => List a -> List a
quicksort [] = []
quicksort (x :: xs) with (partition (< x) xs)
  quicksort (x :: xs) | (bef, aft) = (quicksort bef) ++ [x] ++ (quicksort aft)
so wannabe XD
I typed it in and my FORTRAN compiler just says “Error: What is this shite?”
@@despoticmusic xD
using swap() is a bit like using .sort()
omw to destroy my IT exam 😎
It fails to sort the 0th element.
wow, you should call it "bubble sort"
Too many compares and too many swaps… simple… but inefficient.
I once programmed nearly identical code, in Algol.
Quite perverse indeed
Yeeeaa.... This is slow. Try Cartesian Tree Sort. It is WAY SIMPLER AND FASTER. O(n) to O(n log n). 50 lines of python code.
Don't do this in a job interview.
aka BUBBLE SORT
Next do exclusion sort!
doesn't a sort function already exist? why do something so advanced.