Errata:
0:00 filter(x) isn't valid; to filter out falsy values, use filter(None, x)
2:33 & 17:22: the example call in multi_accumulate's docstrings yields an additional (1, 1) at the beginning
4:40: the min and max should be arguments to itertools.accumulate, not list
This first error might be the reason for filterfalse. But overall I find it nice syntax sugar; it might look better than ! or not. Will there be a follow-up with more_itertools?
I use the combinatorial ones a lot, often for testing things. I use all four of them.
I have 5 versions of networking device firmware, and I want to make sure they're all compatible with each other, so I iterate over all the pairs of combinations_with_replacement.
I have 10 devices networked together, and I want to test throughput for all possible paths. That's combinations, or permutations if I want to test both directions.
I have 5 versions of firmware and 3 models of devices, and I want to make sure all firmwares work on all devices, so I iterate over the product of firmwares and models
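The three scenarios above map directly onto the combinatoric functions. A sketch with made-up firmware and device names (all identifiers hypothetical):

```python
from itertools import combinations, combinations_with_replacement, permutations, product

firmwares = ["v1", "v2", "v3", "v4", "v5"]
devices = [f"dev{i}" for i in range(1, 11)]
models = ["modelA", "modelB", "modelC"]

# Compatibility: every unordered pair of firmware versions,
# including each version paired with itself.
compat_pairs = list(combinations_with_replacement(firmwares, 2))  # 15 pairs

# Throughput: every unordered device pair (one direction)...
paths_one_way = list(combinations(devices, 2))    # C(10, 2) = 45 paths
# ...or every ordered pair if direction matters.
paths_both_ways = list(permutations(devices, 2))  # 10 * 9 = 90 paths

# Firmware x model test matrix.
matrix = list(product(firmwares, models))         # 5 * 3 = 15 combos
```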
That's an awesome example use case. Thanks for sharing.
Yeah. Tests is a good example.
The combinatorics functions are pretty damn useful IMO. Sure, it might "just" be for maths stuff, but the range of computational maths problems that involves them is vast! It's essentially the answer to the question "what are all the ways to put these inputs together?" which is super generic.
at 4:41 I'm assuming "min" and "max" are passed to "accumulate" and not the "list" constructor
Yep. I noticed that too
"iterable" doesn't sound like a word anymore
"irrbl"
It's called semantic satiation. Pretty interesting concept
@@kilianvounckx9904 TIL
I don’t even need to watch the video to know this will be accurate 😂😂😂
Combinations with replacement are very useful for implementing the statistical bootstrap.
I was part of a project that did analysis on proposed firewall rules. Since the rules could be subnet-to-subnet we used the combinatoric functions to ensure we analyzed every possible unique combination of source and destination addresses in the proposed rules.
I use batched all the time at work, for sysadmin type stuff, or API queries that slow down when you give too many search terms.
I’ve used the combinatoric functions to solve programming challenges, e.g. Advent of Code
I've found those functions useful for simple grid searches, where you test combinations of hyperparameters to tune ML models.
Same here.
I've used permutations and combinations in genomics. Probably pretty useful for big data in general.
“Try not to get caught up in showing off just how well you know the itertools library”
I feel attacked
I'm a founding member of Itertools Anonymous. There's help available.
I used zip_longest recently. I needed to vertically display two lists side by side in a GUI, and there was actually very little chance they'd be the same length. zip_longest with fillvalue="", then '\n'.join.
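That pattern, sketched with two hypothetical lists:

```python
from itertools import zip_longest

left = ["apple", "banana", "cherry", "date"]
right = ["red", "yellow"]

# Pad the shorter column with empty strings so every row has both fields.
rows = [f"{a:<10} {b}" for a, b in zip_longest(left, right, fillvalue="")]
print("\n".join(rows))
```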
I think there's a bug in your multi_accumulate example. You'd want the first argument of itertools.accumulate to be iterator, not iterable, right? For non-iterator iterables, like range or sequences, you will end up counting the first element twice. Not an issue for min or max, but you'll definitely see an issue if you use the running sum example. iterator will have that initial value removed, though, so using that instead should solve the problem.
Indeed, this is also the case for the most commonly used Python type, list. Changing "iterable" on line 105 to "iterator" as declared on line 99 would ensure only the remaining elements are included in the accumulate call (instead of the whole iterable from index 0). Anyway, thanks mCoding! This is an insightful video!
Tip: call iter() on an iterable to ensure you get an iterator. The iterator protocol says you get the same iterator if you call iter() on one, so it won't make redundant layers.
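Putting both fixes together: iter() to normalize the input, then accumulate over the already-advanced iterator. A minimal sketch (running_sum is a hypothetical stand-in for the video's example):

```python
from itertools import accumulate

def running_sum(iterable):
    # iter() on an iterator returns the same object, so this never
    # wraps an already-good input in a redundant layer.
    iterator = iter(iterable)
    initial = next(iterator)  # raises StopIteration on empty input
    # Pass the advanced iterator, not the original iterable, or the
    # first element of a list/range would be counted twice.
    return accumulate(iterator, initial=initial)
```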
I disagree "slightly" with your comment about using a for-loop instead of chaining the utility functions.
First, always test the performance of the algorithm which you are conceiving. In general, the built-in utilities (especially the ones implemented in C) almost always perform better than something written in Python. But, performance in Python is not always intuitive - especially when chaining things together where boxing / un-boxing occurs. Test!
I definitely agree with your statement about "showing off". I usually frame my comment about this as a maintainability problem. Think about the poor soul who will have to troubleshoot / enhance that part of the code 3 years from now. And if I wrote the code, inevitably that poor soul will be me.
My rule of thumb is ... if you have to write more lines of comments to explain what is going on than the actual lines of code themselves - something is wrong.
Thanks for another great video!
filterfalse makes sense when you have a function defined elsewhere that you are using (e.g. filter(my_func, it)). To reverse the conditional would require writing a new function or adding a lambda (e.g. filterfalse(lambda x: not my_func(x), it)), which is ugly.
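For example, with a predicate defined elsewhere that you can't (or don't want to) edit:

```python
from itertools import filterfalse

def is_valid(record):
    # Stand-in for a predicate from a library you can't modify.
    return bool(record)

records = ["a", "", "b", "", "c"]

kept = list(filter(is_valid, records))
rejected = list(filterfalse(is_valid, records))
# Equivalent, but noisier:
rejected_lambda = list(filter(lambda r: not is_valid(r), records))

assert rejected == rejected_lambda == ["", ""]
```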
the lambda isn't ugly
@@syrupthesaiyanturtle it's an extra function call and requires giving a name to the temporary arguments.
Python's functools doesn't have a nice compose function, otherwise we'd have:
filter(compose(operator.not_, my_func), iterable)
But none of that matters because a good portion of the Python itertools library relies on filterfalse being defined, so why not make that function public if they had to implement it anyways.
@@SalamanderDancer why not just add a parameter to the filter function instead of creating a new one entirely?
@@syrupthesaiyanturtle a new parameter, presumably something called "presume_false", would be uglier
Defining filterfalse is also ugly so that's not really a great argument
I was hoping you would do itertools. Please do functools too!
You asked for good use cases of combinatoric iterators: as a dancer, I use them in a simple script that organizes my dance moves collections - it helps coming up with combos to practice
Actually a legit use case! Thanks for sharing.
14:34 I've used it to print the contents of a list inside a tkinter table: I zip_longest the list and the rows already present in the table; if the row item is None (not enough rows), I add a new row; if the list item is None (too many rows), I delete the row; finally, if both are present, I update the row with the list item. I've found that this is faster than always deleting all the rows and re-adding them: it's better to update the existing ones.
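A sketch of that update/append/delete logic, with a plain list standing in for the tkinter rows and a private sentinel instead of None (so None stays usable as real data); `sync` is a hypothetical name:

```python
from itertools import zip_longest

_MISSING = object()  # sentinel that can't collide with real row data

def sync(table, items):
    """Make `table` (a plain list standing in for GUI rows) match
    `items`, reusing existing rows where possible."""
    for i, (row, item) in enumerate(zip_longest(table, items, fillvalue=_MISSING)):
        if row is _MISSING:
            table.append(item)   # not enough rows: add one
        elif item is _MISSING:
            del table[i:]        # too many rows: drop the surplus
            break
        else:
            table[i] = item      # both present: update in place
```

Appending during iteration is safe here because a list iterator, once exhausted, stays exhausted even if the list grows.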
Combinatorial once used here in numerical simulation when sum/product sequences over sets occur. My data is large enough so I use combinatorial generators that create the sets.
18:40 last_element seems useful. Honestly, I wish there were built-in "first" and "last" functions: there are so many times I'm using Jupyter or a console and I just want to check the structure of a random iterable (returned list, dict keys, etc.), and a "next(iter())" always feels clunky and unintuitive.
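The more-itertools package does ship first() and last(); hand-rolled sketches look roughly like:

```python
from collections import deque

_SENTINEL = object()

def first(iterable, default=_SENTINEL):
    """Return the first element, or `default` if the iterable is empty."""
    value = next(iter(iterable), default)
    if value is _SENTINEL:
        raise ValueError("first() of empty iterable")
    return value

def last(iterable, default=_SENTINEL):
    """Return the last element; note this consumes the whole iterable."""
    tail = deque(iterable, maxlen=1)  # keeps only the final element
    if tail:
        return tail[0]
    if default is _SENTINEL:
        raise ValueError("last() of empty iterable")
    return default
```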
I was like: pff, itertools. I know those, what new can I learn.
But that last example... very clever :-) yet still just applying the basics. Nice.
Awesome video as usual. Just one question. At 18:21 wouldn't you have to skip one from the iterable when passing it to accumulate? If so, not doing so will result in functions like sum adding the same element twice.
filterfalse is necessary b/c sometimes your predicate is "bool", and lambda x: not x is blech.
You can use operator.not instead.
@@andreismolensky1507 not.
operator.not_
thanks for the pairwise tip!
Great vid. Is functools next?
The combinatoric ones are useful for Sudoku variant helper tools
Another advantage of product is that it can be a step in an iterator stream, whereas nested for loops can't be.
Product is most useful where you don’t know *how many* for loops you’re going to use. For example, if I want all strings of length N containing "A" and "B", I can write product("AB", repeat=N).
The combinatorics ones - yes, I’d imagine they’re most useful for problem solving applications including physics and maths.
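The length-N strings example, concretely for N = 3:

```python
from itertools import product

# All length-3 strings over the alphabet "AB": 2**3 = 8 of them,
# without hand-writing three nested for loops.
strings = ["".join(chars) for chars in product("AB", repeat=3)]
assert strings == ["AAA", "AAB", "ABA", "ABB", "BAA", "BAB", "BBA", "BBB"]
```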
I think that's a great answer!
I've created a typed_stream library to use the most important lazy functions from itertools as methods on a Stream class. That's far more readable imho than the functions
Link
@@aflous it's on PyPI and on GitHub on my account Joshix-1, with the same name (YT often deletes links)
It's typed_stream on pypi
When your C++ background doesn't let you spell "in" without a "t" 0:04
Could you give an example of compress vs filter where one of them offers a clear benefit (readability, performance, memory usage, anything)? They seem so close they're almost interchangeable (are they?)
My rule of thumb for filter and map is use filter and map, unless I want to define my own function. I always find comprehensions nicer than lambdas. For example, map(str, [1, 2, 3]) to me is nicer and clearer than (str(a) for a in [1, 2, 3]), but (a*2 for a in [1, 2, 3]) is nicer than map(lambda a: a*2, [1, 2, 3]). Same for filter: filter(str.isupper, ['a', 'B', 'c']) is nicer than (a.isupper() for a in ['a', 'B', 'c']), but (a for a in [1, 2, 3] if a == 2) is nicer than filter(lambda a: a == 2, [1, 2, 3]).
I prefer a combination of the two dot product functions. The starmap and operator are what make the first version clunky, but the rest is OK, and with a generator comprehension it can be made really nice:
sum(x * y for x, y in zip(u, v, strict=True))
5:05 There's now batched and 11:24 pairwise in the standard itertools? That's fantastic, I had to write these so many times🎉
14:40 I feel I'm using zip_longest less often than zip, but still regularly
Ever heard of more_itertools? That's where batched originally comes from; there's more great stuff inside 👏
I think the reason they have functions like filterfalse is because they want to provide a way to negate a filter function without wrapping it in a lambda in cases where you already have a function to use for filtering that you can't edit. Things like standard functions or something from a library would fit. It is annoying that it's named weirdly, though. I think something like filternot would make just as much sense and be two letters easier to type.
I cloned the repository for Python and was playing with 3.11, but still haven't installed it. Meanwhile, my distro package is only at 3.9 and I'm leery of breaking things by upgrading it. I may end up dual installing it at some point so I can continually upgrade without worrying about breaking the system packages, but I don't use newer features like that for anything more than tests. Oh well, at least some of these will be inspiration for features I add to my own language and it makes me feel better about not having released it yet.
Looking at your thumbnail, it seems that just putting return x[0]*y[0] + x[1]*y[1] is a lot more readable + shorter, probably even faster because it doesn't have to look up all those functions you use
starmap & zip use in the example in the title is redundant: map can take multiple iterables.
So it could be sum(map(mul, x, y))
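Indeed, map zips its iterables implicitly:

```python
from operator import mul

x = [1, 2, 3]
y = [4, 5, 6]

# map with two iterables pairs elements up itself, so no zip or starmap
# is needed. Caveat: unlike zip(..., strict=True), map silently stops
# at the shorter input.
assert sum(map(mul, x, y)) == 32
```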
is itertools the greatest pypackage?
Combs + perms can be useful for test data
Combinatorics are guaranteed to be useful at least once a year - Advent of Code.
I came here to say this. 😊
Nice! But I want MORE! I SAID MORE!!! As in more-itertools, of course.☺
If you could make this comment recursive it'd be really cool.
The itertools recipes are available in the more-itertools package. Rather than copy-pasting the recipes, I just pip install more-itertools and I'm done.
I don't understand why the docs give the recipes for more-itertools and even link its docs, but then don't just include them in the standard itertools library. It's so silly!
I always find it hard to see how some more niche recipes/functions could be used without knowing the problem first.
Regarding zip_longest: it feels like I might want to know if something happened, but not always care what happened.
hi, newbie programmer here. Why do you keep using 'assert' in your functions? What's the point of this keyword in this context?
For any benchmark testing, permutations is the best thing available.
But saying "I use it a lot" when i literally wrote a single script doesn't feel right. Even if I use it almost daily
A single script that you use every day sounds like an awesome script!
the 'better use case' you seem to miss is 'security' ... aka: encryption.
Could you elaborate? I'm not familiar with any encryption schemes that iterate over permutations or combinations.
been waiting on this one
12:21 there is no better place for a lambda than that. add = lambda pair: pair[0] + pair[1]
On the contrary, giving a name to a lambda is considered by some (linters) to be a class 3 felony.
@@mCoding okay okay okay, then just rewriting the add function would be enough
I use tee to duplicate a generator.
Useful for counting how many lines my Cursor SQLite returned without consuming it
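A sketch of that counting trick (count_without_consuming is a hypothetical name). Note that tee buffers everything the counted branch has seen, so this still holds all rows in memory, much like list() would:

```python
from itertools import tee

def count_without_consuming(iterator):
    """Return (count, replacement_iterator). The original iterator is
    consumed, but the returned copy yields the same elements."""
    to_count, replacement = tee(iterator)
    return sum(1 for _ in to_count), replacement

rows = iter([("alice",), ("bob",), ("carol",)])  # stand-in for a DB cursor
n, rows = count_without_consuming(rows)
assert n == 3
```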
You could have mentioned that multi_accumulate() and multi_reduce() are monadic, and which monad they implement.
Hmmm true, but would throwing in the definition and explanation of a monad in an already 20 minute video be a good thing or a bad thing?
@@mCoding monads are meant for pure FP anyway; in FP they don't have statement blocks, and in particular no exception blocks, so they don't really have a place in Python.
That said, it would be an interesting topic for a future video if you'd like to attempt it, because it's not an easy topic to explain clearly. Maybe a series explaining the common monads, complemented with OO design patterns.
Some of them are quite useful, but as a developer, I need to break my habits first: remember the use case and reach for itertools instead of loops.
itertools.groupby is reminiscent of how old-fashioned Hadoop works, with parallel key-value operations
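The analogy holds because groupby only groups *consecutive* equal keys, so you sort by key first, the same sort-then-reduce-by-key shape as a Hadoop shuffle phase:

```python
from itertools import groupby
from operator import itemgetter

pairs = [("a", 1), ("a", 2), ("b", 3), ("a", 4)]

# Sort by key so equal keys become adjacent, then group.
pairs.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(pairs, key=itemgetter(0))}
assert grouped == {"a": [1, 2, 4], "b": [3]}
```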
I would have used those combinatorics functions (and even zip longest) for advent of code challenges.
Probably not any real world code
In the thumbnail you’re missing a )
I never use itertools. I don't know why there is such pressure to learn it.
How come all the comments are porn bots? And who's this Scott they're all referring to?
That's just modern day RUclips. Dislike and report.
luckily they all got deleted, or maybe youtube is filtering them from my view
I don't see them. Maybe RUclips is showing them only to frequent visitors to porn sites?😂
You would see them if you're early. I reported all of them, and I suppose many other early viewers did, so I guess YT has removed them.
@@squishy-tomato They don't, but it's likely dot separators caused an automated deletion based on it recognizing it as a URL or IP address. I have that happen far too often to my own posts, so I try to avoid using too many dots here and there.
If you need a list with a given length (eg for some sort of buffer/storage), instead of
list(itertools.repeat("X", 4))
You could also use
["X"] * 4
Not sure if there are any notable performance differences, but you need one less import, and its shorter
Everything in itertools was written to be lazy because iterables may have infinitely many items, or a finite number of items that exceeds the amount of RAM.
repeat("X", 1_000_000_000) versus ["X"] * 1_000_000_000
Additionally, you may not know ahead of time how many times you need to repeat that element, and iterators don't specify a length (and shouldn't be eagerly consumed). This starts to matter once you've adopted fully lazy iterators in your code:
excel_column_A_cells = zip(repeat("A"), count(1))
yields A1, A2, A3, ...
You couldn't implement this with an explicit list.
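A runnable version of that idea, with islice taking a finite slice of the infinite stream:

```python
from itertools import count, islice, repeat

# Lazily label Excel-style cells in column A; count(1) never ends,
# so only islice makes this safe to realize.
cells = (f"{col}{row}" for col, row in zip(repeat("A"), count(1)))
assert list(islice(cells, 3)) == ["A1", "A2", "A3"]
```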
@@SalamanderDancer Technically it's a generator for tuples `('A',1),('A',2),...`, but it is funny that using lazy evaluation requires writing more code.
When she iterate on my object till I throw
Python is so Slow
Mojo is fixing that