Stop Using the Worst Way to Loop Lists in .NET!
- Published: 8 Feb 2025
- Get 20% off our new Deep Dive: Domain-Driven Design course on Dometrain: bit.ly/44pf7sE
Get the source code:
Become a Patreon and get special perks: / nickchapsas
Hello, everybody, I'm Nick, and in this video I will compare every single loop technique that you could feasibly use in C# to find out which one you should be using.
Workshops: bit.ly/nickwor...
Don't forget to comment, like and subscribe :)
Social Media:
Follow me on GitHub: github.com/Elf...
Follow me on Twitter: / nickchapsas
Connect on LinkedIn: / nick-chapsas
Keep coding merch: keepcoding.shop
#csharp #dotnet
A video about IAsyncEnumerable with yield would be great!
I second this. Also, in a web API reading from a DB, is there any memory or performance gain vs an array? I'm sure there are tradeoffs, but I would like to know as well.
That would yield great results
I freaking love AsyncEnumerable. And if you combine it with the System.Linq.Async package, your team won't feel completely lost either. They can always ToListAsync() if they're not yet comfortable with the yield keyword.
Sure!!
@@T___Brown computer science 101 says EVERYTHING is a tradeoff. Memory vs performance will never escape us, by nature of our current tech.
Your instincts are so right they apply to pretty much everything, not just this particular case.
My only advice is unless you work on embedded systems or something like that, don't worry too much about this sort of micro-optimization unless you can 'feel' the code in your app, as in lag/unresponsiveness/unacceptable load times, etc.
Nick, why don't you look back at some of your code from years and years ago, pull out some of the questionable decisions you made, and then discuss why they were questionable? Hell, I have some code if you like!
That’s a great idea!
CodeCop: internal investigations
CodeCop (Season 2): Recursive Refactoring
Nick Spansas made another nice video about spans :D
before GTA 6 !!!
"We are going to use a random number as the seed 420" without breaking character 🙂
8:38
It would be cool if .NET had some kind of substructural type system reserved for these specific use cases, like for example:
```
var list = new List<int>() { 1, 2, 3 };
var span = CollectionsMarshal.AsSpan(list); // List is restricted until span is out-of-scope, if it can't prove it is out-of-scope then it will be restricted forever
list.Add(10); // Err: list is restricted by 'span'
```
As this would make situations like this impossible. Maybe I'll experiment a bit with roslyn analyzers later to see the suitability of it.
I think this would be solved by implementing Rust's borrow checker
seems like this is basically what all the overhead of the other methods is all about
I don't think this is a solution in itself; the borrow checker is something very unique in this regard and also very complex and costly. You can solve this with a simple ownership analysis, which is totally possible to do in Roslyn.
@@SumGuyLovesVideos What?
@@diadetediotedio6918 Is the underlying structure of the list a span? Do the foreach operations use the span, with error-trapping overhead behind the scenes? I was saying it seems like that must be what's happening, and that the overhead code is why it's not as performant; is that what is happening?
I am working as a .NET developer and this is the first time I have seen the goto keyword. It helped me a lot with data input in my console application. Obviously a much better option than an infinite loop that only breaks when the conditions match.
Thank you!
"Those days are gone"
*Unity developers crying in a corner*
We're just off in the corner implementing IJob and hoping Burst will take care of all of our problems 🥲
For real. Unity 6 announced and no .NET Core / CoreCLR still
@@dhkatz_ that was probably the saddest thing. Well... use DOTS :D
@@dhkatz_ ??
@@dhkatz_ that's what I'm pissed about the MOST!! Like, all the graphical advancements and no CoreCLR?! It's been in the works for years now, I think?
Nice vid. The large increase in the ForEach(delegate) is to be expected if it's not being inlined. But... it's biased. You are doing SO LITTLE work that the function call has disproportionate weight; we're at the ns level, after all. Throw in any amount of realistic work and the calling cost all but vanishes. That's the major issue with "microbenchmarks". Also, imho, the intent is that you can use it straight up when you receive an external function to run on the list, not really for code you control.
Indeed, ForEach is actually the only normal way to iterate there that is concise and elegant. Who uses indexes in production?
@@MarincaGheorghe Everyone that isn't doing simple s**t? If you can go a whole day without using indexes, you're not doing anything even moderately interesting...
You can avoid the ForEach closure by providing a static method instead of a lambda.
If your method requires context you can use an extension method that provides a generic T callback with the context.
I wish he tried this in the video. The T callback is also a great technique in general to avoid closures while keeping the lambda ease of use.
Good advice, but just having to remember that a block of code is sensitive to the capture of outer scope variables would already make me avoid that pattern (for general or performance oriented use), as that makes it a heavier cognitive load to read and modify compared to the other looping methods.
And you still pay the cost of the delegate invocation.
@@protox4 But it would still make for a better comparison to the other methods. Right now, it's roughly 3-4x slower due to the delegate invocation and the allocation of a new closure on every call.
Also, the problem is that we're capturing variables from the local scope, which causes the compiler to generate code that creates a new closure on every call. A static lambda is another way to avoid such mistakes.
@@keithwilliams2353 static blocks the capture of outer-scope variables that are not static as well, so this is not a real problem.
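For reference, a minimal sketch of that state-passing idea. The ForEach overload with a state parameter is not part of List<T>; it's a hypothetical extension method here, and the `static` modifier on the lambda is what blocks accidental captures:
```
using System;
using System.Collections.Generic;

public static class ListExtensions
{
    // Hypothetical extension: caller-supplied state is threaded into the callback,
    // so the lambda doesn't need to capture locals (no closure allocation).
    public static void ForEach<T, TState>(this List<T> list, TState state, Action<TState, T> action)
    {
        for (var i = 0; i < list.Count; i++)
        {
            action(state, list[i]);
        }
    }
}

public static class Example
{
    public static void Main()
    {
        var list = new List<int> { 1, 2, 3 };
        var results = new List<int>();

        // 'static' forbids capturing outer variables; the context travels via 'results'.
        list.ForEach(results, static (ctx, item) => ctx.Add(item * 2));

        Console.WriteLine(string.Join(", ", results)); // 2, 4, 6
    }
}
```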
Nick, another great C# vid as always. Keep up the great work!
You can also add unrolled loops to your benchmarks.
How do you unroll a loop if the iteration count is unknown, though? It's known here, but nobody guarantees in real code that the list will have N items.
@@Kitulous You still iterate, but process multiple items, say 4, per iteration. Then process the remainder with a regular loop if the size is not perfectly divisible by 4.
@@gdargdar91 nah, I'd unroll
What's an unrolled loop... and why should I care about it?
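For anyone asking, a rough sketch of 4-at-a-time unrolling (purely illustrative; modern JITs often do something similar for you on arrays and spans, so measure before hand-rolling it):
```
public static class UnrolledLoop
{
    public static int Sum(int[] items)
    {
        var sum = 0;
        var i = 0;

        // Process four elements per iteration to reduce loop-condition overhead.
        for (; i <= items.Length - 4; i += 4)
        {
            sum += items[i];
            sum += items[i + 1];
            sum += items[i + 2];
            sum += items[i + 3];
        }

        // Handle the leftover 0-3 elements.
        for (; i < items.Length; i++)
        {
            sum += items[i];
        }

        return sum;
    }
}
```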
Nick, a video on the IAsyncEnumerable and yield keyword would be awesome! Thanks
I would love to see a video on yield/return. I've never wrapped my head around that one.
I almost spilt my drink when I saw a GoTo being used !!!! :D
Very interested to hear that the "normal" foreach is as quick as a for - I've been trying not to use it. I guess the main reason now not to use it is that the iteration variable is read-only, whereas list[i] in a for loop isn't.
Not sure if this has been suggested, but you could use just a single variable, starting at "count-1", decrementing and stopping after zero is hit. You'd hit all the items in reverse order, so it might not suit every scenario. There'd be less work for the CPU, but I'd imagine any performance increase (if any) would still be pretty slim.
So correct me if I'm wrong:
1. No concurrent access to this List (as it could get changed)
2. No Inserting/Removing elements inside the loop (List can reallocate a new array, and you'd have a span of an old one)
3. No heap allocation inside the loop whatsoever (allocation could trigger a garbage collection, and shuffle objects around, invalidating our span)
If I have all these prerequisites in place, I should be safe, right?
The real trouble maker is the removal of items, because the span will allow you to access items that are now 'out of range', which depending on whether it's a value or reference type, will net you repeat/stale values or nulls.
Inserting isn't a big deal, if the list grows, you've still got the old array and can keep going.
The bigger issue with the list growing is that in-place updates/changes will no longer be reflected in your span (or vice versa).
Spans are built on refs, not pointers, so point 3 is not an issue in this scenario.
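A small sketch of the stale-span hazard being described, assuming .NET 5+ where CollectionsMarshal.AsSpan is available:
```
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

var list = new List<int> { 1, 2, 3, 4 };
var span = CollectionsMarshal.AsSpan(list);

// Growing the list forces it to allocate a new backing array,
// but 'span' still points at the old one.
for (var i = 0; i < 100; i++)
{
    list.Add(i);
}

list[0] = 42;
Console.WriteLine(span[0]); // still 1 - the span no longer observes the list
```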
I'd love to see a video about the .net 8 blazor and the hybrid rendering modes, it's a huge change IMO and I struggled with it a lot when I make a new project
2:41 I don't know what the JIT compiler will output, but in C this kind of loop produces a single instruction, since it basically means "get the last element". You may need to use something like an aggregation to prevent that JIT optimization. But it's still a valid benchmark imho. It proves the JIT most likely optimizes the simplest code better 😅.
It would be good to run some tests with big elements in the list, like a struct with 10+ fields, because in that case copying will really matter and you can compare the usual for vs a for with "ref".
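A sketch of what such a test could loop over, using ref over the span so the big struct isn't copied on every read (the struct and field names are made up):
```
using System.Collections.Generic;
using System.Runtime.InteropServices;

public struct BigValue
{
    public long A, B, C, D, E, F, G, H, I, J; // 80 bytes - copying is no longer free
}

public static class RefLoop
{
    public static long Sum(List<BigValue> list)
    {
        var span = CollectionsMarshal.AsSpan(list);
        long total = 0;

        for (var i = 0; i < span.Length; i++)
        {
            // 'ref' reads the struct in place instead of copying all 80 bytes.
            ref var item = ref span[i];
            total += item.A + item.J;
        }

        return total;
    }
}
```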
Your example also works for the Aggregate LINQ function. It would have been interesting to see how ForEach and Aggregate perform compared to each other and to the other examples.
But otherwise great video as always 😄
I would like to have a video on IEnumerable and how can you yield return items. It should be very interesting. Covering IAsyncEnumerable also would be superb.
Recursion would be a nice one in the comparison. 😊
For 1 million iterations as in the video, with recursion you might run out of stack space. You might also not - I don't know - but I don't think it's good practice to use recursion for so many iterations.
This is because with recursion, every time the same method is called, the data associated with the old call of that method is not popped from the stack. Therefore if (for example) in your recursive method you defined an integer myInt, then by the end of the recursion you will have 1,000,000 local myInt variables stored simultaneously. On the other hand, a for loop just stores a single integer i, and it reuses that same i when it moves on to the next iteration.
Recursion is great in certain situations, just maybe not when you've got an extremely large collection. I agree it would have been nice to see what would happen though!
It's a million iterations of a flat loop, not really a good fit for recursion. Although I think you'd still have the same issue ForEach has.
Recursion is really only good for traversing trees and things like that. Where the path of the code would require complicated looping with extra memory usage.
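For illustration, a recursive version of the same flat loop; with a million elements this risks a StackOverflowException, which is exactly the concern raised above:
```
using System.Collections.Generic;

public static class RecursiveLoop
{
    public static int Sum(List<int> items, int index = 0)
    {
        if (index >= items.Count)
        {
            return 0;
        }

        // One stack frame per element: fine for trees, risky for a million-element flat list.
        return items[index] + Sum(items, index + 1);
    }
}
```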
What is the explanation why spans are faster? I don't think it is range checks, because both List and Span do them (unless I am mistaken?).
The indexer on the list adds an extra indirection and range check.
I think bounds checks are necessary for a list held in a field because async or multithreaded code could change the size of the List during the iterations (or replace the list that the reference points to, by an assignment). A span can't outlive the current stack frame it's on; nothing can change its reference or size except the code of the currently executing method (or the non-async methods it's passed to). Because of this, the JIT KNOWS when it can safely skip generating bounds checks. The same can be done for arrays defined locally in a method (usually).
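That matches the classic pattern the JIT recognizes; a simplified illustration (not from the video):
```
using System.Collections.Generic;

public class BoundsCheckDemo
{
    private readonly List<int> _list = new() { 1, 2, 3 };

    public int SumArray(int[] items)
    {
        var sum = 0;
        // 'items' is a parameter and the limit is items.Length, so the JIT can
        // prove 'i' is always in range and elide the per-element bounds checks.
        for (var i = 0; i < items.Length; i++)
        {
            sum += items[i];
        }
        return sum;
    }

    public int SumList()
    {
        var sum = 0;
        // _list lives in a field: other code could swap or resize it mid-loop,
        // so every _list[i] keeps its range check.
        for (var i = 0; i < _list.Count; i++)
        {
            sum += _list[i];
        }
        return sum;
    }
}
```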
Great video Nick!
What about the Select method in linq? I'd be interested in the performance difference that has compared to the ones you demoed in this video
LINQ is always slower. What you gain in fewer lines of code is rarely rewarded with good performance.
Use whatever causes the least grief for the team you're working with -- unless a *customer requirement* is failing and requires a specific variation. Team "grief" can be from breaking preferred standards, budgetary issues, resource usage, etc. that your co-workers (and perhaps managers) impose above customer requirements. If you've met customer requirements, then you're done and worrying about this is working for free. Don't work for free.
6:21
Unfortunately, it still is for custom enumerators in custom collections (even if you are using a ref struct enumerator without any allocations, .NET won't do the same optimizations on it, so it will be slower).
I'd always prefer foreach() over for() as long as I don't need the loop counter.
As you should. It's just less words for the same thing.
@@gileee apparently it's not the same, as it's 3 times slower.
@@Hillgrov Where did you see that? Maybe 5 years ago that was the case, but you can't trust these measurements when they say 5us +- 5us error lol.
@@gileee this video... he just showed it.
@@Hillgrov You are looking at the .ForEach, not the foreach :)
For the label part, even better, prefix every line with "L10:", "L20:", "L30:", and so on, and pretend you're programming in Commodore Basic.
Genius.
If Span is referencing the underlying array, how does the List compare to an array?
In C#, a list IS an array as far as the runtime is concerned. The only time you notice it, for practical intents and purposes, is when it reaches its limit and has to grow. You can test it by not pre-dimensioning it and just adding more and more items while tracking "Add() time". Every once in a while you'll get a minor delay, meaning you hit the max size and a new array had to be allocated; the old one is then copied to the new one and disposed. As you might guess, the minor delay gets worse over time as more memory needs to be copied. And if you have REALLY BAD memory fragmentation you can even hit a point where you can't grow because there's no contiguous space large enough.
@@ErazerPT it is not about the memory characteristics; it's the access through the indexers in the various loops I'm interested in.
@@ryan-heath In terms of access, a list and an array are identical. The only difference is that a list is a dynamic array, meaning it can be appended to, while a regular array is static; you cannot add or remove items from it.
@@ryan-heath Not sure what that meant. In foreach/ForEach you don't have access to the current index, in the others you have.
In Span, you are not slicing so it's 1:1. If you were, you'd just have mempointer+[n].
If you mean speed of access of list[n] vs span(oflist)[n] vs array[n], it should be the same quite likely because the runtime/JIT will most likely turn everything into array[n]. It's not bound to play by the same rules as "user code" ;)
@@ErazerPT If the array is as fast as the List.AsSpan(), will array access always beat the List in any loop variant?
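A minimal BenchmarkDotNet sketch one could run to answer that directly (assumes the BenchmarkDotNet package is referenced; results will vary by runtime and hardware):
```
using System.Collections.Generic;
using System.Linq;
using System.Runtime.InteropServices;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class IndexerBenchmarks
{
    private int[] _array = null!;
    private List<int> _list = null!;

    [GlobalSetup]
    public void Setup()
    {
        _array = Enumerable.Range(0, 1_000_000).ToArray();
        _list = _array.ToList();
    }

    [Benchmark(Baseline = true)]
    public int ArrayFor()
    {
        var result = 0;
        for (var i = 0; i < _array.Length; i++) result = _array[i];
        return result;
    }

    [Benchmark]
    public int ListFor()
    {
        var result = 0;
        for (var i = 0; i < _list.Count; i++) result = _list[i];
        return result;
    }

    [Benchmark]
    public int ListAsSpanFor()
    {
        var result = 0;
        var span = CollectionsMarshal.AsSpan(_list);
        for (var i = 0; i < span.Length; i++) result = span[i];
        return result;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<IndexerBenchmarks>();
}
```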
One almost unrelated thing: I don't think the "for" loop is the most common looping type; almost everyone uses "foreach" instead of "for" based on what I see. In my experience, "for" is only used when "foreach" is not available for some reason, which is pretty rare. Even "while" is more common than "for" from what I see; I personally use it like 3-4 times more frequently. (Note: this of course only applies to C# and maybe a few other languages that can do iterations.)
For used to be the main loop you saw, and in older code you will still see for loops everywhere. More recent code favors foreach, and you mostly see for loops when you actually want to use the iterator value for some reason.
For is used when you need random index access (two items at the same time). And when you have an IEnumerable it is much easier to use foreach.
@@joshpatton757 Old languages didn't have foreach loops. In c# there used to be a performance penalty in foreach which people really didn't like.
@@gileee I was talking about older C# code, not older languages. And yes, that was one of the reasons why things shifted.
@@joshpatton757 Older programmers came from older languages, but kept the same habits. That's why I mentioned it.
Is this CollectionsMarshal approach faster for complex generic types too? Or only for primitive-ish types (such as strings, ints and so on)? 🤔
In December I will have been writing C# for 15 years. In all those years, I have never once needed to use goto. I wonder what the use cases are. I have used a few do-while loops, probably half a dozen times.
The only time I ever saw goto used in production C# code, there was a comment next to it saying "// SEE! goto IS useful sometimes!!!!!".
It was just an if-else except they used a goto instead of else. Totally unnecessary.
(I did occasionally see goto used for seriously hardcore performance optimizations back in my C++ days, when it was profiled and measured to improve things. Not since then).
I thought it was a legacy thing from C/C++, where it would be easier to just copy code from them wholesale and use it in C#. As long as you change the notation, you're good to go. I've probably used it once to get out of a deeply nested for loop, because at one place I worked you were not allowed to use early returns at all and goto was preferred 🤷♂
If you ever want to do switch-case fall-through, you need to use the goto keyword. Really, the only other use case I have used it for is emulating Rust's labeled loops.
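A small sketch of that fall-through use, since C# cases can't silently fall through like C ones can:
```
using System;

public static class SwitchFallThrough
{
    // Describe(2) prints "very " and then returns "important".
    public static string Describe(int level)
    {
        switch (level)
        {
            case 2:
                Console.Write("very ");
                goto case 1; // explicit fall-through: the only form C# allows
            case 1:
                return "important";
            default:
                return "normal";
        }
    }
}
```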
Nice video as always
Nick what about for example .Select from Linq?
string result = "";
_list.Select(x => result += x);
This is a bit of a "please don't do it", but still a way to loop XD
Using .Select is a very elegant, clean, expressive & concise way of looping. In 90% of cases, for & foreach could be replaced with .Select. As a matter of fact, I only use foreach in synergy with yield return, in the case of large collections; otherwise I prefer .Select.
I don't give a sh!t that my code is 200 microseconds slower, because I'm not launching a missile or a rocket into outer space. I believe it is worth having cleaner, easier-to-understand & nicer-to-read code, rather than some 300-microseconds-faster hairy sh!t.
I'm not promoting the code above as masterpiece of functional paradigm, just speaking in general.
@@johnnykeems2911 For me it's just the fact that the code I put up there captures an out-of-scope variable inside the lambda and executes an operation without a return; I would refactor this in some way that returns the value instead. But yeah, most of the time I loop with LINQ and lambdas. You may not be launching a missile, but this could be the difference between a client having to pay more for infrastructure or not; that is when I would think of making the kind of decisions that Nick is showing in the vid.
@@ricardoduarte442 I never said that the code you put there is 100% :)) I just said that for me, looping with Select is the better way of doing it in most cases. I don't think that using .Select instead of for or foreach in a business app will impair performance so much that it will cost your client lots of money for infrastructure; this is simply not serious. For example, a badly written SQL script could cause you much more overhead, and therefore infrastructure cost, than .Select :)) It still depends on the particular situation, but take into account that LINQ extension methods are also optimized for performance, so it could turn out that doing it with LINQ is more efficient than doing it manually :))
First of all, it's not a loop, because Select itself is a lazy method, it only produces a new IEnumerable. You still have to foreach on it, or to call a method like Count or Sum.
Secondly, this is an Anti pattern, because it mixes up side effects with pure functional programming. LINQ methods are not meant to alter context, but to produce results without mutating context.
@@jongeduard
"First of all, it's not a loop, because Select itself is a lazy method, it only produces a new IEnumerable." And???
"You still have to foreach on it, or to call a method like Count or Sum." Are you sure of that??
"Secondly, this is an Anti pattern, because it mixes up side effects with pure functional programming." I don't see how the .Select extension method mutates any object other than the new one it actually returns. Are you trying to persuade me that LINQ methods are not functional?? Dude, saying that Select is an ANTI-pattern is the most stupid sh***t I've heard today so far. Anyway, I'm not going to argue, you can live with that belief :)
The worst thing I see is that people call ToList or ToArray on an IEnumerable JUST to be able to use the ForEach method.
So in that case the main issue is not just using a slower loop, but the needless buffering (which, of course, internally needs another loop of its own).
Will these results hold if the list type is not visible to the compiler/linker? E.g. if you get an IReadOnlyList as a response from a function in a separate DLL?
The foreach would get worse if the static type of the variable isn't List<T> or T[]. The reason foreach is as fast as the for loop is that List<T> has a struct enumerator that the foreach loop binds to. Even just changing the type to IList<T> would require allocating an enumerator and cause a slowdown.
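Roughly what that difference looks like in code (a sketch; the concrete List<T> case binds to the struct enumerator, while the interface-typed one boxes it):
```
using System.Collections.Generic;

public static class EnumeratorBinding
{
    public static int SumConcrete(List<int> list)
    {
        var sum = 0;
        // foreach binds to List<int>.Enumerator, a struct: no allocation.
        foreach (var item in list)
        {
            sum += item;
        }
        return sum;
    }

    public static int SumInterface(IReadOnlyList<int> list)
    {
        var sum = 0;
        // Here foreach goes through IEnumerable<int>.GetEnumerator(), which returns
        // the enumerator boxed as IEnumerator<int>: one heap allocation per loop.
        foreach (var item in list)
        {
            sum += item;
        }
        return sum;
    }
}
```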
I once used "a totally random number of 69" in a presentation for my team. I don't think anyone really noticed (pretty sure half of them weren't actually there in principle of attending an online meeting). It felt pretty good to sneak it in though. Thank you for teaching me that, Nick!
I'm curious exactly why .ForEach scores so low. I'd have thought the optimization phase would have generated the same code as the foreach loop.
I am curious how a plain listing of all 1,000,000 elements would compare to the rest. Nick, can you do it, please?
I realize this is sample code, and you know it's not a problem here, but my mental code-reviewer won’t let this go: your do/while loop doesn't account for the possibility that the list is empty.
That's actually one of the reasons do/while is so uncommon; it assumes a minimum of one iteration, when zero is a valid scenario in most production software.
The one time I use it is when querying an API that has pagination with a 'next page link'. It has to do at least one query to start with, and optionally query the other pages. Any other examples?
@@sttrife Maybe if you're building a table or csv file? First iteration is a header row you know will exist, and then the next n loops are the data.
I also use them sometimes when generating data for a one-off unit test. Like, if I want to test involving two random values that I can be sure don't match:
int i1 = GetRandomInt(10);
int i2;
do { i2 = GetRandomInt(10); } while (i1 == i2);
@@sttrife Anything where you first do an action and then test the result of that action to determine whether you need to keep doing it.
@@sttrife Like Wilee said, anywhere where at least one iteration is expected. Although a for loop is usually the best fit there too, just for consistency, so it's only truly useful if you do non-trivial logic in the loop that determines whether the loop should continue. Without a do-while you'd have to duplicate the code before the for/while loop to determine whether you should skip the "second" run of that code (i.e. the first run of the loop). You can come up with infinite examples.
Like when you run a game, you need a game loop, right? Part of that game loop is also reading input from the input devices. If that input is to quit the game, then you trigger a break.
Like:
do { input = readInput() }
while (input != EXIT)
With for/while you'd need something like:
var input = readInput()
while (input != EXIT) { input = readInput() }
So there's code duplication happening. Which is a bigger issue if there's more logic happening than just a call to readInput() in this example.
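And a sketch of the pagination case mentioned earlier in the thread; every type and method here (Item, Page, FetchPageAsync) is made up for illustration:
```
using System.Collections.Generic;
using System.Threading.Tasks;

public record Item(string Name);
public record Page(List<Item> Items, string? NextPageUrl);

public class PagingClient
{
    public async Task<List<Item>> GetAllItemsAsync(string firstPageUrl)
    {
        var items = new List<Item>();
        string? nextUrl = firstPageUrl;

        do
        {
            // At least one request always happens - exactly the shape do/while expresses.
            var page = await FetchPageAsync(nextUrl);
            items.AddRange(page.Items);
            nextUrl = page.NextPageUrl;
        }
        while (nextUrl is not null);

        return items;
    }

    // Stand-in for a real HTTP call.
    private Task<Page> FetchPageAsync(string url) =>
        Task.FromResult(new Page(new List<Item>(), null));
}
```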
Since you're not even doing anything in the loop - and just returning result[last] - isn't the compiler smart enough to unwind and remove the entire loop, and just return result[last]? I'm surprised there's no Rider hint telling you that the loop is pointless; ReSharper usually knows.
Interesting that not only is Span twice as fast as the "for"s + "while", but it has also twice the statistical spread (cf. error/deviation of 2.[...] vs 4.[...])! Why is that?
I'd love to see some stuff about IEnumerable.ToArray() vs IEnumerable.ToList() and when one makes sense over the other. I'm generally not a fan of List if I can get away with just constructing my IEnumerable and iterating it into an array with yield return. But I'm uncertain about whether it's a good idea in terms of performance. Sure iterating the array afterwards might be faster, but what about materializing it? Is ToList() more optimized for handling materialization of any IEnumerable since the size isn't known beforehand?
IEnumerable.ToArray() uses an internal struct called LargeArrayBuilder, which is kind of like a List but it builds with pools instead of a single array. After all your items have been added to the LargeArrayBuilder, ToArray() is called which allocates the return array and each pooled buffer is copied into the destination with Array.Copy().
It's hard to say whether one is faster than the other without benchmarking; however, the fact that it uses pools means there should be less redundant data copying on larger datasets, making ToArray() faster.
I ran some benchmarks and found that ToArray is both faster and more memory efficient than ToList, provided your data is large enough and you use a length-modifying method like Where (there is no difference with Select).
With a size of 100, ToArray and ToList were identical in speed and memory.
At 10,000 the speed is still the same, but ToList uses almost double the memory (78.8KB vs 128.4KB).
At 1,000,000 ToArray is faster (4,443us vs 5,204us) but only a bit more memory efficient (7,813KB vs 8,192KB).
At 10,000,000 ToArray is still faster (42,356us vs 50,718us) and more memory efficient (78,126KB vs 131,073KB).
In conclusion, ToArray is almost always better, but how much better depends on your data size.
The test was run using int[]s.
I'm really annoyed that youtube removed my first reply :/
Also, for the .ForEach version, wouldn't it be faster (and allocate less) if you didn't capture a variable?
The closure isn't helping its performance, but I doubt it could catch up to the basic loops with all the help in the world. Because of the extra function calls.
@@gileee I don't think it would catch the basic loops' performance, but it could potentially improve it greatly. Maybe even more if the JIT can do some function inlining here (which I'm not sure it would do).
There is no "foreach", "for" or "while" in il. All of the loops variants should end up as "if" and "goto". Why then is "for" faster than "goto"?
The title is a bit misleading, since the video is specifically about iterating a List, not just comparing the kinds of loops in general
You need a list (or any IEnumerable) to iterate over in the first place.
He is comparing the loops.. not the collection types
Who else didn't know you can do 1_000_000
You should try a loop using pointers in an unsafe context, that should be the fastest of them all
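For completeness, a sketch of what that unsafe version might look like (requires AllowUnsafeBlocks in the project; whether it actually beats the span version is something to benchmark rather than assume):
```
using System.Collections.Generic;
using System.Runtime.InteropServices;

public static class UnsafeLoop
{
    public static unsafe int Last(List<int> list)
    {
        var span = CollectionsMarshal.AsSpan(list);
        var result = 0;

        // 'fixed' pins the span's first element so the GC can't move the
        // backing array while we hold a raw pointer to it.
        fixed (int* ptr = span)
        {
            for (var i = 0; i < span.Length; i++)
            {
                result = ptr[i];
            }
        }

        return result;
    }
}
```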
"only spans!" :)) but typically I would use foreach.
for and foreach (depends if I need index)
Why shouldn't I replace all loops with the Span version?
As mentioned in the video, span traversal can fail if the collection changes while you are traversing it. So unless you can guarantee the collection isn't modified during traversal, you risk running into problems at runtime.
Unless you *really* need the performance in that specific scenario, it's better to just use a regular foreach.
Did you know that `(num & 1) == 0` is faster than `num % 2 == 0`? Doesn't mean you should use it though, as it is both less clear and the performance benefit is less than a CPU cycle.
It's all about knowing how to employ the tricks when performance is absolutely necessary, like if you're iterating over a billion items.
Thanks
@Nick have you looked at HTMX?
Stay tuned :D
Searching the comments for "420" references... I know yall saw it.
I was hoping you use parallel loops!!
Parallel loops are kind of situational, and are predictably going to be much faster. You can't parallelize every loop...
@@HMan2828 Or even slower in cases like this where very little work is done, so the overhead of the parallelism overtakes the actual work.
I just want to mention to be careful with the statement you made about 'foreach'. If you are looping through collections from the standard library then yes, they are quite optimized, but some custom enumerators might not have the same optimizations in place, which might or might not impact your performance.
Why is the ForEach method so much slower?
Span is faster because it doesn't do bounds checking, right?
It technically does do bounds checks, but the JIT is good at optimizing them away because it can prove the access can't go out-of-bounds. A list can be mutated elsewhere so the checks are needed.
A 200 ms boost on an array of one million items... If you care about those numbers, you probably shouldn't use C# in the first place.
I thought the native ForEach method was more efficient than a foreach 😭
No parallel loop love?
You forgot the manual enumerator (while/MoveNext), which should be close to the fastest without the risk of the span.
That's what foreach loop does.
@@protox4 Sort of. Sometimes. Not always, which is why it's slower than for. If it were just the enumerator, it would be as fast as or faster than for.
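This is the manual version being described; for List<T> it uses the same struct enumerator that foreach expands to (a sketch):
```
using System.Collections.Generic;

public static class ManualEnumeration
{
    public static int Sum(List<int> list)
    {
        var sum = 0;
        using var enumerator = list.GetEnumerator(); // List<int>.Enumerator, a struct

        while (enumerator.MoveNext())
        {
            sum += enumerator.Current;
        }

        return sum;
    }
}
```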
foreach is probably the most used way to loop over lists
The other possibility is by using recursion!
Pretend it's not possible. You'll be happier and will write better code that way.
No Aggregate comparison?
oh, a goto in c#... my old friendnemy
Shouldn't then the list be pinned?
What about AsParallel().ForAll()?
well if you go down that route you have to take into account so many more different things
"Recursion is easy once you understand" 💀
Fun fact: 'var _size = _list.Count;' does nothing. The JIT optimizer is smart enough to compute the value only once and store it inside a CPU register, regardless of whether you place it into a separate variable or not. Just compare the resulting JIT assemblies. The length check will always consist of a single compare instruction like 'cmp edx, eax'.
Kinda strange to see such a bogus 'optimization' from you.
Did you benchmark this? I just tried it on .NET 8 and there was a 200us difference for the For loop method (this was in release mode).
iirc in one of his videos, he tested that array.Length would be optimized in a for loop, but not list.Count
Again, just look at the resulting assembly.
Reference for sharplab: #v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKGIAYACY8lAbhprKfIHYGBvGg2HdGAGwgQADgAoAMgEtcGADzM6APgZilGAJQChI4+oaxcAVzEYGAXgYA7K2PbUjx4QDNoDGQoc2CnYMdKwMQSraugB0AMIQFgFhCgDUKQaC1B7ZZjCW1sE6ygDaCgC6rjkMAL4cWTnEfOZWGJUitdQdQA==
Now let's analyze the function.
L0000: sub rsp, 0x28 ; inits the stack
L0004: xor eax, eax ; 'result = null'
L0006: xor ecx, ecx ; 'i = 0', remember i being ecx
L0008: mov r8d, [rdx+0x10] ; 'list.Count' is loaded into r8d here
L000c: test r8d, r8d ; actually cool optimization, tests if 'list.Count' is 0
L000f: jle short L003d ; if the condition met, jumps to the end immediately
L0011: nop [rax] ; does nothing
L0018: nop [rax+rax] ; does nothing
; loop start
L0020: cmp ecx, r8d ; bounds check of i against 'list.Count', a bare register comparison here, no extra value load
L0023: jae short L0042 ; jumps to exception if the bounds check failed
L0025: mov rax, [rdx+8] ; loads a reference to 'T[] _items' inside the List
L0029: cmp ecx, [rax+8] ; this is another bounds check obviously, against '_items.Length'
L002c: jae short L004f ; jumps to exception if the bounds check failed
L002e: mov r10d, ecx ; makes an extra copy of i into r10, idk why
L0031: mov rax, [rax+r10*8+0x10] ; loads 'list[i]', finally
L0036: inc ecx ; 'i++'
L0038: cmp ecx, r8d ; 'i < list.Count'
L003b: jl short L0020 ; jumps for the next loop iteration if the condition met
; loop end
L003d: add rsp, 0x28 ; clears the stack
L0041: ret ; return
Now if you add an extra variable, it only creates a copy of the value, but rest logic stays the same.
I was actually more surprised by the internal '_items' bounds check here. It is unnecessary, as the internal '_items.Length' can never be less than 'list.Count', but I guess the optimizer is not smart enough to realize that.
Also it loads '_items.Length' from memory on each iteration, which makes it very expensive. It is necessary as C# allows you to mutate the list during iteration, causing it to reallocate the array. That tanks performance massively.
Switching from the List to a regular array here makes the code insanely shorter and faster, and eliminates the bounds check completely.
Would love to have IEnumerable video
I didn't even know goto was present in C# 💀
It is! Mainly for decompilation purposes.
All break, continue, return and else statements are converted into goto in CIL. Sometimes the code is optimised when compiling and it's impossible for the decompiler to figure out what the originating code looked like, so it will output a goto.
Goto did nothing wrong.
It's even used in a few places in .NET itself
The ONLY use for it is using it to run a couple cases of a switch statement one after the other.
This happens every monday that lands on the 1st day of the month on the days there is a total solar eclipse in the north pole.😅
I'm done. This annoying ad in every video gets outdated so fast and interrupts the flow, EVERY VIDEO. No thanks.
AsPan()? More like AssPan() 🗿
Damn - only third
First
Maybe first 10 comments? Come on… gimme something nick!
90% off all dometrain for first comment on this video?
Can we talk about such a promotion?
.ForEach, all the way. One day Microsoft will make it fast. Right?
I don't think they can. The overhead is based on the anonymous class generated and allocated for the closure.
For the purpose shown here, using ForEach is the worst, but it has its advantages, for example in combination with a strategy pattern. As it takes a delegate, you could compose a behaviour from one type that does the iteration and another that defines the operation applied to the contents of the collection.
Parallel ForEach is faster. :P