Before watching the video on how they did it: SPAN!!!
Exactly what I am thinking
SPAN!
Span, span, span, bacon, eggs and span.
Let’s go!
@@daniellundqvist5012 I was just thinking about that Monty Python skit too. haha
I'm glad that they SPANd time on improving LINQ
please excuse yourself and refrain from entering any party
A special team was formed in Microsoft to work on these improvements named LINQin Park. (sorry not sorry for this joke :D)
They better keep a close eye on the team lead...
Why?? 😂
@@tudogeo7061 they shouldn't really do that, cause in the end..... it doesn't really matter
They are always One Step Closer to the perfection (sorry not sorry for this joke :D)
Honestly, Linkin Park's "In the End" to me always evoked the image of a tired dev screaming at his uncooperative code base.
At least LINQin Park brought salvation through .NET 9
Please do the video about Unsafe
Always happy to dev in .NET. The ecosystem is improving by the day!
Span simply is the game changer in .NET. So elegant.
Some guy on RUclips recently told me that .NET 9 was just "meh" and questioned the entire update. You should school that guy with your Code Cop series.
I mean, new-features-wise, it is meh. However, I am still excited, and have been for every .NET version for the past few years - and that's specifically because of the performance improvements they make. For me it just means my stuff runs faster and better without me having to do anything, so I'll take that happily.
I still wish there were more actual features in .NET 9 though - but I am still updating my main projects as soon as it hits, even if only for these free gains.
Insane! Wow, thanks for this detailed overview!
Please make an updated deep dive on spans and marshaling
While this is overall a positive thing, I'm worried that these seem like hardcoded optimizations. If you have a custom enumerable adapter implemented via yield return and you put it in between two LINQ adapters from the standard library (like list.CustomAdapter1().Skip(5).CustomAdapter2().Take(5)), you won't get these performance benefits, right? Not to mention, AFAIK you're still paying the price of dynamic dispatch for every call to your Func lambdas.
I'm more a fan of adapters in languages like C++ or Rust, where the optimizations are done by the compiler itself, inlining all the lambdas and producing code equivalent to a for loop.
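To make the concern above concrete, here's a minimal sketch (CustomSelect is a made-up yield-return adapter standing in for CustomAdapter1/CustomAdapter2): once it sits in the chain, everything after it only sees a plain IEnumerable<int>, so the specialized list/span paths can't kick in. Not a benchmark, just an illustration of the shape of the problem.

```csharp
// Minimal sketch of the concern above. CustomSelect is a hypothetical
// yield-return adapter standing in for CustomAdapter1/CustomAdapter2.
using System;
using System.Collections.Generic;
using System.Linq;

static class CustomAdapters
{
    // A plain iterator block: the result is an ordinary compiler-generated
    // IEnumerable<T>, with no special shape LINQ can recognize.
    public static IEnumerable<T> CustomSelect<T>(this IEnumerable<T> source, Func<T, T> map)
    {
        foreach (var item in source)
            yield return map(item);
    }
}

class Demo
{
    static void Main()
    {
        var list = Enumerable.Range(0, 1_000).ToList();

        // Built-in chain over a List<int>: eligible for the specialized paths.
        var a = list.Skip(5).Take(5).Sum();

        // Same shape with the custom adapter in the middle: Skip/Take/Sum now
        // only see a generic enumerator, so they fall back to the general path.
        var b = list.CustomSelect(x => x * 2).Skip(5).Take(5).Sum();

        Console.WriteLine($"{a} {b}");
    }
}
```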
I think this type of optimization was chosen because C# runs on the CLR and isn't compiled to machine code directly. And there can be libraries written in other languages running in the same build, so the .NET team would have to edit the compilers of each of those languages, which is outside their scope of responsibility. The framework should remain a framework and not require changes to compilers, especially across many languages.
@@SerhiiZhydel Oh yeah, for sure. I probably wasn't being clear - I'm not trying to say .NET should do the same optimizations that gcc or rustc do. I was trying to say that even with these new LINQ optimizations it isn't hard to get into a situation where they all fly out the window, and if the verdict a newbie takes from this is "LINQ is fast now", they'll have a hard time realising why it is "suddenly" slow in their use case. Furthermore, my point is that even with these optimizations a for loop over a span is still going to be faster in C#, AFAIK, so LINQ still can't be used in performance-critical code.
Depends on the type of optimization. For the Span ones, I'd say that's right, you're getting nothing. For ones like "you don't need to order the whole result set before processing First", it sounds beneficial regardless.
So Unity announced at Unite 2024 that for version 6.1+ (or 6.1?) they will move from Mono to CoreCLR. Hopefully this means all those improvements will be there too!
According to their statement it should be there. Man, I'm just excited about Unity 6.1... Hope they deliver what they showed.
They were talking about "next generation" (Unity 7). The move won't happen in Unity 6 unfortunately
@@x3n689 Nooooooo!!!!!
Still a great move for them to make. Glad it’s now at least set on their roadmap. Zero memory allocation is huge, especially for game applications
I'm looking forward to those improvements a lot
Very interesting. I think this is a really good way to explain performance evolution. But something is missing.
To be honest and exhaustive, we need to check the performance evolution along several analysis axes:
- .NET framework version
- ORM (EF, Dapper, ...)
- Storage kind (SQL Server, Oracle, MongoDB, Cassandra, ...)
Without this, it is really difficult to conclude that an evolution of the framework alone means 6x performance.
I'm starting to understand the benefit of upgrading projects to newer framework versions.
Sounds good. We use LINQ a lot at work, so we will definitely benefit from switching to .NET 9, but it is unclear when we will switch from 8 to 9.
Normally we only switch to LTS releases and, as you all know, .NET 9 is not an LTS release.
Never understood why LTS matters so much. .NET 9 is still supported for 18 months and then .NET 10 will come out.
@@FraserMcLean81 It's simple: if you work in a very large company, adapting libraries or software is expensive and/or time consuming. There may also be security risks involved. Therefore it's common practice that mid-size to large companies only use libraries that are well tested, hardened and used by many companies. In addition, the libraries are heavily tested by the IT department against vulnerabilities. Such tests take many months, and it takes 1-2 years until software or libraries are approved.
@@FraserMcLean81 Because it is hugely expensive to migrate the code, the dependencies, the build, the install and the tests, and when the product finally reaches your clients' devices, that version of .NET is already no longer supported... Those short-term versions are useless in a complex professional environment.
These improvements are welcome. But I would also like it if .NET focused a bit on doing actual compiler optimizations - many of the LINQ constructs can easily be converted into normal loops, with massive performance implications compared to needing to allocate enumerables (a rough sketch of that kind of rewrite follows this thread).
They're doing a lot of those too. For example, they've started supporting stack-allocating classes in a few scenarios.
@@modernkennnern
They did? What, I thought that wasn't going to happen any time soon - where did you see this?
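For anyone wondering what kind of rewrite the original comment means, here is a rough sketch (purely illustrative - the compiler does not do this for you today): both methods return the same result, but the hand-written loop skips the enumerator and delegate machinery.

```csharp
// Purely illustrative: the kind of LINQ-to-loop rewrite the comment wishes
// the compiler would do automatically. Both return the same result; the
// loop avoids the intermediate iterators and delegate invocations.
using System;
using System.Collections.Generic;
using System.Linq;

class LinqVsLoop
{
    static int SumOfEvenSquaresLinq(List<int> numbers) =>
        numbers.Where(n => n % 2 == 0).Select(n => n * n).Sum();

    static int SumOfEvenSquaresLoop(List<int> numbers)
    {
        var sum = 0;
        foreach (var n in numbers)
        {
            if (n % 2 == 0)
                sum += n * n;
        }
        return sum;
    }

    static void Main()
    {
        var data = new List<int> { 1, 2, 3, 4, 5, 6 };
        Console.WriteLine(SumOfEvenSquaresLinq(data)); // 56
        Console.WriteLine(SumOfEvenSquaresLoop(data)); // 56
    }
}
```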
Nice update. I usually never optimize my code, as performance really isn't a problem anymore; when I do optimize, it's the algorithm that I change.
Hi Nick - so this video is all about IEnumerable and the in-memory side of LINQ. What about IQueryable and DB operations - is there an improvement over there as well?
I've been anticipating this since forever ago. There was no good reason for LINQ to be as slow as it has been up until now.
I believe replacing iterators with spans might actually change the behavior in some weird cases, mightn't it?
Let's say your junior dev decides to modify the underlying list inside the Select lambda. Currently, the iterator will throw an exception explaining that that's a terrible idea. When using spans, it will work, but you won't know whether a reallocation happened, so you might be iterating over the current underlying span or the old one.
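A quick sketch of the scenario being described (the mutation inside the lambda is deliberately bad code): on the classic enumerator-based path this throws InvalidOperationException because List<T> version-checks its enumerator. Whether a span-based fast path would still catch it is exactly the open question raised here.

```csharp
// The scenario described above: mutating the source list from inside the
// Select lambda. With the classic enumerator-based path this throws
// InvalidOperationException ("Collection was modified; enumeration
// operation may not execute"). Whether a span-based fast path still
// detects that is the question being raised.
using System;
using System.Collections.Generic;
using System.Linq;

class ModifiedDuringEnumeration
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };

        try
        {
            var doubled = list
                .Select(x =>
                {
                    list.Add(x); // mutating the source mid-enumeration
                    return x * 2;
                })
                .ToList();
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```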
Unless I’m misunderstanding the video then congratulations on the newborn baby! I’ll keep an eye out for less frequent videos from an increasingly tired Nick.
🗣️🗣️🔥🤝 Stephen Toub sign on my newborn baby
We're migrating to .NET 9 with this one 😎
You mean net 6? -your company
Yes, please make an Unsafe.* video.
Looks good nick.
Span, what else could it be
Iterator consolidation 🤯
Impressive... Most impressive! 😮
This is why I strongly oppose hand-optimizing code by avoiding language features. Keep your code short/simple and leave it to the compiler/language to make it faster.
This looks a lot like what RDBMSs are doing when planning query execution. Only they use heuristics and statistics over the source data while this seems more universal.
Brilliant. We had to refactor some methods back in dot net 7, because linq was extremely slow in those situations. Love this one ❤️
Does the first improvement apply to lists of objects? Who has lists of numbers in their code?
Hate people hating LINQ, it's amazing
Who hates LINQ?
@@tedchirvasiu
stupid people
@@tedchirvasiu There will always be people calling out LINQ for being slower or worse in some way, because that's how people on the internet work
@@tedchirvasiu query syntax LINQ is very weird imo, lambda syntax LINQ is great
@@akirakosaintjust It (the query syntax) is weird until it's not. In 98% of scenarios the lambda syntax is appropriate, is easier to use, and looks great. I have run into scenarios requiring joins that either weren't possible or were so hideously ugly with the lambda syntax that it wasn't worth it. If you ever run into such a situation, don't fret. The query syntax makes a lot of sense once you're forced to work with it a bit, and it becomes very powerful, albeit in a limited number of situations. You can start using it just as you'd use T-SQL with joins, but on in-memory sets of data, joining as necessary. Selecting from those joins only the data you need is a thing of beauty, like a skilled surgeon with a scalpel.
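For anyone curious what that looks like, here's a toy sketch (the Customer/Order records are made up): the same join and projection in query syntax and in lambda syntax.

```csharp
// A toy example of the kind of join the comment means. Customer/Order are
// made up; the point is just how the query syntax reads versus chaining
// .Join(...) with four lambda parameters.
using System;
using System.Collections.Generic;
using System.Linq;

record Customer(int Id, string Name);
record Order(int CustomerId, decimal Total);

class QuerySyntaxJoin
{
    static void Main()
    {
        var customers = new List<Customer> { new(1, "Ada"), new(2, "Linus") };
        var orders = new List<Order> { new(1, 10m), new(1, 25m), new(2, 5m) };

        // Query syntax: reads a lot like SQL.
        var summary =
            from c in customers
            join o in orders on c.Id equals o.CustomerId
            group o by c.Name into g
            select new { Name = g.Key, Total = g.Sum(o => o.Total) };

        // Lambda-syntax equivalent, for comparison.
        var summary2 = customers
            .Join(orders, c => c.Id, o => o.CustomerId, (c, o) => new { c.Name, o.Total })
            .GroupBy(x => x.Name)
            .Select(g => new { Name = g.Key, Total = g.Sum(x => x.Total) });

        foreach (var row in summary)
            Console.WriteLine($"{row.Name}: {row.Total}");
    }
}
```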
Fantastic improvements from Microsoft. I like LINQ. Thank you Nick for this video.
a video on the Unsafe class would be awesome
Did Distinct and OrderBy get their piece of the cake?
This video really spanned my LINQ.
Is there better performance for normal code too, i.e. Where, FirstOrDefault, Select, GroupBy? 99% of my code uses these.
With postponed new features and coming performance improvements for existing features, it feels like Microsoft could've changed their release cycle and called this an LTS release...
Serious question: what if the data is too large to convert to a span? Are there still performance improvements? I generally avoid spans because of this fear of running out of stack space. Is that a legit worry?
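On the stack-space worry: a span over an array or a List<T> is only a view of the existing heap memory, not a copy, so the size of the data doesn't matter for the stack; only stackalloc puts the data itself on the stack. A minimal sketch (note CollectionsMarshal.AsSpan is only safe to use while the list isn't resized):

```csharp
// A Span<T> over an array or a List<T> is just a view over the existing
// heap storage (a pointer + length), not a copy, so its size doesn't
// matter for the stack. Only stackalloc places the data on the stack.
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

class SpanIsAView
{
    static void Main()
    {
        var bigArray = new int[10_000_000];        // lives on the heap
        Span<int> viewOverArray = bigArray;        // no copy, just a view

        var bigList = new List<int>(new int[10_000_000]);
        Span<int> viewOverList = CollectionsMarshal.AsSpan(bigList); // also no copy;
                                                                     // don't resize the list while using it

        Span<int> onTheStack = stackalloc int[128]; // this one IS on the stack, keep it small

        viewOverArray[0] = 42;
        Console.WriteLine(bigArray[0]);             // 42 - same memory
        Console.WriteLine(viewOverList.Length + onTheStack.Length);
    }
}
```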
Questions:
1. Did they also make improvements to the LINQ methods used inside real DB queries, i.e. on `IQueryable` objects? Or is that a different species, given it is translated to SQL or to whatever technology is behind it?
2. Did they also make improvements to the non-LINQ methods of collections?
What kind of sorcery is this? Nice work
All very cool; still, my codebase at work is .NET Framework 4.8, so..
There is so much crappy error handling in LINQ that results from a type system where null as a type was an afterthought. Also, the runtime type checks make it worse. A good example is List.Count(): LINQ is smart enough to check whether the source implements ICollection, but it still needs to do that type check at runtime.
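For reference, this is roughly the pattern being described - a heavily simplified sketch, not the actual BCL source:

```csharp
// Roughly the pattern the comment describes (simplified, not the real BCL
// source): Enumerable.Count() does a runtime type test for ICollection<T>
// so that a List<T> doesn't get enumerated just to be counted.
using System;
using System.Collections;
using System.Collections.Generic;

static class CountSketch
{
    public static int CountLike<T>(this IEnumerable<T> source)
    {
        if (source is null) throw new ArgumentNullException(nameof(source));

        if (source is ICollection<T> genericCollection)
            return genericCollection.Count;       // List<T>, arrays, HashSet<T>, ...

        if (source is ICollection nonGenericCollection)
            return nonGenericCollection.Count;

        var count = 0;
        foreach (var _ in source)                 // worst case: walk the whole sequence
            count++;
        return count;
    }
}
```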
Some people say it's more efficient to use normal loops instead of LINQ. Is that true??? Loops are even easier in some cases.
SPAN SPAN SPAN! SPAAAAAN!
When in doubt, span
Does this mean we can use Any() on a list instead of Count() == 0 ?
@DasBloch on a list you should use the Count property (Count == 0) instead of Count()
I haven't done C# for a long time. Can you explain your comment to me please?
@@paxcoder People have often used the Any() extension on a list to see if it is not empty: if (mylist.Any()) - it reads nicely. But Microsoft gives warnings when you do it; you should use if (mylist.Count == 0) for performance reasons. I was wondering if these improvements made the Any() check okay to use.
@paxcoder A List implements IEnumerable, which gives it access to LINQ extension methods like Count(). However, List also tracks its size directly with the Count property, since it's essentially a dynamically resized array.
Because List knows its size internally, you can use the Count property to get the number of elements directly. While Count() works fine on a List, it's not necessary in this case, as the property is faster because it avoids the method call.
You can use the Count() method when you want to count something specific with a predicate, like Count(x => x > 50).
So TL;DR: you don't need the Count() method when you can access the Count property directly 😁
@@marceldeger7487 I asked about DasBloch's comment. It's not apparent that these new optimizations have anything to do with Any() vs Count() == 0.
Fun fact: Enumerable.Any without arguments already uses the internal count if available, via TryGetNonEnumeratedCount. One should still probably use .Count if possible though, because TryGetNonEnumeratedCount does a runtime type test, as LINQ methods tend to do (not sure why they don't have different implementations for different types instead?).
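To make the thread above concrete, here's a small sketch of the three options side by side (TryGetNonEnumeratedCount is the .NET 6+ API mentioned above; it answers without enumerating only when the source already knows its count):

```csharp
// The three options discussed above, side by side.
using System;
using System.Collections.Generic;
using System.Linq;

class AnyVsCount
{
    static void Main()
    {
        var mylist = new List<int> { 1, 2, 3 };
        IEnumerable<int> unknownSource = mylist.Where(x => x > 0); // count not known up front

        // On a concrete List<T>, the property is the cheapest check.
        if (mylist.Count > 0)
            Console.WriteLine("list is not empty");

        // Any() works on any IEnumerable<T> and stops at the first element.
        if (unknownSource.Any())
            Console.WriteLine("sequence has at least one element");

        // TryGetNonEnumeratedCount avoids enumeration when a count is available.
        if (unknownSource.TryGetNonEnumeratedCount(out var count))
            Console.WriteLine($"count known without enumerating: {count}");
        else
            Console.WriteLine("would have to enumerate to count this");
    }
}
```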
So, Stephen Toub really signed on your newborn baby?
Effing great!
Why don't they implement this in C++ directly?
Could you please provide the source code for the project?
And here I was thinking these optimizations were already in the first version of Linq. Okay, spans did not exist, but explicit implementations for arrays and lists did exist, right? ... right? ....
They did do type checks for interfaces like IList, but spans were not used. It was optimized, just not this fast.
Nope. The first version was extremely naive. It has seen optimizations under the hood since then, but I don't believe it has ever had specialized implementations for backing arrays till now.
Hello Nick, can this potentially change the framework behavior in the case of badly written code where two threads are accessing the same enumerable - one using these methods with the new implementation and the other modifying the collection by adding an element at the same time?
The old framework should throw an exception, since it uses the enumerator to enumerate the collection inside the LINQ method. Does the new LINQ still throw, or what is the behavior now?
I don't believe they can change these semantics. If the underlying structure changes, they must keep throwing an exception. That behavior is by design.
Slow code? Did you try using Span?)
I love LINQ and I love how LINQ gets better with every dotnet version. That makes C# so much superior to other programming languages
1:40 I hope he gets the message.
Can you introduce a library for performance testing in integration tests and unit tests?
Always Span
I think they should fix bugs before improvements; you can't be perfect with known bugs that don't even exist in TypeScript or even Java.
Like returning null for generics instead of using nullable (this is almost a hack - lol).
SelectAsync is still missing
While I’m sure it’s not Microsoft’s concern, changes like this are great for those of us in gaming.
I hope these features will make it to the Unity3D scripting backend
I am wondering if Stephen Toub responded positively to your sincere request?
UPD: Yes, please make a video about Unsafe
Spans!
Stephen Toub mentioned
It still has a garbage collector.
Maybe one day they'll return pointers like in Golang.
Their solution looks like a hack to me. When I think of optimization, I think of compiler optimization or optimization of the design so that it's simpler and more performant - not rewriting the same code every year, but only for ints or bools.
Can you provide this code for free? It's just benchmarks.
What IDE is this? It does not look like VS Code or full-fledged Visual Studio! (Asking for a friend)
He's using Jetbrains Rider
Rider
@@vyrp I hardly know her
That's great! So much performance improvement.
It still doesn't convince me to switch from raw SQL to LINQ.
That's a horse of a different color.
That said, EF Core's SQL is extremely optimized for most common use cases. Also, your SQL won't improve itself from version to version. :)
@@pilotboba Yes, I can see the improvements and benefits from the hard work of the .NET team. I'm happy to see new tools that can improve my life. However, I rarely need to tweak my SQL scripts since they were optimized from day 1. For me, code that 'improves itself' isn't always ideal - my SQL scripts have remained unchanged (performance-wise) over the last 10 years, even through multiple .NET upgrades, with no surprises, and they work great. As tables grow in size (~10M) and complexity, I have often had to switch from LINQ to raw SQL to maintain performance (past experience, 6 years back). I think it is good to try in small-scale projects.
@@pilotboba Yes, I can see the improvements and benefits from the hard work of the .NET team. However, I rarely need to tweak my SQL scripts since they were optimized from day 1. For me, code that 'improves itself' isn't always ideal - my SQL has remained unchanged (performance-wise) over the last 10 years, even through multiple .NET upgrades, with no surprises, and works great. As tables grow in size and complexity, I have often had to switch from LINQ to raw SQL to maintain performance (past experience, 6 years back). And there's no harm in trying LINQ again for smaller-scale projects in the future.
@jasongoh2046: Nothing that Nick mentioned in this video is applicable to LINQ-to-SQL. He was talking about in-memory LINQ.
@@jasongoh2046 Fair, whatever works for you. Also keep in mind that the stuff shown in this video is about LINQ over in-memory data, not the IQueryable stuff.
Maybe it's me, but I always understand "Hello everybody, I'm naked" at the beginning of the video...
Other languages are getting further and further behind.
Hey, I contributed to some of these!
How did Microsoft improve the times?
"They handwrote the optimization logic 😅"
... "They improved the logic with the sweat and tears of programmers"
Why are you pronouncing "link queue" as just "link"?
Because it’s pronounced link
@@nickchapsas Link-Q, that was a first 😆
@@nickchapsas ah so queue is silent. makes sense
Someone re-record that Monty Python spam sketch with "span", please
Without watching the video I could tell it's SPAN 😂
And don't forget that regular lists and arrays have their own .Length/.Count properties, which will always outperform LINQ's Count().
Especially with Unity development.
Where can I read or what should I watch to get more familiar with such low-level C# features?
Google Stephen Toub
hopefully all five of my internal enterprise app users will notice the perf gains
😶🌫Its not LTS 😜
But as some reddit user's manager said, we don't use LINQ here!
Reasons not to use LINQ.
1. You get paid per LOC written.
2. See 1.
3. See 1
4. See 1
(I
get
paid
per
line
in
youtube
comments.
)
ONLY SPANS
Can you make video "what is marshaling" ))
"Only Spans"
Finally no more optimizing loops, .NET9 does it all.
it's again.... the SPAN..... or as they say Spaghetti Programs Are Normal
Oh, Only Spans once again
It's a pity they don't invest as much in reducing memory consumption
So you basically got some code from another guy's blog and put it here, as if you created some of that benchmark or what? Why not reference the original post?
Exactly.
Not being allowed to use LINQ for performance is wild to me. Seems like if that's a requirement, C# was the wrong language choice to begin with.
No. One obvious example is Unity
No? Who said C# should be slow in the first place?
If this were the mentality behind C#, it would not have value types natively nor reified generics, yet they exist and they provide many performance improvements over the Java model, for example. The same goes for Spans and other high-performance types (and even pointers, in extreme cases), and the same goes for the constant efforts by the .NET team to improve the performance of the language.
This kind of point is just very bizarre to me.
@@diadetediotedio6918 Optimizing .NET is a relatively recent thing. Eric Lippert, formerly on the C# design team, wrote several blog posts over his tenure there about premature optimization being a bad idea. They simply weren't concerned with blazing performance until recently. I've been doing this for quite a while now, and C# was traditionally considered a slow language. Things are different now of course, but we're talking about the past here.
@@diadetediotedio6918 I think that's why he said "is wild to me"
So you are failing people at interviews not because they are not good... just because they don't know your exact use case... nice fail on your part. If they are good at coding, they can learn fast... I sure never want to work for people like this.
You can teach people how to code, but you can't teach them how to behave, especially after 30. Rejecting people based on character, because said character can't gel with the existing team's culture, is not only very common but a must if you want a good team.
Speaking from experience, I'd rather have a mediocre developer with good soft skills than a skilled programmer who doesn't have them. It really can't be overstated how destructive people who don't know how to behave in a team can be for the entire team, both in productivity and in development enjoyment/team atmosphere.
@@nickchapsas Don't you have a course on passing the behavioral parts, though? Isn't that what you were just selling in this video? So if these people's behaviors can't change like you said, they are lying and just getting a pass to get the job. Then they go back to their crappy old behaviors. So what you're selling... is not so great for society. Weird.
@@Snickerv12 You can't change your behavior, but you can still hack your way into passing the behavioral interview. They are two very different things. I've worked with absolute dickhead developers that knew how to bullshit the behavioral interview and say what the interviewer wanted to hear.
"Good fit" is a thing, like it or not. If you're going to be disruptive and/or disrespectful to the other devs and/or the team/company culture then it doesn't matter how good your technical skills are or how vast your knowledge is. You're still not a good fit. (period)
Hard pass.
Bunch of awful, unreadable, mind eating code.