Become a patron and get access to source code and exclusive live streams: www.patreon.com/posts/what-is-and-in-c-81378570
I fell off my chair when he said "it is very simple indeed" at the end
I couldn't resist, sorry :)
Hahaha, me too! But admittedly, it did get a lot less confusing after this video
Ok, so… I wasn’t the only one. 😂
I have to refresh this whenever I want to use these terms to describe something to others😂
You sir never cease to blow my mind with your incredible insight. Thank you!
The terms may actually be coined from math: there are covariant and contravariant functors in category theory, and they very nicely demonstrate what happens in C#. A covariant functor f (or just "functor") has a mapping `(a ~> b) -> (f a ~> f b)`, a contravariant functor g has a mapping `(a ~> b) -> (g b ~> g a)`. Now, imagine that ~> stands for "is subtype of", `f` and `g` are your types with variance definitions on their type parameters, and you get the variance relations in OOP: from `A : B` follows `F<A> : F<B>` if `F<out T>`, and from `A : B` follows `F<B> : F<A>` if `F<in T>`. Now, about those `in` and `out`: functions are functors, covariant in their output and contravariant in their input.
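A rough C# sketch of that last point, using hypothetical `Animal`/`Cat` classes and the built-in `Func<in T, out TResult>` delegate:

```csharp
using System;

class Animal { }
class Cat : Animal { }

static class FunctionVariance
{
    static void Main()
    {
        // Func<in T, out TResult>: contravariant in its input, covariant in its output.
        Func<Animal, Cat> general = _ => new Cat();

        // A function that accepts any Animal and returns a Cat may stand in for
        // a function that accepts only Cats and returns some Animal.
        Func<Cat, Animal> specific = general;   // compiles thanks to variance

        Animal result = specific(new Cat());
        Console.WriteLine(result);
    }
}
```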
Very good explanation, but not because of the examples, but for the initial introduction of the cause-effect origins of both terms. I think this is the actual eye-opener which explains the essence of both concepts.
I would only emphasize as strongly as I can that co- and contravariance are both very general concepts, and it's not about C# interfaces, in/out keywords, parameters and return values. The examples you gave may have been switched or given in another language, but the general idea is the following:
If our new type N that we are building (the interface in your example) follows the order of derivation strictly, then it's COvariant; otherwise, it's CONTRAvariant.
To put it more clearly:
if there is type B derived from A, so B:A
and I have a generic type N<T>, then
if
for N<A> I can get (in whatever way) only A
in other words for shrinking⬇ of N, the output is also shrunk⬇
and
for N<B> I can get both A and B
in other words, for expanding⬆ of N, the output is also expanded⬆
then N is COvariant
if
for N<A> I can get both A and B
in other words for shrinking⬇ of N, the output is expanded⬆
and
for N<B> I can get only B
in other words for expanding⬆ of N, the output is shrunk⬇
then N is CONTRAvariant.
But of course, it's still nice to see an implementation which, although not fully developed (you're assigning null! to your interfaces and trying to use them), could easily be turned into compilable code.
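A minimal sketch of that shrink/expand rule, assuming hypothetical classes `A` and `B : A`, with `IEnumerable<out T>` playing the covariant N and `Action<in T>` the contravariant N:

```csharp
using System;
using System.Collections.Generic;

class A { }
class B : A { }   // B : A, so B is the "expanded", more derived type

static class VarianceRule
{
    static void Main()
    {
        // Covariant N = IEnumerable<out T>: the subtype relation follows the derivation order.
        IEnumerable<B> produceB = new List<B>();
        IEnumerable<A> produceA = produceB;   // from an IEnumerable<B> I can get both A and B

        // Contravariant N = Action<in T>: the subtype relation reverses the derivation order.
        Action<A> acceptAny = x => Console.WriteLine(x);
        Action<B> acceptB = acceptAny;        // an Action<A> can serve wherever an Action<B> is needed
    }
}
```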
This explanation is not only perfect, it is a work of art. Thanks!
Truly, I am very grateful for your videos. You explain everything magnificently and in a way that's easy to understand.
Grandpa, you're the best. Thanks to you, I understood the difference between these principles.
Thanks! This has to be the best explanation of the topic I've come across, with a nice exhaustive set of examples.
Thank you! I am glad to hear it was helpful.
A great and very systematic explanation of the concepts of Covariance and Contravariance and their practical implications in various contexts (reference assignment, method parameters and interfaces).
What advantage does using "MyInt<in T>" or "MyInt<out T>" give versus "MyInt<T>", not specifying anything? Because if it doesn't change the behavior then it can just always be omitted, right?
Zoran sounds like a very intelligent person. Thanks for the video. Never crossed my mind to look at Base and Derived this way!
Excellent explanation. I’ve heard several and this is the best. I may actually understand cov/ contra now.
Amazingly simple explanation. Thank you so much for this!
Thank you! Very clear, and I love your talking speed; it makes it easy for me to follow.
Thanks! This explains it better than other tutorials.
you made it so so so so so clear on this topic, thank you sir!
Yeah, it was so easy, but for some reason, it was the first time I could understand it! 😆 👍
Explained perfectly. I'm checking out your course on Pluralsight.
Thank you for the good course. However, I would suggest going deeper into what the in/out keywords do and how that ties into object substitution.
I had to go to another source to fill in the details of co/contra-variance.
Thanks for the suggestion. That makes sense.
Thanks for sharing! The examples were very clear and dispel any doubt about these two concepts.
Very clear explanation of this difficult concept!
Best explanation ever. You're the man! Thanks Zoran :)
Wow Thanks a lot Zoran, Finally I could understand these concepts. you are amazing .
Beautifully explained!
Super clear video! thanks!!
Great and very clear explanation, thanks!
One thing that might not be immediately obvious is that you can't nest the generic types and get the same behavior, since T includes the fully nested type. So if you had an IEnumerable<List<Base>> you couldn't assign an IEnumerable<List<Derived>> to it.
That is because List is invariant. Try with IEnumerable.
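To illustrate, a sketch with hypothetical `Base`/`Derived` classes: covariance looks at the whole type argument, so an invariant `List<T>` nested inside breaks the conversion even though `IEnumerable<T>` itself is covariant.

```csharp
using System;
using System.Collections.Generic;

class Base { }
class Derived : Base { }

static class Nesting
{
    static void Main()
    {
        // Covariant all the way down: works, because IEnumerable<Derived> converts to IEnumerable<Base>.
        IEnumerable<IEnumerable<Derived>> inner = new List<List<Derived>>();
        IEnumerable<IEnumerable<Base>> outer = inner;

        // Invariant List<T> nested inside: the outer covariance has nothing to work with,
        // because List<Derived> is not a List<Base>.
        IEnumerable<List<Derived>> lists = new List<List<Derived>>();
        // IEnumerable<List<Base>> baseLists = lists;   // does not compile
    }
}
```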
Great job! That was the best demo on this topic I've found!
I really enjoy your videos
My brain just froze. Rebooting.
Thank you guy from Budapest!
Szio!
Very good explanation , thank you
Wow ! Amazing teacher !
Thank you!
Is there something on a thorough understanding of the TPL, and also on the difference between concurrency and parallel processing?
I understood the Liskov Substitution Principle, but I didn't understand what in or out do and how they relate to co- and contravariance.
Eureka moment. The concept is actually not that hard to understand when someone actually explains it well
I have this analogy; hope it helps somebody grasp the concept more easily.
Imagine you're a producer: if your product's quality improves, you can satisfy more customers; more quality = more customers, so you are *covariant*.
Imagine you're a consumer: if your expectation of product quality goes up, there'll be fewer products that can satisfy you; more expectation = fewer options, so you are *contravariant*.
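The analogy translates almost directly into C#; a sketch with hypothetical `IProducer`/`IConsumer` interfaces and `Product`/`PremiumProduct` classes:

```csharp
class Product { }
class PremiumProduct : Product { }                    // the "higher quality" product

interface IProducer<out T> { T Produce(); }           // only hands items out => covariant
interface IConsumer<in T>  { void Consume(T item); }  // only takes items in => contravariant

static class Analogy
{
    static void Demo(IProducer<PremiumProduct> premiumMaker, IConsumer<Product> easyCustomer)
    {
        // Better product, more customers served: a premium producer fits wherever
        // a plain Product producer is expected.
        IProducer<Product> anyMaker = premiumMaker;

        // Lower expectations, more options accepted: a customer happy with any Product
        // fits wherever a PremiumProduct consumer is expected.
        IConsumer<PremiumProduct> pickyRole = easyCustomer;
    }
}
```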
One big question is: if for producer methods the correct behavior is "covariance", and for consumer methods the correct behavior is "contravariance", why is it the developers' responsibility to set it?! I would like to know the applications. In other words, whenever I'm defining a generic type, should I go through the decision-making process of variance options? I've hardly encountered a code base that had specified this. Does it mean it's mostly used internally in the .NET code base? Thanks! 👍
Variance tells the compiler how to verify assignments. It appears that definite checks were not possible to automate, and so the language designers left it to the programmer to specify. I am not aware of any strongly typed language that is doing that automatically (though, I would love to learn if one exists!). That would be an interesting achievement.
@@zoran-horvat Thanks for the reply. I hope some analyzers could provide suggestions like "You're only using T to return objects, would you like to mark T as covariant?"...
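For what it's worth, the compiler already verifies the declaration itself, so an analyzer suggestion like that would be a natural fit; a sketch of the check, with a hypothetical `IProducer` interface:

```csharp
// The declared variance is verified against the interface body: a type parameter
// marked 'out' may appear only in output positions.
interface IProducer<out T>
{
    T Produce();              // fine: T appears only in an output position
    // void Take(T item);     // would not compile: a type parameter declared 'out'
                              // may not appear in an input position
}
```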
Very clear tysm!
I got lost...😰
Watch again after a few weeks pause, and don't forget to write that code in the IDE as you progress. That will finally teach you and, once variance gets into your mind, I'm sure you will never watch the assignments the same way again.
are you hungarian?
@@mackosajt86 Not yet.
I'm sorry, but you don't explain the need for the in and out keywords, as all your syntax checking remains the same when you take them out from the interface declarations. It's only at 9:10 in your video that the real importance of the keywords is revealed. But you never explain it, nor its practical implications. You only make statements in your video, you don't explain anything.
Excuse me, I must object to that. What was I doing for ten minutes if not explaining? But anyway, I have watched the video for you, and pulled out the important parts, so that you can focus on separate explanations as per need.
04:00 - 05:00 Announces variance as augmenting the object substitution principle
05:00 - 05:40 Explanation of the out and in keywords, when they are used and what we call that
05:40 - 07:40 Practical implications of the out keyword, applying the OSP to output values
07:40 - 09:05 Practical implications of the in keyword, applying the OSP to input values
09:10 - 11:05 Practical implication of OSP when applied to variant interfaces
@@zoran-horvat I'm sorry. We'll just have to disagree on this. It's not my intention to deride you. But I felt that nowhere in the video do you explain the need for those keywords. You state the syntax and grammar of the language very well, but fall short on the semantics. As an example, you didn't remove the in and out keywords from the interfaces and explain the compiler check changes that causes, and why. This would have been the aha! moment for someone like me, who was trying to understand the concepts. I saw your video first yesterday and remained confused as to why I needed those keywords. It was only after experimentation and seeing another video from someone else that I finally understood the concept and its applicability. Sorry.
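For readers looking for that aha moment: the keywords change nothing inside the interface body, but they decide which assignments between constructed interface types the compiler will accept. A sketch with hypothetical `IReader`/`ICoReader` interfaces:

```csharp
interface IReader<T>       { T Read(); }   // no 'out': T is invariant
interface ICoReader<out T> { T Read(); }   // 'out': T is covariant

static class WhyTheKeywordsMatter
{
    static void Demo(IReader<string> plain, ICoReader<string> covariant)
    {
        // IReader<object> a = plain;      // does not compile: without 'out' the compiler
                                           // must not assume IReader<string> is an IReader<object>
        ICoReader<object> b = covariant;   // compiles: 'out' guarantees T only ever flows outward
    }
}
```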
The covariance and contravariance names come from a simple logical diagram of the implicit assignment direction rules: with covariance the assignment between the constructed generic types follows the derivation direction of the type arguments, and with contravariance it runs in the opposite direction.
You are right about the analysis, but the use of these names in science for the same effects predates their use in programming by decades.