Carbon gets so many things just right. I hope this also extends to language built-in unit tests and "language as build infrastructure" like Zig.
42:08 Re: Rust, "Specialization desired, but hard to land due to legacy": it's more like "specialization desired, but hard to land due to unsoundness". As is well known, Rust cares a lot about memory safety, and there are known soundness bugs when using specialization on lifetime parameters. There are some ideas for how to fix this, but some of those ideas also turned out to be unsound, so I think it's just in a holding pattern now.
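For reference, a minimal sketch of the shape of that unsoundness, along the lines of the examples discussed on the specialization tracking issue (simplified by me, nightly-only, and not guaranteed to match today's compiler behavior):

```rust
#![feature(specialization)]
#![allow(incomplete_features)]

// Impl selection happens after lifetimes are erased, so the compiler
// cannot reliably tell these two impls apart. If the `'static` impl is
// ever chosen for a shorter-lived &str, the returned borrow outlives
// its referent.
trait Leak {
    fn leak(self) -> &'static str;
}

impl<'a> Leak for &'a str {
    default fn leak(self) -> &'static str {
        "" // general case: returning `self` here would be rejected
    }
}

impl Leak for &'static str {
    fn leak(self) -> &'static str {
        self // sound only if `self` really does live for 'static
    }
}
```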
My understanding (though to be fair it's limited, as this is really Richard and Josh's area) is that some of what makes these things unsound and hard to fix is the legacy. Without the legacy, some solutions to the soundness challenges might be available that would simplify landing specialization. So it's somewhat both?
@@ChandlerCarruth I think there are definitely forward compatibility hazards with actually rolling out specialization, but based on a basic test it seems the issue with unfolding associated types is handled correctly. More precisely, here's an example showing how type projections fail to reduce when you use specialization:
```rust
#![feature(specialization)]

trait Foo { type X: Default; }

impl<T> Foo for T { default type X = u8; }
impl Foo for u16 { type X = u16; }

fn foo<T>() -> u8 { <T as Foo>::X::default() }
```
The key is the `default type`, which is what enables specialization on that type. With it, foo() fails to compile because `<T as Foo>::X` is not normalized to `u8`; without it, foo() compiles, but the `impl Foo for u16` fails to compile because you aren't allowed to specialize it.
So the legacy issue is that adding `default type` is a breaking change and likely can't be done for any traits from the standard library, but this doesn't prevent stabilizing the specialization feature itself; it only blocks it from being *used* where legacy code is an issue. AFAIK the real reason the specialization feature itself is blocked is the soundness issues.
@@ChandlerCarruth I don't think the word 'legacy' applies here. Sure, there are existing implementations in the compiler that take some time to change, but there isn't any code out in the wild using specialization that has to keep compiling, which is what would make those changes infeasible.
@@digama0 When I researched this, I found these blog posts stating that they had a solution to the soundness concerns with specialization: smallcultfollowing.com/babysteps/blog/2018/02/09/maximally-minimal-specialization-always-applicable-impls/ and aturon.github.io/tech/2018/04/05/sound-specialization/
Super epic. I will say, though, that actually trying to use the equivalent of `T:! MulWith(f64)` in Rust and write some generic code where a bunch of generic types have to be multiplied, added, and taken modulo gets VERY messy in the types. It's mathematically correct, but it's also hard to read if you are trying to do something more complicated than one multiplication.
Don't know a good solution though. Duck typing is pretty bad. Maybe the compiler can tell you what interfaces your types should be implementing?
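For example, here's a minimal Rust sketch of the bound-stacking being described (the function and its names are made up for illustration):

```rust
use std::ops::{Add, Mul, Rem};

// Every arithmetic operation a generic type participates in becomes
// another trait bound, and the `Output` associated types pile up fast.
fn scale_and_wrap<T>(x: T, gain: f64, offset: T, modulus: T) -> T
where
    T: Mul<f64, Output = T> + Add<T, Output = T> + Rem<T, Output = T>,
{
    (x * gain + offset) % modulus
}

fn main() {
    // f64 happens to satisfy all three bounds.
    println!("{}", scale_and_wrap(7.5_f64, 2.0, 1.0, 5.0));
}
```

And this is still the easy case; once the intermediate `Output` types differ from `T`, the where clause grows a line per operation.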
ImplicitAs is a very elegant and safe solution to implicit conversion.
1:17:00 shouldn't it be:
ImplicitAs(.Result)
instead of
ImplicitAs(.Self)
?
So, `Result` isn't yet in scope -- we're still describing the constraint on what it is. So we have an injected way of referencing it that is currently spelled `.Self`. The `.` is used in the where clause to reference a member of the thing currently being constrained -- the type that will *eventually* be bound to `Result`, but hasn't yet been. So `.T` would be equivalent to `Result.T`. Which is where `.Self` comes from -- `Result.Self` == `Result`.
Still, this is a source of some frustration because the syntax is subtle, especially in this context where there is a *different* meaning for `Self` and `.Self`. It can result in amazingly confusing things like `where .Self == Self`. But so far, we've not come up with a better syntax for this.
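For readers coming from Rust, a loose analogue (my comparison, not from the talk): a bound on an associated type also constrains a type that isn't bound yet; Rust just attaches the bound after the colon rather than needing a name like `.Self` for it.

```rust
trait Container {
    // `Element` is not a concrete type yet; `: Clone` constrains whatever
    // type an impl eventually binds it to -- roughly the role Carbon's
    // `.Self` plays inside a `where` clause.
    type Element: Clone;
}

// In where-clause position, Rust names the associated type through the
// trait (`T::Element`), which is the same "refer to the thing being
// constrained" problem that Carbon's `.Element` shorthand addresses.
fn duplicate<T: Container>(x: &T::Element) -> (T::Element, T::Element) {
    (x.clone(), x.clone())
}
```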
41:42 That is not exactly correct. Lifetimes can also be generic parameters.
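For example (standard Rust, not tied to the talk):

```rust
// A lifetime is a generic parameter alongside the type parameters:
struct Borrowed<'a, T> {
    inner: &'a T,
}

// `'a` is inferred at each call site, just like a type parameter.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}
```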
Nice to see one mainstream language competing to become a bigger hot mess than the next.
Why are they reinventing Rust?
Rust stole plenty of ideas from other languages.
@@sirhenrystalwart8303 Yeah, that's largely how language design works as I understand it.
It's not quite Rust. There are many distinctions:
While the interface-based generics system looks very much like Rust's, to me this one seems more powerful: since there is no worry about "soundness" or "lifetimes", we get a much more flexible tool here.
The language is not focused as much on checking borrows and ensuring that nothing dangles - it is an actual successor to C++, without the baggage, with relatively "sane" defaults, grammar, generics, "concepts", and conversions - but keeping all the unsafety: the danger of dangling references, lifetimes that end before the views into the objects they borrow, invalidated iterators, etc. The result is a much more powerful language, with abstractions that are not possible (or at least not viable) in "safe" Rust.
And mostly, Carbon is supposed to be a successor language for codebases that can't afford switching to Rust because they are too big to move, even module by module. Using Carbon is easier here: what it promises is seamless interoperability with C++, NOT just with C as Rust offers. Wrapping things in C APIs to interoperate with Rust is a major pain if you want to migrate a big codebase. Even if that glue layer is temporary, it's another point of failure, another thing to maintain, and it can be hard to get right if you want to keep the soundness of the Rust program and not trigger C++ UB from "safe" Rust code. With Carbon you don't have any of these worries.
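To make the glue-layer point concrete, here is a rough sketch of what the hand-written Rust side of a C-ABI shim over a C++ class tends to look like (all `legacy_widget_*` names are hypothetical):

```rust
// The C++ class is first re-exposed through a C ABI shim, written and
// kept in sync by hand, losing templates, overloads, and exceptions at
// the boundary; Rust then binds to that shim unsafely.
#[repr(C)]
pub struct LegacyWidget {
    _private: [u8; 0], // opaque handle to the C++ object
}

extern "C" {
    fn legacy_widget_create(size: u32) -> *mut LegacyWidget;
    fn legacy_widget_area(w: *const LegacyWidget) -> u64;
    fn legacy_widget_destroy(w: *mut LegacyWidget);
}

// A safe wrapper must re-encode ownership rules the C++ side only
// documents informally -- this is exactly where soundness bugs hide.
pub struct Widget(*mut LegacyWidget);

impl Widget {
    pub fn new(size: u32) -> Widget {
        Widget(unsafe { legacy_widget_create(size) })
    }
    pub fn area(&self) -> u64 {
        unsafe { legacy_widget_area(self.0) }
    }
}

impl Drop for Widget {
    fn drop(&mut self) {
        unsafe { legacy_widget_destroy(self.0) }
    }
}
```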
Because Rust doesn’t work for their very real use case. There are billions of lines of critical, performance-sensitive code in codebases that will never be converted to Rust. The Carbon language README literally says “use Rust if you can,” and Carbon is built to safely maintain and add new features to existing C++ codebases that will never be migrated.
If you’re starting a new project, and are not a C++ developer, this is not relevant to you. Use Rust.