C pointers are intuitive in that they are literally a variable storing an address in memory. Don't get confused between pointers and heap allocation and deallocation strategies.
@@pxolqopt3597 Initially it seems redundant to have an indirection to your variable when you could just use your variable because your first pointer lesson is int a = 42; int* aPtr = &a; // do stuff in the same scope with aPtr instead of a for some unexplained reason. Then you find out about dynamic allocation needing somewhere to store the address memory was allocated and it makes sense for that, then you see the benefits of pass by reference instead of by value for large objects and realise that's actually passing a pointer too. Then it make sense.
@@pxolqopt3597 Yeah, the challenging thing is the concept, which really isn't about the coding. Once you piece it all together that variables also point to places in memory, it makes a lot of sense.
"Zero cost abstraction" in C++ was never intended to mean that the feature is totally free to use. Bjarne Stroustrup defined it as meaning that using the abstraction should not cost more than achieving the same thing without the abstraction.
@@sineptic No. For example a class with methods is an abstraction over having a set of free functions that take some pointer to a common data structure as a parameter. For example iterators are an abstraction over having a loop that operates on a sequence of data items. Bjarne's idea is that using those abstractions should have the same performances as doing it the old way.
@@maxdigiacomo4608I'm not even sure that is true in all labelled cases as I have heard an Ada and ex Rust user talk about how the Rust zero cost abstractions were slower.
The feeling I get when using C++ is that if you are really careful and write your code just right, a const here, move semantics here, etc., then you can end up with code that looks very high level (in the sense that a few lines can do a lot), but also very efficient without a lot of memory copies or pointer dereferences. But, the high level part is just an illusion, because you have to think very low level to get the high level code to work efficiently. It becomes almost a game.
@@Vitorian That's silly, almost everything is optional, but move semantics are one of the most useful concepts in C++. If you don't use it you are doing it wrong.
Yeah, the problem is when you forget to call "free" or accidentally use the freed memory, or dereference the null pointer... By itself it's not that hard, but sometimes you just forget that precisely BECAUSE it is easy(kinda like forgetting a minus sign in whilst solving a Calculus problem)
I lived on C++ for decades. C++ is great when you control *all* the source code. Rust excells in allowing you to use 3rd party libraries painlessly and efficently.
I partially agree with that. C++ as a language is a middling experience, but dependency management is god-awful, and it accentuates the annoyances of having decentralized build systems.
@@thebrowhodoesntlift9613 I mean, you don't expect us to just take your word for it, do you? Are you willing to take a shot at giving a synopsis for why you feel that way?
@@mkvalorI don't, I just gave my opinion... But, I'll expand: he's equating 'unsafe' rust to writing assembly in C++... but unsafe rust is still rust, valid rust, assembly isn't C++. The real equivalent would be assembly in Rust vs C++. Unsafe it's very much a part of the language, it's just removes some of the EXTRA guarantees that rust compiler gives you. It's much more similar to templates in C++, it's a dirty way to hack around things but it is still the language itself. The rest of the article was really good though! Anyways, I might be misrepresenting him so here is the segment from the article for you to decide: "The “unsafe” excuse Upon reading this, some pundits will throw the recurring excuse “oh but Rust has unsafe mode where you can do yada yada”, so let me preface this article with this analogy. Calling “unsafe” as an escape for Rust’s rigid system is the same as saying that C++ can do inline assembly so you can copy/paste infinite optimizations done outside the compiler infrastructure. That defeats the purpose. We are analyzing what the language can do in normal operation is, not the backdoors that it allows to open when it gives in to operational pressure."
@@mkvalor I mean, it is a pretty bad argument. Sometimes you really need to do some inline assembly (this is supported in both rust and c++). One case that comes to mind is byte comparison for cryptography purposes. Often times you _really_ need to compare bytes in a constant amount of time to avoid timing attacks and make sure the compiler doesn't optimize the code and return early on the first differing byte. If you have a need for it, why would you not use this feature? then you might say "then why don't you just use assembly?" well, c++ and rust are still _much_ safer to use than raw assembly, and containing the hand written assembly code into small pieces where needed is better than throwing your hands in the air and porting your entire project to assembly. honestly, I'll probably point to this example for people that make the similar argument against rust of "but you have to use unsafe sometimes, so just use c++".
@@mkvalor He says unsafe Rust == assembly in C++ which isn't remotely true. Rust has inverted defaults to C++, it is safe by default and if you want you can slide back to C++-style wild west with no safety via unsafe {}. It will still be valid Rust, which is crucial to understand, but it is your responsibility to make sure it is correct & safe as you explicitly asked the compiler not to check it (e.g. using borrow checker). But the syntax and most rules will still apply as normal with some additional rules specific to unsafe blocks. Completely orthogonal to the inline assembly which can be used in both C++ and Rust.
Safety is really about correctness with regards to the indented behaviour/specification, almost all segfaults are due to a faulty proof of correctness. That’s what rust guards against, whether you accept potential malicious input or not is moot.
A very good point and really the reason I like rust so much compared to C/C++. Once I get a program in rust to compile without errors or (significant) warnings, I can be almost certain that said program will work exactly as I intended (with maybe a few logic bugs and the like) because the rust compiler made sure to check that the code I was writing was correct. This means I only need to be a somewhat decent rust programmer to write good rust code, whereas I would need to be an expert C/C++ programmer to write good C/C++ code. I wish more rust enthusiasts would emphasize that correctness is what the rust compiler really helps you with and that safety is more of a happy side-effect.
It's great to watch people argue over whether an abstraction is or is not zero-cost without even thinking to read the assembly generated by the compiler (I checked, and unique ptrs are indeed zero cost and do not appear at the assembly level)
Yep, the only “overhead,” if you can call it that, is that the type tells the compiler to call the deleter when it goes out of scope, which is no different from any correct code you would have written with a raw pointer. The specific deleter to use is included in the type of unique_ptr when you instantiate it, and the type is only visible at compile time. At run time, it’s just a pointer.
just to be more specific: "implementation-defined behaviour means that the compiler must choose and document a consistent behaviour. unspecified behaviour means that from a given set of possibilities, the compiler chooses one (or different ones within the same program). undefined behaviour means arbitrary behaviour from the compiler."
in c++ and probably in c, the "int" type doesn't have explicit memory allocation value, it can be 1 byte, 2 byte or 4 byte, that depends on compiler. So probably in his case compiler turn that numbers i8 under the hood and because in rust there is no implicit memory allocation on integers, compiler basically makes that operations with 4 byte values.
CMake is just a crappy open-source hack. I will never use it by choice. It has found its way into Microsoft Visual Studio as an option, but again, VS already does a way better job on its own as a C++ build system. Been a C++ developer for 22 years. The only time I've needed CMake was to import some open-source code that didn't have VS project files... and it took forever to get right. CMake is an absolute nightmare. Additionally, Microsoft Visual Studio is a much better development IDE (for C++) than what's available for Rust. I've done some Rust coding in VSCode. Not great.
C build systems are like JS frameworks, I absolutely hate all of them and refuse to learn them. Someone else can waste their time and memory memorizing all the nonsense needed to setup and compile a C project 30 different ways using 30 different tools.
UB doesn't mean compiler WILL optimise the thing, it means it MAY do that, it may also choose NOT to do that which would yield inconsistent results, for same input on different machines which have different C++ compilers.
Like Prime said, it works for the authors use case. They can check all compilers and find the one that works for their hardware. HST developers pay so much attention to the smallest parts of the trade path that they sometimes write their own routers because itll save a couple of nanoseconds. Its crazy world out there, and they Know what the compiler does because they check. There is a good talk on yt about the level of optimisations they do and its mindblowing.
@@TrueHolarctic Except it is not guaranteed. I work in high frequency trading too and relying on stuff like this is stupid. One day it holds and the next day compiler just deleted your code (because it's UB, it can do whatever, especially Clang is super trigger happy and deletes most UBs it detecs). So yeah, crazy optimizations are done but this is beyond crazy. It's called "undefined" for a reason.
@@Resurr3ction UB tends to fall into one of two big camps, it gets optimized out in some manner, or it does something kinda like what you would expect strangely, it depends on what specific behavior you're asking of the compiler as to what it will do, and there are some manners to actually force UB optimizations, 95% of which are compiler intrinsics (like std::unreachable is implemented via compiler intrinsics and is guaranteed to trigger optimizations that either trap or remove the unreachable case)
In c++ std::unique_ptr is not free(in some not trivial cases), because c++ standard says that the object that is std::move from should still be in a valid state, so when you std::move std::unique_ptr the variable that is moved from is basically a nullptr, that is why before memory is freed by the destructor of std::unique_ptr it checks the internal pointer is null, that is the only cost of using unique_ptr.
I'm very happy you covered this article. I love rust and I am building great projects with it. But I feel like the common take that rust is so superior to C++ has turned into a monocultural echo chamber. I have helped build multiplayer game backends (for published titles) in C++. The truth is that many deployed C++ projects run fine for decades with no stability or security consequences to the contrary of the popular narrative. So this is very worth stating out loud.
"The truth is that many deployed C++ projects run fine for decades with no stability or security consequences to the contrary of the popular narrative" Until... And that's the hard truth for (to the best of my knowledge) all C and C++ code bases. I say this a Rust programmer with 45 years of C experience and almost 40 years of C++ experience. I have never seen (and nor have I ever known of) a C or C++ project that didn't have stability or security issues in all of that time. I've merely observed that these things hadn't been discovered yet. Until they were.
@@appstratum9747 In your experience, how many of these issues were due to things that Rust fixes? I don't have the experience that you do, so I don't have a reference for the types of issues that usually pop up in large c++ code bases.
@@sweetcornwhiskey The primary categories of issues with C++ that Rust deals with at source are virtually all related to inappropriate memory access beyond that allocated, use of memory when that memory is no longer valid for use, concurrent reading and writing of the same region of memory (by accident: race conditions) and so on. All of this is covered by the Rust language and most of these problems are eliminated before the compiler compiles successfully. Another important class of errors that Rust avoids is the failure to code to cover all the possible states of branching logic. At all points in the codebase. Null pointer issues are another category of errors that are eliminated. There is no null in Rust. In Rust you are forced to deal with the possibility of a value being invalid. In C you're not. In part because what invalid represents is very difficult for the compiler to understand. Software development practices are almost always designed to limit the negative impact of the weakest members of a team. The problem with C++ is that even on the miniscule chance that everything is defect free today, at any point in future someone on the team might break it in a serious way: by introducing a memory-related defect that hasn't been detected during testing. All C++ projects have decades of vulnerability to weaker programmers introducing critical defects similar to those already mentioned. Or corner cases that weren't expected. Rust protects against much of this. Even if the software was apparently free of vulnerabilities when it entered production C++ can have serious vulnerabilities introduced down the line. Another category of errors that Rust eliminates are those related to interchangeable datatypes. Int and char for example. Or relaxed casting from one ordinal type to another. In C++ you can get away with this with possibly very nasty surprises at some point in future when you get unexpected data. Not so with Rust. It's very strict. I wish I could spend more time answering you. But I hope I have given you at least some idea. The reason Rust is so frustrating for many at the outset is that the compiler forces you to address all the things that you would otherwise have forgotten to or didn't even consider. Whereas C++ will simply compile and you will find some of those oversights in testing. It's the ones that you don't find that become security vulnerabilities down the road. The Rust compiler is criticised for being slow. And it is. But it is doing so much more for you that nobody is going to do for you when programming in C++. It annoys you and nags you. But it won't let you build software until you deal with the issues. Rust holds developers to account for their decisions. It's obstructive in that sense. The pay off is that it's much safer. When you want to do something unsafe you're very well aware that it's unsafe.
@@appstratum9747 It's insane to me that anyone thinks that C and C++ devs are all 100x devs who know EXACTLY what they're doing... Code bases are living and organic - they flex and stretch before they collapse and wind up, they are often subject to change and improvement - the speed that you get when you commit to Agile development practices or follow the "Working, Right, Fast" pattern (MVP: get it working, first pass: get it right, last pass: get it fast) ultimately comes at the cost of safety. Shift-left is a real thing, it's a MASSIVE thing, it's hard to get a job without being proficient in unit testing and a red flag when somewhere doesn't do it: Rust shifts memory errors LEFT. It's made it simple to solve these errors, and helps speed up developers to get safe code who might not be as proficient by way of teaching. C++ is NOT as good as Rust in virtually (haha, virtually...) any situation, and in Rust you can just invoke unsafe code to get C-like syntax: Rust wins, every time, to say otherwise is cope lol. Addendum: And there's nothing wrong with C. There's plenty wrong with C++, but at the end of the day, it produces results which fulfill requirements. Rust is a very natural evolution of systems programming languages: it's just better at what C and C++ do. It's just better at what C tries to achieve. It's just better when safety is a requirement, and if you have any respect for your users, then safety is a requirement.
It’s almost always an issue with the management of the project (bad code style, etc). Someone who knows how to write c in a way that easy to proof read and verify does not write vulnerable code. The issue is with massive projects where vulnerabilities arise from inexperienced developers and cannot be easily noticed due to poor code style. See nasa’s c style guide for an example of this.
Regarding the compiled examples Rust has different save variants for simple operators: * Is unchecked multiplication that is, as the article states, not checked in release mode But there existist: .checked_mul(rhs) .saturating_mul(rhs) .wrapping_mul(rhs) which all have different begaviour regarding the overflow With that convidered what the rust program means is that programmer wants the unchecked mul and div and thats what the compiler outputs while in c++ the compiler is following this hidden list of UB rules that you need to know to decode what the program would compile into. In my opinion Rust better reflects what the programmer expressed in the program than C++
Iterator optimization is actually done before llvm. Also struct layout optimization is happening before llvm. Finally, some vectorization stuff is prepared in MIR. Naturally, clang does neither of these things, so it often fails to vectorize obvious things.
To beat a dead horse: pub fn noop(x: i32) -> i32 { ((x as i64 * 2) / 2) as i32 } gets optimized into an actual noop (as we are proving that overflow can not happen). So, a correct program in Rust gets correctly optimized, and shit program in Rust gets to crash rather than UB your entire production.
I don't think I am being arrogant when I say I am a C++ expert. I like C++, but my biggest problem with it is that it is impossible to teach, and many fairly common programming styles in C++ are basically "experts only" (if you care about safety and correctness) and completely impenetrable to junior engineers. They might very well even think they understand it fully, but it is not incredibly likely they do.
I took a C++ class as a junior programmer and hated it so much. Stuck with C# instead and went on to do well. It definitely didn't appeal to me as a beginner, maybe that's just me though
I'm a Rust newby but C++ veteran. As i understand it, unique_ptr is Box and shared_ptr is Rc/Arc. shared_ptr and weak_ptr operations are threadcsafe (but not necessarily the pointee).
That comparison sounds about right to me! I think *std::shared_ptr* in particular is probably more akin to *Arc* in Rust because, if I'm not mistaken, the internal reference count is itself atomic (please feel free to correct me).
@@white-bunny Right. If we want to get very deep into semantics, we have to remember that Rust references and smart pointers are not nullable unlike C++.
Unique_ptr are free 95% of the time and usually just get compiled away (though ofc. the deconstructor will get called though that you need to call either way for your program to be correct) with that rest 5% of the time is when you provide a custom deleter (though even that overhead is basically non-existent). Additionally in some cases you need to default construct a unique_ptr and then move the object into it. In this case the default construction can have some overhead but can also be compiled away in some cases.
As someone learning C++ with some C# experience. I dont like Unique_ptr, its more annoying to remember and use then just slapping * and & symbols. Sure it doesnt look as cute having Unique or ComPtrs and aligning the spacing perfectly. But for learning and do things without a team/personal. RAW all the way
Unique cases are only "not compiled away" in case the object it is holding is not trivially destructible you can check for if the object is non destructible using a cpp concept static assert. Even then it's mostly compiled away for most -flto cases. It's pretty awesome.
In the general case the destructor also incurs an if-not-null check for the delete call or let's delete's internal logic do that check. So pick your poison: redundant if-checks or redundant delete-calls. You are at the mercy of the SSA to get rid of these. Admittedly most programming patterns are amendable to this.
@@freezingcicada6852 unique_ptr is not replacement for & and *, it's meant to be replacement mostly for `new` and `delete` (and other for custom allocators and deallocators) if using `new Class()` and later on `delete obj` rather than just using `std::make_unique()` and letting the compiler handle later on deleting of the objects is annoying for you, then I'm not sure what to tell you. But from what you wrote, it feels more like you're using it wrongly, imo it can be especially useful while being member variable of classes, where you can just omit having to handle destruction of it in the dtor and not having "space" for leaking memory.
Unique pointers are a (practically) zero cost abstraction at runtime. It's interesting that such a experienced developer can be completely oblivious to core C++ concepts like RAII.
@@vladimir0rusgo to godbolt and check the generated assembly. Most of the cases, the unique ptr is not compiled into the code because it's literally just replaces a manual delete.
@@vladimir0rus unique_ptr is just a wrapper for a raw pointer that disallows copying and has some additional functions. Do you honestly think there's a difference in performance for any compiler worth its salt? Adding functions and compile-time checks does not affect runtime performance.
@@greg77389 additional code always affects runtime performance, even if it is code for exception or some additional function (because L1 code cache is small). Even if it just a wrapper it might add code which will be present after optimization stage.
17:03 might be just me, but pointers were super simple when I was first learning C++ in my first year of college. The problem I had to help people with the most, was the concept of inheritance and access control. They all got the pointers thing pretty quick. I even wrote my own ref-count smart pointer classes (this was well before C++11) to use in my school projects.
Pointers are def easy, it's just an index into the 4GB byte array we call RAM. (I say 4GB as I recommend anyone to learn systems programming in 32bit first)
@@npip99 In some systems pointer was not a plain index - in real mode of x86 pointers are made of segment and offset. Also null pointer was not always equal to zero (in other systems). And pointer concept is not very hard to understand, but effective dynamic memory management is - and most of mistakes in programs are related to the memory management (memory leaks, dangling pointers and so on).
Some applications confuse people, like void or function pointers. But since I came from higher level languages where I constantly made use of callbacks and arrays holding different data types, I understood the application right away.
The idea when the unique pointer was designed and added to the language was that compilers should generally be able to completely eliminate it at compile time so its the same as rust. But in practice it turned out, because of exception stack unrolling and translation unit boundaries that there are a lot more instances where it can not be optimized away as the committee initially intended. It has gotten better with more advanced link time optimization but you still have to check if you want to make sure it actually gets optimized away fully and in normal use it probably won't be in some scenarios.
C++ is probably the Pioneer of no cost abstraction, modern C++ can be ridiculously optimized due to it(if you know how to use it well). It's pretty damn amazing seeing a 6 line matmul function running that fast which it has no right to. The tooling is hard to learn and idioms are hard to follow but at the end you can write with your knowledge and compiler veryy efficient code.
"if you know how to use it well" is always the argument I see thrown around. What if a language would just have these zero-cost abstractions built in where you don't have to "know how to use it well" and it just works? Sounds crazy, I know.
I never had an issue using ASM. In fact it was a requirement when I went to school ASM was required before C or C++. I also was required to learn it in the military. I tend to find most the programmers I know who had to learn ASM first are better programmers by far than the one's I know who didn't.
The main issue is not how things are taught, but that so many people get into CS for the money and not for the interest. People who like coding will learn more about it and it's concepts, people that want to make money easily will hit a roadblock
I believe this is an example of survivorship bias. You’re only looking at the people who started off learning assembly and *kept going*. You are not considering the many many people who would start learning assembly get intimidated/ make minimal progress and then quit. Those programmers who started using assembly are likely naturally more talented and would have succeeded regardless of what lang the started with.
@@rkidyalso looking at ppl with a 20+ years of experience since ASM was a thing back then. Kind of like saying ppl who learned to drive on stick shifts are better drivers. btw, I had classes in 6502 and 8088 ASM and I just do Python now ;)
Interesting....Allah bless you my feller. I was thinking about this. Am a ME by trade, struggling w/ employment but have a REALLY nits strong interest in coding including niche stuff...or stuff that is niche so far. I will take this as a sign from God (maybe) that I am on the right track. @@DoctorWhoNow01
@@DoctorWhoNow01 the main issue with computer science majors is that most of what they should be learning as part of their degree has been shifted over to electrical engineering and computer science retained pretty much just the theoretical stuff
9:15 Though even performance cost of the bounds checks is a bit nuanced. Like when using a iterator over `Vec` these checks can be mostly compiled out.
@@hwstar9416 Rust bounds checks are minimal. If you have iterator loop , you know it is safe. If known size vs known index - no need for check. If you iterate from 0 to n (n user given) in array of size X, it will do bounds checks, but if you make a line before if(n < X ), the bounds check will be done only 1 time not for every iteration. And even if you do full check, max performance penalty with special loop for bounds check is 10%. typical is 2%.
Yes but likewise if you iterate over every element in C++ with a range for loop / standard algorithms you're also not going to access invalid memory. So yes, Rust incurs no performance penalty but it also has no safety benefit. This case is the same in both languages because there is no tradeoff to make. It's the raw access case where C++ chooses performance and Rust chooses safety.
Liked this article, especially the part where it questions the safety claim. I know I am learning Rust because it looks easier to learn than C++ today and I don’t strictly need to work with it.
people don't realise the ridiculous amount of details one needs to know to write proper C++, to write Ok rust you don't need to hold so many miniscule details in your mind to do so, just the ownership thing(and lifetimes that are actually difficult)
That's simply the price you pay for having a language that is actually useful, allows you to do what you need to, and doesn't constantly get in the way. If you don't need the features of C++, feel free to use something else.
C++ is not difficult to grasp, it just has an enormous amount of details you need to know, to write it properly. Sb unironically said when teaching it don't use this, that's a design flaw
If that is a problem then just write C, the complexity of C++ is just there for when you need it, but nobody is forcing you to use it and instead you can write proper C code that is more simple.
Errors as return values and throw: Erlang IMO has the best separation and does both. Exceptions are... for exceptional situations. Things you haven't dealt with and rarely should try. The "let it crash" idea is important to the platform. (Though that works only because of the actor concurrency model.) Errors are known failures and should be handled as such. Most languages that have exceptions use them for both and it just confuses the situation and due to not having concurrency which allows you to separate out responsibilities such that you can "let it fail."
TBF, you can implement Exceptions in Rust using panic hooks, but, I'm too stupid/lazy to dig deeper into the topic (it's also probably unidiomatic and destroys the correctness/soundness of your code).
Erlang assumes the role of both an operating system and a programming language here, though. For long-running processes in most environments, the equivalent of erlangs 'let it crash' is programs just literally crashing and being restarted by something like systemd. Its outside of a more general purpose programming languages scope of responsibility to bother caring about this. But i do agree that using exceptions as control flow is dumb.
@@sproccoliI disagree it is outside the scope. These general purpose languages are providing concurrency features which lack good error handling making usage far more difficult than needed. They treat concurrency as a 3rd class citizen and it is really harming the space. And the language typically *must* worry about this topic because without language integration they will always be lacking. Which is why every actor model grafted onto other languages always feel incomplete and more fragile than Erlang. Concurrency will only get more important going forward and if languages really need to catch up with Erlang wrt concurrency (and error handling).
As someone who just started learning Rust, and who has no computer science background. I'm really really proud of myself for understanding 50% of what qas said in the video and in the comments. especially the comments 😂😂🎉
Couple of notes on optimizations: 1. What if you WANT overflow semantics in multiplication? Like, what if you're trying to clear the top bit? How do you force a C++ compiler to generate appropriate code? 2. Exceptions as return value are not a 0-cost abstraction. Real exceptions can be implemented by having a static mapping of function return addresses to catch blocks. This mapping can be read when throwing. So long as an exception is not thrown, the program makes a return behave as a return. Turning every throwing return into a switch creates run-time cost in the event that no exception is thrown. Java and Swift allow exceptions to be a 0-cost abstraction by forcing you to declare which functions do or don't throw, with Swift also requiring a decoration of the call site. This means you need to know an exception is there, and you must handle it or account for it. As for "You don't know what your state is after throwing", that's what RAII is for. You should ALWAYS have a clean state when returning from anything, any way. The stack rewind is meant to guarantee that.
The unique pointer is free in an optimized build or your compiler is stupid, unless you do unidiomatic stuff. The problem is the destructor which tests if the pointer is null and otherwise frees the pointee. On a null pointer, ideally the check can be optimized away or would be necessary anyway. Unique pointers are zero overhead. It does things for you which you want to happen. Shared pointers have a reference count which is an obvious cost. It's honestly a complete different use case. The old C++ code base that I continue to modernize can live completely on unqize pointers, it doesnt need shared pointers anywhere. I threw so many of them in the bin.
You can make linked lists in rust without unsafe, much easier and cleaner, if you enable the polonius borrow checker, which is currently experimental. It is an issue in stable rust, but hopefully not for much longer.
You can get rust (nightly) to not even produce code for the function and inline it as identity (using unsafe). Calling it "the unsafe excuse" is IMO a misunderstanding of what unsafe does. Unsafe makes it sound like you're now free to do as you want, but unsafe allows you to do only three specific things you could not do before, not throw out the entire promise of rust.
The article is wrong about SEGFAULTs. They cannot be cleanly handled. Calling anything after a segfault is undefined behavior. Parts of functions could've been overwritten. Just calling printf might yield a very unexpected result after a segmentation fault.
Yes, this is very important to stress. All out of bounds accesses are truly bad in C. When you're lucky, the processor will catch the out of bounds memory access, when the accessed byte is on a page that hasn't been mapped into memory (X86 pagination), and in that case the kernel catches the fault and allows the program to crash immediately. But this good behaviour is more the exception than the norm. When not lucky (in the majority of cases), the out of bounds access touches a page that is mapped into memory, the out of bounds goes undetected by the processor, and whatever malware that was crafted into that png image (or pdf, of whatever!) can now happily start encrypting the disk... Because, I think we have to stress that the other wrong point the article makes is to assume that a computer that is not a server is not under attack! Everything is under attack, always. Open any document, and the software is potentially under attack! All data are unsafe! Just an example: think about an online library. They OCR books and magazines, and put the contents in a database. Guess what happens when they process a magazine about SQL attacks? Yes, you guessed it: the "DROP ALL TABLES;" gets OCR'd correctly, and deletes all the data of the project! The whole project was offline, and the attack came from a piece of paper!!! This is just an illustration that any data can contain malicious code!
As an older guy, I found that understanding assembly was easier than pointers and not harder. I started programming assembly on a Commodore VIC-20 before I ever heard of pointers or C. If you understand the addressing modes in assembly then you just naturally understand pointers.
This is a bit of a tangential point, but I find it strange that many Rust developers put the language in the forefront as if the language is a feature Languages are tools, the end user doesn't care how it's made
Depends on the context and who's the user. Rust is used to build a lot of tools and libraries in which the language does used has implications and the users usually care.
"many Rust developers" You started biased as this is not a "Rust thing", this is a "programming thing". Just as I saw many, many, many times C++ programmers saying C++ is the unique tool that should be used.
I usually use C instead of C++ because the minimum runtime is big with C++ and it's a chore to port it to weird microcontrollers. In driver developer where I do use C++, we use a tiny subset of it that works out pretty well. Rust lets you do some really wild stuff to pare down the runtime support while still retaining some of the features, or explicitly disabling them to enable some compile-time errors. Such as defining your own allocator and or disabling exception handling.
I have to say I love your videos! Despite all the humor you truly look over the bias fence while not being afraid to not know all the answers. That's refreshing and inspiring in a world so full of bulshit certainties.
Totally true about the large majority of applications running with mitigations disabled. Spectre mitigated libs are not the default on MSVC and not many people change that default.
Point around 9:22. That is mostly false in article. There is bounds checking but not always. Do iterator loop? No bounds check. If compiler knows your index can go max to 100 index and statically you made something size of 100.. no bounds check. If you have user inputted index to check up to that number in array - yes it will do bounds check but if you do something like if (user_input< array.size() ) (basically like you would do in C) the bounds checks will be removed on array and you will do check only once instead on every iteration on loop. Even if you do full bounds check on every iteration of loop, highest performance drop i ever saw in benchmark was 10%, mostly it is 2%. Literally compiler knowing more about types from Rust and being allowed to do more agressive optimalizations sometimes easly can overtake that.
The fact that the author has provided no code examples repro cases or benchmarks for their performance claims just shows to me that they are using the ancient technique of just making stuff up.
Basically: rust is better, easier, and more modern (maybe too new/modern), while C++ is more raw and established. More projects use C++, and learning it will land you more jobs. Rust is still in the phase of early adoption.
Pointers absolutely are simple and intuitive. Or do you consider paper with "your jacket is in the closet" complicated? I mean, as soon as you understand that computer has a memory and it's actually stored somewhere physically, pointers are the most obvious way of working with it in every regard.
Hey Prime. You mentioned about implementing a Double linked list in Rust. You can break Rust ownership model by just adding an extra pointer to it. Having an Arc that points to itself may not work. But you can make an Arc that points to itself. Bypassing the memory checks of Rust.
that does not in any way "break ownership". That's just a reference cycle, possible in every language, and perfectly valid in Rust as it's a shared ownership primitive. And since you're pointing this out as if it's specific to Rust, reference cycles apply to C++'s shared_ptr in the same exact way.
@@skeetskeet9403Sorry about the mistake. What I meant is that it is easy to leak memory that way. Rust's compiler will not bother us with it. So implementing the double linked list using two pointers will proceed just like any other language.
Well, it is not a pressing issue until they lose all their money due to some UB they missed. After that it is not important anyway because the company is no more.
Domain dependent. In video games they're not a pressing issue, for example. And the idea that because you can be memory unsafe you will be is overall a stupid point. If you wrote a high-performant game in Rust you would use unsafe a lot. So the end result is the same. If it doesn't cause very many crashes then nobody cares and we all move on. If it does then you get a bug report and fix it.
@@lucass8119 >In video games they're not a pressing issue, for example. Considering that many games nowadays operate with real money, it isn't true. And even without money, it is very hard to debug bugs caused by undefined behavior. Time spent on that bugs better be spent on implementing features. Also, writing correct unsafe code in Rust is easier compared to C++ so the only reason to use C++ over Rust for videogames is the current lack of mature game engines.
@@ХузинТимур games don't operate with real money, if they did, I could modify my game to say that I paid the company billions and have all most expensive cosmetics in the game instantly.
I agree with the guy. C++ is great. It's powerful. It's fast. It's flexible. C++ is my favorite programming language. Been using it for over 20 years. Tried Rust. Didn't like it. I don't want to program in a straight jacket. I don't need to be hand held and bossed around by Rust.
Rust has overflow flags which you can use to instruct it how to treat places where it is possible. Don't remember them off the top of my head but it's Rust's solution to avoiding undefined behaviour in this case, it lets the programmer decide "what do you want to do if this happens, should I give a warning or what?"
I think my knowledge of c++ has helped with me improving my mathematical and computer science fundamentals, many concepts are very similar to in the literature.
Yeah I know a company that has the same c++ code running from everything from smartcards to mainframes, rust has not nearly the toolchain to support that
20:00 That's not even a positive though... If you have worked on windows, you know that MSVC shouts at you if you use scanf and forces you to use scanf_s instead, however the scanf_s doesn't exist on the GCC and many other distributions. It happens because while scanf_s isn't an ISO standard C++ function, Microsoft can basically do what it wants with MSVC, and get away with it.
@@scion911 It's not about out of standards or inside standards. The thing is, it is part of one compiler but not of other and is allowed. It can happen for other things and does happen.
The article is not bad, but I had to chuckle when the author wrote segmentation faults are not an issue in C++. I once tried to integrate DJI's own lib to extract thermal information from drone pictures and got a crash within 5 minutes, because their SDK developers allocated something with "new" and tried to deallocate the same thing with "free()". Pretty much every single time I try to integrate a third party solution to any of our business apps I can find glaring memory issues right away. We even had obscure cases of libcurl crashing on us in the past. So yeah, big disagree.
The guy misunderstands what security is. On any "normal" desktop, as long as user input is interpreted in any way and there is a network connection, any program such as a simple ls can be exploited to gain privilige. Also there really is no difference betwen crush bug and a security bug. Even if a crash is caused not by memory misswrite, it means the situation wasnt planned for - and that opens for it being misused.
The program can only be exploited to get the privilege it has itself. Like if you make a single player offline game, what does the user have to get from compromising your program? Maybe they can get around piracy locks but the user doesn't get admin access if they didn't already have it. Maybe your program can be taken over by a malicious file stupid user got from the internet and loaded, the fault is mainly on the user there. If you install a mod, you have to understand it could break your computer, especially if games allow mods that are compiled libraries and not just scripts.
-fsanitize=address and -fsanitize=thread can catch most memory and concurrency issues in your CI, but you can’t leave them on in production since they consume too much memory and slow down your execution X times. Thus you need great test coverage to stay somewhat safe.
How tf are pointers difficult? Is this easy and intuitive for me because C++ was my first "real" programming language? Heck, C++ makes it MORE clear by explicitly distinguishing between values, pointers, and references, as opposed to Java where a variable is a value or a reference depending on its type.
I think what a lot people don't realise between C++ and Rust is the usefulness of the compiler. I write both languages a huge amount and the compiler errors in C++ are so useless. I basically look at the rough area in the code I need to be looking at and rely on my knowledge of the language because the error messages just aren't useful. Rust actually tells you what is happening
You want to know how important latency is to trading, The programs run on the processor in the Mellanox network card because the latency between CPU and NIC is too expensive.
std::unique_ptr does not have runtime overhead. std::unique_ptr is essentially Rust’s Box. std::shared_ptr is reference counted like Rust’s Arc. Comparing apples to apples, C++ is essentially the same to Rust in this regard.
@@khatdubell Actually no, even with a custom deleter on unique_ptr its free. Because its templated, there's no indirection happening at all. There's no function pointer stored or called. The function is written into the type itself and inlined. Templates can really be quite magical. This is the same reason why std::sort is significantly faster than C qsort. No function pointers, no indirection, and no jumps.
There is no difference from a unique ptr and a raw initialized pointer with delete being called on destruction. You would have to do something to move a raw pointer to a different class as well.
Having multiple layers of IR isn't just a Rust thing. GCC's compilation pipeline for C and C++ involves a "GENERIC" IR that, as I understand it, is equivalent to LLVM IR, and then a more internal IR named GIMPLE which is processed in three distinct stages (High-level GIMPLE, Low-level GIMPLE, and SSA GIMPLE).
You get around a lot of the overhead of shared_ptr's by passing around references to it. This prevents all the reference counting overhead required if you pass it by copy.
@@ThePrimeTimeagen Not if you don't need the atomic counter, weak references should only be used when you absolutely need the atomic reference counter.
@@ThePrimeTimeagen No. Sometimes, all you want is the object itself, and don't really care who owns it as long as it lives "long enough" to complete whatever operation you're doing. For example, if you pass the object to a function (and that function won't place a reference to that object into another structure, which should be the case for the vast majority of free functions). In that case, the lifetime of the shared_ptr is guaranteed to exceed whatever lifetime the callee requires, so you can just pass a reference to the shared_ptr (or, as is my preference, to the owned object itself -- I care about what the object is, not the implementation detail of how it's allocated in memory). I pass a shared_ptr by value only when it makes sense for two entities to own a given object, for example in a concurrent cache. If my cache holds shared_ptrs I get to evict items from it with complete abandon because whoever picked up stuff from the cache will get a shared_ptr which won't get deleted until they're done with it. A lot of it is also down to what is idiomatic. Rust, seems to me, uses a lot of Arc and Rc where a C++ programmer would just use values instead.
@@ThePrimeTimeagen Depends if its meant to be an owner of it or a thread-safe view. Shared-pointer is multi-owner, its okay to pass around raw-dog pointers as non-owning views of the data. You wouldn't write void foo(const std::shared_ptr k) { std::println("K's value is {}", *k); }, instead prefer void foo(const T*) { std::println("K's value is {}", *k); }. Because you're just taking a peek at the data, even if you were mutating it, you're not taking ownership of it. If you might need to check for nullptr, then you need to do that with the raw pointer and smart pointer either way.
I have done some speed comparisons between Rust and C++ and C++ wins by around 10% every time. I was comparing Rust with MSVC. When I switched to Clang/LLVM, they were almost identical. That kind of makes sense given that Rust uses the same back-end. MSVC seems to provide better optimizations.
The best abstraction over assembly is and always will be calling functions. My parameters gets passed, my registers get saved, my value is returned, and I don't worry about any of it.
A seasoned C++ professional is just as bad at writing C++ as a junior developer. It's not a question of "how many bugs do I have in my module I just wrote", it's a matter of "how many hidden errors will be in my code after some other programmer fixes a bug in another module/does a branch merge", large C++ codebases are very far from being robust, very hard to fix (e.g. a merge in Perforce messed up the math in the allocator, it made the GPU execute random garbage from the RAM and rewrite some CPU resources in the shared CPU+GPU memory which lead to segfaults in semi-random places in CPU code). I've seen a high-salary Silicon Valley programmers not understanding how to do a multithreading in UI-heavy apps. I'm yet to see Rust's ability to catch all these problems beforehand myself (at least having a test framework integrated into compiler helps a lot here), but a C++ programmer claiming they don't have problems in their codebase is a joke.
I don't think there's a language that can help with those kind of issues. As far as c++(or rust or any other language) is concerned, there's no GPU. And don't even get me started on GPU hangs.
@@khatdubell If we go that route we can assume most of the world doesn't know how to write good code. Statistically speaking, chance of you belonging in that set of programmers is higher than other set. So we are either in the presence of a demigod, your highness, or a talking donkey; decide for yourself...
Inexperienced developers, use of decade-old C++ standards and missing tooling is contributing to C++s bad reputation. Modern C++ (and use of its concepts) combined with good tools is so much better and safer and still very fast.
C++ is not going to go away, there is too much value in the echo system of tools and libraries. I like Herb Sutters approach to addressing the problems c++ with cppfront or "cpp2" which is more of a refactor of the c++ syntax which automatically defaults to the "c++ best practices" than a new language.
I find the argument at 9:00 very tired. Just because raw array access are bounds checked doesn’t mean it is always slower to use rust. 1. Most of the time you don’t use the square brackets because there are zero cost iterators. 2. LLVM often removes bounds checks because programs (including non rust programs) are often written where it is impossible to have an out of range index. 3. Bounds checks actually have very little runtime cost in 2023 because unless your programs frequently read out of bounds the cpu branch predictor will ignore the branch %100 of the time
There's a blog post out there by a guy who tried to follow a textbook about interpreters in rust and, even after some herculean optimization efforts, was still lagging behind C-like performance by double percentage digits, so I find it hard to believe that bounds checks are that irrelevant.
What that other guy basically is saying, that C++ is better because it does rely on undefined behavior to a greater extent than Rust (or to be more exact: the language implementors like CLang and GCC are optimizing more aggressively than it could be done in Rust bc of UB). That cannot be a good thing! I've seen examples, where GCC cares about signed integer overflow when in debug and "optimizes" that away whn in -O3. You don't want to have such an erratic behavior!
No you haven't. You've seen cases where the bug you introduced via producing UB wasn't made apparent prior to the compiler actually doing its optimisations (which are based on the assumption that you have not written a program that will allow UB to occur). This is not erratic - its entirely predictable that UB bugs are often only apparent after full optinisation.
yes and it's especially annoying considering how simple it would be to remove the arrow syntax (->). The compiler already knows if things are pointers. It doesn't need the arrow. It could have used dot syntax for everything from the beginning!
It's not just an address. void pointers are just addresses. Normal pointers are addresses to a certain type. The second part is just as important because it shows how dereference works.
Idk, I was using rust for a while and thought I liked it over cpp, but then I realized I was wrong as I got more and more frustrated with how it wasn’t meeting my performance expectations and how much friction it felt like there was with the compiler when writing it
@KayOScode I am sure you work at Nasa and you search ultimate frontier or real time operating system which can kill people if error happens... Like Airbus embedded system ? No kidding you are produce useless programs in a giant entreprise which produce useless programs ....
Yep, this is the reason why the adoption stagnates even though there is this incredible push by a small community. They never mention actual downsides, they either hide them or they have no idea.
I find that people who dismiss Rust's safety advantages as "I'm too smart/good/careful to ever need safety" are just dumb lol, it's the same line of reasoning that would do away with access modifiers since "I would simply not reference private fields/methods", real life has you sharing a codebase with the dumbest/careless/hackiest individuals you'll ever meet and that includes yourself from two months ago, any feature that prevents an idiot from hurting others or themselves in the 99.99% of cases in exchange for a theoretical 00.001% performance hit that only happens in some scenarios is inherently good. Any environment that doesn't have you blindly troubleshooting in complete dark for no good reason other than "This is an old language, we can't change that now" is inherently superior. While there are instances where C++ is the better choice, the truth is, for nearly every application someone may ever need to develop, Rust will be easier and less time consuming to build with both short and long term.
Although shared pointer have an overhead in C++, it generally only hurts in loops and thus get getting the raw pointer before entering the loop solves pretty much all performance issues.
I haven’t had a segfault or stackoverflow in C/C++ in ages, either. Already helps when you enable all the warnings and set the compiler to pedantic checking and solve each message. Now memory leaks I’ve had but Rust doesn’t prevent memory leaks either. You just need to profile your application and see that happen and solve it.
@@anon_y_mousse you can always get memory leaks. In any language. A typical situation is a map to an object pointer. Don’t say that the object should be in the map because there are many reason when you need a non-owning pointer. You do have to manage both deleting the pointer from the list and delete the object. It’s easy to overlook to do one or the other. And there are many more cases where a little oversight can cause a memory leak and you should therefore always profile your memory, don’t trust yourself! Now I do agree that especially since C++17 a lot of things to shoot yourself in the foot is taken care off. And I adore the newer versions.
@@anon_y_mousse "Also, if you're getting memory leaks in C++, then you're writing your code wrong." Well yeah that's how memory leaks _usually_ happen and it's not unique to any language. No language or compiler can check for memory leaks deeper than you can see with syntax, because it can occur in various states, at various times in various cases, that are not trivial for a compiler, let alone a human, to spot. "Idiomatic C++ should never have memory leaks", what does "idiomatic" mean? Do you have reference for every possible idiom that makes it ideomatic for every possible problem you may encounter? Idiomatic is a loose definition for using a _syntax_ of a language, not for eliminating things like memory leaks. Iterators, which are arguably idiomatic in Rust (and C++) doesn't solve every problem, whether you use C++ or Rust.
@@CallousCoder That's not a leak, that's a lifetime issue. It's still a problem, arguably a more serious one, but it's not a leak. I agree with the poster above that idiomatic c++ should never leak.
@@isodoubIet the effect is a memory leak, memory keeps leaking away to the point the user gets an OOM. And those things won’t be governed by any idiomatics. And they are very real. I’m not arguing that RAII doesn’t significantly reduce the risk of memory leaks with 24 years ago when I started C++ they are for more rare for sure, but they will not be circumvent all memory “non return” (let’s call it that then) issues. That’s what the original poster sounds like they mean. You cannot be sure unless you test drive the application extensively and profile the memory as there can be memory leaks for sure. Take Chrome for example 🤣any sufficiently complex 3D application has/had memory leaks galore. From AutoCAD, Maya, to Blender to Modo I had them crash with OOM errors.
Oh, but there are languages that never crash! A cop out example would be anything without runtime exceptions and the has its standard library built around only total functions. So indexing into a list would be `(List, Nat) -> Option`. This is a bad answer, because you will still have to consider apparently impossible cases (like assertion violations) and do _something_ then. The most sensible choice being recording the program state so you can debug it later. An actual example would be languages that let you show to the compiler a case is impossible. So instead of indexing into a list you could use `(Vect, Fin) -> T`. This is a function that takes a list of a given length, and than a finite number - between 0 and that length. What's surprising, it is possible to statically typecheck this. That is, without knowing the concrete value of n the compiler can verify your program uses this function correctly. The big problem is such languages are niche and generally lack libraries and tooling.
I suppose functional languages come close to the hypothetical language you were talking about, only adding things like monads in the part of the program that has to interface with the outside world..
@@Lttlemoi he's not talking about a hypothetical language. dependently typed languages exist and let you do this. they just aren't very useful for writing programs. but they are still cool. My favorite language in this family is F*
I don't think it's fair to say all array accesses are checked at runtime. If you use an iterator, they aren't, at least that's my understanding. If you're writing an algorithm that will make random access, has hot path perf requirements and can know it's safe to subscript, use the _unchecked variant, easy. I don't get why everyone assumes that experts should not use unsafe{} in any circumstance. If they'd otherwise use C++ to get the same perf, it's still strictly better to annotate that behavior and get memory safety by default in 90% of your code.
I can't think of a case where using modern C++ has ever resulted in losing memory safety that wouldn't have needed to be lost in Rust anyway. Every time I've ever run into this its specifically because I used C over relying on C++.
@@diadetediotedio6918 Memory safety is trivial in C++, if you use RAII for memory, especially with smart pointers and containers, you won't run into memory safety issues, if you use the C++17 and C++20 type-erasure types like optional, any, and variant, and if you include ref_wrapper, you can do pretty much everything without needing to deal with memory at all.
i hate rust cuz it has stupid syntax even harder than c++ ; at least in c++ you can use struct and functions without metaprogramming and any defines etc etc ... but rust cannod do that it makes you to use tons of ampersand single quites arrows and buch of moronic stuff which is time consuming and boring . evne vlang seems much better than rust Xd mozila they themselves are using llvm which is written in c++ yet they are saying rust is better lolz with glitching syntax lolz at least they could make it easier why all those boring ampersands single quotes arrows and etc etc am i with you rust is a bad tool
C++ iterators are also zero-cost by design. They messed up with ranges somewhat in that regard, not because the abstractions have overhead, the c++20 ranges abstraction is also zero-overhead. But it prevents you from writing optimal code in a lot of cases because the nature of how the chaining works discards information about the underlying structure which makes the optimization a lot more difficult for the compiler in some cases.
@@thekwoka4707 Ah, no. It's not about compiler flags. C++ iterators are the thing almost everything uses and those are completely fine. Ranges are built on top of iterators as well. (Well, ranges build on the newer iterator concept that works with a sentinel tags but that is besides the point.) One example of a typical C++20 Ranges problem is that if you have a sized range (which means the length is known) then this sizedness gets lost as soon as you chain any kind of range operation. This size property of the range can not propagate through the chaining pipeline in C++ and that is why its almost impossible to optimize those as effectively as they can for the traditional iterator loops. I think the plan was for compilers to be able to optimize this reliably but it turned out this is not actually possible. If the compiler can inline the entire thing then it can probably optimize it down to zero cost but you simply don't know if your compiler will actually be able to do that and you have to check by hand for every single case.
Getting into these kinds of articles its easy to read it as a "X IS SHIT!" "STOP USING X, Y IS BETTER" kind of thing but that's actually not what this one is like. He's "why >I< think ..." and the conclusion is just alike! I have to give him that! So in his situation: keep going dude! But on the other hand I'd say it's also a dangerous take: Most people are not that pro and low skill peeps might take it as an invitation to keep implementing serious stuff in possibly dangerous ways.
I agree. Although I've seen youtube video where Herb Sutter mentioned how the government put out a report specifically calling out C++ as a unsafe language and that it should be avoided. (Unfortunately I can't remember where that video is now.) So I guess the standards committee might still have to take safety seriously, maybe more seriously than it really should need to.
@@diadetediotedio6918 Yeah, it's crazy people are complaining that safety is a high priority for most projects. In fact, safety isn't prioritised nearly enough. I hate this "move fast and break things" attitude, to be honest.
For usual/simple programs at least, Rust is so much easier to use and setup/compile. And everyone else can easily compile my program too with `cargo build --release`. That's it. And, Rust comes with guardrails builtin, which yes can be time-consuming to work with, but helps noobs write safe code by default. And it feels like alot more is caught upfront at compile-time, and I don't need to install special compilers or other software to catch them (nor pass special flags to the compiler). My only gripe with Rust is the macro system. MBE's quickly become an unreadable mess and can be too limited in certain cases, while proc macros require too much setup/code and an extra crate (why?).
His take on crashes is weird to me. I don't particularly think 'Well ~I~ don't write bugs so whats the deal?' Speaking as someone writing C++ professionally for the past 5 years, hiring young C++ devs is an absolute nightmare. Segfaults everywhere, stack overflows, abusing the type system, etc. He's right that Rust wouldn't FIX this, but it definitely would help. If we could ensure that only C++ experts could write C++ code, I generally like C++ more. However, the number of C++ experts is dwindling, as the language is reaching the heavens in terms of complexity, and the number of Rust experts are increasing. Idk, I'd rather hire the Rust devs than the C++ one.
I think "trees" should be arrays where parent/child indexes can be calculated with MATH. First off this is probably faster than basically maximizing cache misses by jumping around memory, but also makes it super easy to insure minimum possible height
I'm not sure but wouldn't there already such an overhead in latency that you might as well just use a higher level, safe, simple language? But maybe you could orchestrate at the process level, not containers, but then we're just going back closer to metal anyway and the micorservice is just the smaller scaling unit conceptually.
C++ do not brings any benefit. Extremely hard to learn and use it well. Then your super fast microservice is packaged into a docker image, uploaded into kubernetes where it runs in a pod.
Acording to jetbrains survey from 2021, 8% of people who work on microservices use C++ and C. Most of them use java, javascript, python. Can you trust this survey? Duno
13:24 that's not "fair" at all, you can't just say "oh, of course you could write the *correct* thing, but i pretend to be stupid just to make my point"
My fast take on Rust vs C++. They are stairs that goes to thousand steps into the sky. Rust's staircase has firm and strong handrails and also provide you with a harness, whereas C++ it might have handrails, but you have no harness. How much can you trust yourself to not f- it up, and fall, and if you're in a team, everyone tied together, how much can you trust everyone from not falling?
Maybe the team aspect of your question is what's really important here. Folks programming in C++ might be writing < 1 memory/concurrency bug / X LoC, but what if the compiler could bring everyone to that level or 0 bugs at all, even if it's sometimes annoying to "fight" it? Frankly I'd prefer the latter even if I *think* I have reasonable competency to produce code that does not have memory/concurrency bugs in C++ (at least not by the time I've identified and fixed them with tests).
Rust dev here. First of all, undefined behavior is non local. Yes, the compiler optimized that part with /2*2, however, the Standard says the whole program is affected. The compiler can literally do some doomed tricks like executing code both inside if(a>0) and else. Thats not what you want in your trading platform. No, memory accesses in Rust arrays are not constantly checked. Due to MIR optimizations the compiler may check it only once or even remove the check at all. No, with Rust one does not pay safety overhead. These runtime checks are done in the same places where C++ programmers would need to insert them. They just don’t. Which results in undefined behavior, angry clients and money loss. You were easily tricked by his flawed logic into promoting his article to 250k people. Congrats 🎉
C pointers are intuitive in that they are literally a variable storing an address in memory.
Don't get confused between pointers and heap allocation and deallocation strategies.
Thats my view as well, but I do believe that pointers are really difficult up until they click, and then they are easy
@@pxolqopt3597 Initially it seems redundant to have an indirection to your variable when you could just use your variable because your first pointer lesson is
int a = 42;
int* aPtr = &a;
// do stuff in the same scope with aPtr instead of a for some unexplained reason.
Then you find out about dynamic allocation needing somewhere to store the address memory was allocated and it makes sense for that, then you see the benefits of pass by reference instead of by value for large objects and realise that's actually passing a pointer too. Then it make sense.
@@pxolqopt3597 I think its just because they're explained badly, the concept that everything in C is really just memory needs to sink in for new users
@@pxolqopt3597 Yeah, the challenging thing is the concept, which really isn't about the coding. Once you piece it all together that variables also point to places in memory, it makes a lot of sense.
@@burnttoast111as soon as my professor spent 10 minutes drawing a memory diagram it clicked for most of the class
"Zero cost abstraction" in C++ was never intended to mean that the feature is totally free to use. Bjarne Stroustrup defined it as meaning that using the abstraction should not cost more than achieving the same thing without the abstraction.
so its totally free to use
@@sineptic No. For example a class with methods is an abstraction over having a set of free functions that take some pointer to a common data structure as a parameter. For example iterators are an abstraction over having a loop that operates on a sequence of data items. Bjarne's idea is that using those abstractions should have the same performances as doing it the old way.
It is quite literal. The abstraction costs nothing. The operation still costs the same.
@@maxdigiacomo4608I'm not even sure that is true in all labelled cases as I have heard an Ada and ex Rust user talk about how the Rust zero cost abstractions were slower.
The feeling I get when using C++ is that if you are really careful and write your code just right, a const here, move semantics here, etc., then you can end up with code that looks very high level (in the sense that a few lines can do a lot), but also very efficient without a lot of memory copies or pointer dereferences. But, the high level part is just an illusion, because you have to think very low level to get the high level code to work efficiently. It becomes almost a game.
This. I wouldn't call it illusion though. It's just that you need a humongous amount of knowledge to do it right.
@@artxiom and you'd have the same problem with any language, except most other languages don't let you do that.
You dont need move semantics. It's optional. I don't use it at all.
And if you're less careful, there are "hidden" copy constructors and stuff behind a simple statement, like A = B + C + D;
@@Vitorian That's silly, almost everything is optional, but move semantics are one of the most useful concepts in C++. If you don't use it you are doing it wrong.
As someone who learned the basics of programming in C in college, pointers are really easy and intuitive.
they are
Amen
Yeah, the problem is when you forget to call "free" or accidentally use the freed memory, or dereference the null pointer... By itself it's not that hard, but sometimes you just forget that precisely BECAUSE it is easy(kinda like forgetting a minus sign in whilst solving a Calculus problem)
@@ЧингизНабиев-э2г
Those are all a solved problem in C++, and has been for a long time.
@@ЧингизНабиев-э2г Well, don't forget.
I lived on C++ for decades. C++ is great when you control *all* the source code. Rust excells in allowing you to use 3rd party libraries painlessly and efficently.
I partially agree with that. C++ as a language is a middling experience, but dependency management is god-awful, and it accentuates the annoyances of having decentralized build systems.
also rust libraries tent to have 0 documentation, or some auto-generated garbage
@@qeqsiquemechanical9041yes because C++ libraries are the hallmark of documentation
@@qeqsiquemechanical9041 "rust libraries tend to have 0 documentation" you sure about that? usually rust documentation is pretty great.
Whether or not "[Technology X] allows you to rely on code that you don't control" is an advantage... well, that's up for debate. Just ask NPM.
Why i think C is better than C++
I mean it just is
Ditto. C-flavored C++ all the way
Why I think x86 ASM is better than C
Why I think
Why I think VHDL for FPGA is better than x86 asm
Love how he just jumped through the section called
' the "unsafe" excuse '
After reading... Thank god, it was a horrible argument
@@thebrowhodoesntlift9613 I mean, you don't expect us to just take your word for it, do you? Are you willing to take a shot at giving a synopsis for why you feel that way?
@@mkvalorI don't, I just gave my opinion... But, I'll expand:
he's equating 'unsafe' rust to writing assembly in C++... but unsafe rust is still rust, valid rust, assembly isn't C++. The real equivalent would be assembly in Rust vs C++. Unsafe it's very much a part of the language, it's just removes some of the EXTRA guarantees that rust compiler gives you. It's much more similar to templates in C++, it's a dirty way to hack around things but it is still the language itself.
The rest of the article was really good though!
Anyways, I might be misrepresenting him so here is the segment from the article for you to decide:
"The “unsafe” excuse
Upon reading this, some pundits will throw the recurring excuse “oh but Rust has unsafe mode where you can do yada yada”, so let me preface this article with this analogy.
Calling “unsafe” as an escape for Rust’s rigid system is the same as saying that C++ can do inline assembly so you can copy/paste infinite optimizations done outside the compiler infrastructure. That defeats the purpose. We are analyzing what the language can do in normal operation is, not the backdoors that it allows to open when it gives in to operational pressure."
@@mkvalor I mean, it is a pretty bad argument. Sometimes you really need to do some inline assembly (this is supported in both rust and c++). One case that comes to mind is byte comparison for cryptography purposes. Often times you _really_ need to compare bytes in a constant amount of time to avoid timing attacks and make sure the compiler doesn't optimize the code and return early on the first differing byte. If you have a need for it, why would you not use this feature?
then you might say "then why don't you just use assembly?" well, c++ and rust are still _much_ safer to use than raw assembly, and containing the hand written assembly code into small pieces where needed is better than throwing your hands in the air and porting your entire project to assembly.
honestly, I'll probably point to this example for people that make the similar argument against rust of "but you have to use unsafe sometimes, so just use c++".
@@mkvalor He says unsafe Rust == assembly in C++ which isn't remotely true. Rust has inverted defaults to C++, it is safe by default and if you want you can slide back to C++-style wild west with no safety via unsafe {}. It will still be valid Rust, which is crucial to understand, but it is your responsibility to make sure it is correct & safe as you explicitly asked the compiler not to check it (e.g. using borrow checker). But the syntax and most rules will still apply as normal with some additional rules specific to unsafe blocks. Completely orthogonal to the inline assembly which can be used in both C++ and Rust.
Safety is really about correctness with regards to the indented behaviour/specification, almost all segfaults are due to a faulty proof of correctness. That’s what rust guards against, whether you accept potential malicious input or not is moot.
A very good point and really the reason I like rust so much compared to C/C++. Once I get a program in rust to compile without errors or (significant) warnings, I can be almost certain that said program will work exactly as I intended (with maybe a few logic bugs and the like) because the rust compiler made sure to check that the code I was writing was correct. This means I only need to be a somewhat decent rust programmer to write good rust code, whereas I would need to be an expert C/C++ programmer to write good C/C++ code.
I wish more rust enthusiasts would emphasize that correctness is what the rust compiler really helps you with and that safety is more of a happy side-effect.
All input is malicious, always! Because the final user will always find a way to insert data coming from an unknown origin into the program...
It's great to watch people argue over whether an abstraction is or is not zero-cost without even thinking to read the assembly generated by the compiler (I checked, and unique ptrs are indeed zero cost and do not appear at the assembly level)
It's common knowledge that they are free, but I guess there is value to occasionally verify those things.
Yep, the only “overhead,” if you can call it that, is that the type tells the compiler to call the deleter when it goes out of scope, which is no different from any correct code you would have written with a raw pointer. The specific deleter to use is included in the type of unique_ptr when you instantiate it, and the type is only visible at compile time. At run time, it’s just a pointer.
undefined behavior means the compiler can do whatever he wants.
they
it
just to be more specific: "implementation-defined behaviour means that the compiler must choose and document a consistent behaviour. unspecified behaviour means that from a given set of possibilities, the compiler chooses one (or different ones within the same program). undefined behaviour means arbitrary behaviour from the compiler."
in c++ and probably in c, the "int" type doesn't have explicit memory allocation value, it can be 1 byte, 2 byte or 4 byte, that depends on compiler. So probably in his case compiler turn that numbers i8 under the hood and because in rust there is no implicit memory allocation on integers, compiler basically makes that operations with 4 byte values.
that case with the multiplication seems more like an unspecified rather than undefined behavior situation imo
Using rust because safety and borrow checker ❌️
Using rust because hating on cmake and others C/C++ build system ✅️
That's actually what got me to make the switch
CMake is just a crappy open-source hack. I will never use it by choice. It has found its way into Microsoft Visual Studio as an option, but again, VS already does a way better job on its own as a C++ build system. Been a C++ developer for 22 years. The only time I've needed CMake was to import some open-source code that didn't have VS project files... and it took forever to get right. CMake is an absolute nightmare. Additionally, Microsoft Visual Studio is a much better development IDE (for C++) than what's available for Rust. I've done some Rust coding in VSCode. Not great.
C build systems are like JS frameworks, I absolutely hate all of them and refuse to learn them. Someone else can waste their time and memory memorizing all the nonsense needed to setup and compile a C project 30 different ways using 30 different tools.
UB doesn't mean compiler WILL optimise the thing, it means it MAY do that, it may also choose NOT to do that which would yield inconsistent results, for same input on different machines which have different C++ compilers.
Like Prime said, it works for the authors use case. They can check all compilers and find the one that works for their hardware. HST developers pay so much attention to the smallest parts of the trade path that they sometimes write their own routers because itll save a couple of nanoseconds. Its crazy world out there, and they Know what the compiler does because they check. There is a good talk on yt about the level of optimisations they do and its mindblowing.
@@TrueHolarctic Except it is not guaranteed. I work in high frequency trading too and relying on stuff like this is stupid. One day it holds and the next day compiler just deleted your code (because it's UB, it can do whatever, especially Clang is super trigger happy and deletes most UBs it detecs). So yeah, crazy optimizations are done but this is beyond crazy. It's called "undefined" for a reason.
@@TrueHolarctic At that point shouldn't you just use the compiler as an intermediate and then just hand tune the assembly to perfection?
@@Resurr3ction UB tends to fall into one of two big camps, it gets optimized out in some manner, or it does something kinda like what you would expect strangely, it depends on what specific behavior you're asking of the compiler as to what it will do, and there are some manners to actually force UB optimizations, 95% of which are compiler intrinsics (like std::unreachable is implemented via compiler intrinsics and is guaranteed to trigger optimizations that either trap or remove the unreachable case)
compiler assumes undefined behavior will never happen. So it can optimize without considering it as an edge case.
In c++ std::unique_ptr is not free(in some not trivial cases), because c++ standard says that the object that is std::move from should still be in a valid state, so when you std::move std::unique_ptr the variable that is moved from is basically a nullptr, that is why before memory is freed by the destructor of std::unique_ptr it checks the internal pointer is null, that is the only cost of using unique_ptr.
I dont think anyone understood what you want to say
I'm very happy you covered this article. I love rust and I am building great projects with it. But I feel like the common take that rust is so superior to C++ has turned into a monocultural echo chamber. I have helped build multiplayer game backends (for published titles) in C++. The truth is that many deployed C++ projects run fine for decades with no stability or security consequences to the contrary of the popular narrative. So this is very worth stating out loud.
"The truth is that many deployed C++ projects run fine for decades with no stability or security consequences to the contrary of the popular narrative"
Until...
And that's the hard truth for (to the best of my knowledge) all C and C++ code bases. I say this a Rust programmer with 45 years of C experience and almost 40 years of C++ experience. I have never seen (and nor have I ever known of) a C or C++ project that didn't have stability or security issues in all of that time. I've merely observed that these things hadn't been discovered yet. Until they were.
@@appstratum9747 In your experience, how many of these issues were due to things that Rust fixes? I don't have the experience that you do, so I don't have a reference for the types of issues that usually pop up in large c++ code bases.
@@sweetcornwhiskey The primary categories of issues with C++ that Rust deals with at source are virtually all related to inappropriate memory access beyond that allocated, use of memory when that memory is no longer valid for use, concurrent reading and writing of the same region of memory (by accident: race conditions) and so on. All of this is covered by the Rust language and most of these problems are eliminated before the compiler compiles successfully.
Another important class of errors that Rust avoids is the failure to code to cover all the possible states of branching logic. At all points in the codebase.
Null pointer issues are another category of errors that are eliminated. There is no null in Rust. In Rust you are forced to deal with the possibility of a value being invalid. In C you're not. In part because what invalid represents is very difficult for the compiler to understand.
Software development practices are almost always designed to limit the negative impact of the weakest members of a team. The problem with C++ is that even on the miniscule chance that everything is defect free today, at any point in future someone on the team might break it in a serious way: by introducing a memory-related defect that hasn't been detected during testing.
All C++ projects have decades of vulnerability to weaker programmers introducing critical defects similar to those already mentioned. Or corner cases that weren't expected. Rust protects against much of this. Even if the software was apparently free of vulnerabilities when it entered production C++ can have serious vulnerabilities introduced down the line.
Another category of errors that Rust eliminates are those related to interchangeable datatypes. Int and char for example. Or relaxed casting from one ordinal type to another. In C++ you can get away with this with possibly very nasty surprises at some point in future when you get unexpected data. Not so with Rust. It's very strict.
I wish I could spend more time answering you. But I hope I have given you at least some idea.
The reason Rust is so frustrating for many at the outset is that the compiler forces you to address all the things that you would otherwise have forgotten to or didn't even consider. Whereas C++ will simply compile and you will find some of those oversights in testing. It's the ones that you don't find that become security vulnerabilities down the road.
The Rust compiler is criticised for being slow. And it is. But it is doing so much more for you that nobody is going to do for you when programming in C++. It annoys you and nags you. But it won't let you build software until you deal with the issues.
Rust holds developers to account for their decisions. It's obstructive in that sense. The pay off is that it's much safer. When you want to do something unsafe you're very well aware that it's unsafe.
@@appstratum9747 It's insane to me that anyone thinks that C and C++ devs are all 100x devs who know EXACTLY what they're doing... Code bases are living and organic - they flex and stretch before they collapse and wind up, they are often subject to change and improvement - the speed that you get when you commit to Agile development practices or follow the "Working, Right, Fast" pattern (MVP: get it working, first pass: get it right, last pass: get it fast) ultimately comes at the cost of safety. Shift-left is a real thing, it's a MASSIVE thing, it's hard to get a job without being proficient in unit testing and a red flag when somewhere doesn't do it: Rust shifts memory errors LEFT. It's made it simple to solve these errors, and helps speed up developers to get safe code who might not be as proficient by way of teaching. C++ is NOT as good as Rust in virtually (haha, virtually...) any situation, and in Rust you can just invoke unsafe code to get C-like syntax: Rust wins, every time, to say otherwise is cope lol.
Addendum: And there's nothing wrong with C. There's plenty wrong with C++, but at the end of the day, it produces results which fulfill requirements. Rust is a very natural evolution of systems programming languages: it's just better at what C and C++ do. It's just better at what C tries to achieve. It's just better when safety is a requirement, and if you have any respect for your users, then safety is a requirement.
It’s almost always an issue with the management of the project (bad code style, etc). Someone who knows how to write c in a way that easy to proof read and verify does not write vulnerable code. The issue is with massive projects where vulnerabilities arise from inexperienced developers and cannot be easily noticed due to poor code style. See nasa’s c style guide for an example of this.
Regarding the compiled examples Rust has different save variants for simple operators:
* Is unchecked multiplication that is, as the article states, not checked in release mode
But there existist: .checked_mul(rhs) .saturating_mul(rhs) .wrapping_mul(rhs) which all have different begaviour regarding the overflow
With that convidered what the rust program means is that programmer wants the unchecked mul and div and thats what the compiler outputs while in c++ the compiler is following this hidden list of UB rules that you need to know to decode what the program would compile into.
In my opinion Rust better reflects what the programmer expressed in the program than C++
What the programmer expects is relative to what language that programmer is used to.
@@AlexSmith-jj9ul Some languages are still more expressive than others
Iterator optimization is actually done before llvm. Also struct layout optimization is happening before llvm. Finally, some vectorization stuff is prepared in MIR. Naturally, clang does neither of these things, so it often fails to vectorize obvious things.
To beat a dead horse:
pub fn noop(x: i32) -> i32 {
((x as i64 * 2) / 2) as i32
}
gets optimized into an actual noop (as we are proving that overflow can not happen). So, a correct program in Rust gets correctly optimized, and shit program in Rust gets to crash rather than UB your entire production.
I don't think I am being arrogant when I say I am a C++ expert. I like C++, but my biggest problem with it is that it is impossible to teach, and many fairly common programming styles in C++ are basically "experts only" (if you care about safety and correctness) and completely impenetrable to junior engineers. They might very well even think they understand it fully, but it is not incredibly likely they do.
I took a C++ class as a junior programmer and hated it so much. Stuck with C# instead and went on to do well. It definitely didn't appeal to me as a beginner, maybe that's just me though
I agree c++ is a language that you can only learn and not teach.
Could you give some examples of C++ expert styles?
What do you mean by programming styles?
prvalues, copy elision, SFINAE etc@@cziwochel3415
learning to code in harmony with the borrow checker in Rust and then coming back to C++ will definitely make you write better C++.
I'm a Rust newby but C++ veteran. As i understand it, unique_ptr is Box and shared_ptr is Rc/Arc. shared_ptr and weak_ptr operations are threadcsafe (but not necessarily the pointee).
That comparison sounds about right to me! I think *std::shared_ptr* in particular is probably more akin to *Arc* in Rust because, if I'm not mistaken, the internal reference count is itself atomic (please feel free to correct me).
@@xplinux22 Actually, std::shared_ptr is more akin to Option because in C++, std::shared_ptr can be null, while Arc cannot.
@@white-bunny Right. If we want to get very deep into semantics, we have to remember that Rust references and smart pointers are not nullable unlike C++.
@@xplinux22 C++ references aren't nullable either.
@@spell105 but c++ references aren't smart pointers.
Unique_ptr are free 95% of the time and usually just get compiled away (though ofc. the deconstructor will get called though that you need to call either way for your program to be correct) with that rest 5% of the time is when you provide a custom deleter (though even that overhead is basically non-existent). Additionally in some cases you need to default construct a unique_ptr and then move the object into it. In this case the default construction can have some overhead but can also be compiled away in some cases.
As someone learning C++ with some C# experience. I dont like Unique_ptr, its more annoying to remember and use then just slapping * and & symbols.
Sure it doesnt look as cute having Unique or ComPtrs and aligning the spacing perfectly. But for learning and do things without a team/personal. RAW all the way
Unique cases are only "not compiled away" in case the object it is holding is not trivially destructible you can check for if the object is non destructible using a cpp concept static assert. Even then it's mostly compiled away for most -flto cases. It's pretty awesome.
In the general case the destructor also incurs an if-not-null check for the delete call or let's delete's internal logic do that check. So pick your poison: redundant if-checks or redundant delete-calls. You are at the mercy of the SSA to get rid of these. Admittedly most programming patterns are amendable to this.
@@freezingcicada6852 unique_ptr is not replacement for & and *, it's meant to be replacement mostly for `new` and `delete` (and other for custom allocators and deallocators)
if using `new Class()` and later on `delete obj` rather than just using `std::make_unique()` and letting the compiler handle later on deleting of the objects is annoying for you, then I'm not sure what to tell you. But from what you wrote, it feels more like you're using it wrongly, imo it can be especially useful while being member variable of classes, where you can just omit having to handle destruction of it in the dtor and not having "space" for leaking memory.
See Chander Carruth's talk on cppcon2019 why unique_ptr is not a zero cost abstraction ruclips.net/video/rHIkrotSwcc/видео.html
Unique pointers are a (practically) zero cost abstraction at runtime. It's interesting that such a experienced developer can be completely oblivious to core C++ concepts like RAII.
Did you measure the performance cost of unique_ptr vs raw ptr?
@@vladimir0rusgo to godbolt and check the generated assembly. Most of the cases, the unique ptr is not compiled into the code because it's literally just replaces a manual delete.
@@vladimir0rus Measure what? It just limits how a pointer can passed. It shouldn't generate different code from a raw pointer.
@@vladimir0rus
unique_ptr is just a wrapper for a raw pointer that disallows copying and has some additional functions. Do you honestly think there's a difference in performance for any compiler worth its salt? Adding functions and compile-time checks does not affect runtime performance.
@@greg77389 additional code always affects runtime performance, even if it is code for exception or some additional function (because L1 code cache is small). Even if it just a wrapper it might add code which will be present after optimization stage.
17:03 might be just me, but pointers were super simple when I was first learning C++ in my first year of college. The problem I had to help people with the most, was the concept of inheritance and access control. They all got the pointers thing pretty quick. I even wrote my own ref-count smart pointer classes (this was well before C++11) to use in my school projects.
Pointers are def easy, it's just an index into the 4GB byte array we call RAM.
(I say 4GB as I recommend anyone to learn systems programming in 32bit first)
@@npip99 In some systems pointer was not a plain index - in real mode of x86 pointers are made of segment and offset. Also null pointer was not always equal to zero (in other systems). And pointer concept is not very hard to understand, but effective dynamic memory management is - and most of mistakes in programs are related to the memory management (memory leaks, dangling pointers and so on).
Some applications confuse people, like void or function pointers. But since I came from higher level languages where I constantly made use of callbacks and arrays holding different data types, I understood the application right away.
The idea when the unique pointer was designed and added to the language was that compilers should generally be able to completely eliminate it at compile time so its the same as rust. But in practice it turned out, because of exception stack unrolling and translation unit boundaries that there are a lot more instances where it can not be optimized away as the committee initially intended. It has gotten better with more advanced link time optimization but you still have to check if you want to make sure it actually gets optimized away fully and in normal use it probably won't be in some scenarios.
C++ is probably the Pioneer of no cost abstraction, modern C++ can be ridiculously optimized due to it(if you know how to use it well).
It's pretty damn amazing seeing a 6 line matmul function running that fast which it has no right to.
The tooling is hard to learn and idioms are hard to follow but at the end you can write with your knowledge and compiler veryy efficient code.
"if you know how to use it well" is always the argument I see thrown around. What if a language would just have these zero-cost abstractions built in where you don't have to "know how to use it well" and it just works? Sounds crazy, I know.
compiler likely will do better job optimizing than you
@@dealloc Maybe you are just very lazy and/or not smart enough?
@@dealloc no language is like that. They will all run like poo if you don’t know the pitfalls.
@@dealloc You are now describing C.
I never had an issue using ASM. In fact it was a requirement when I went to school ASM was required before C or C++. I also was required to learn it in the military. I tend to find most the programmers I know who had to learn ASM first are better programmers by far than the one's I know who didn't.
The main issue is not how things are taught, but that so many people get into CS for the money and not for the interest. People who like coding will learn more about it and it's concepts, people that want to make money easily will hit a roadblock
I believe this is an example of survivorship bias. You’re only looking at the people who started off learning assembly and *kept going*. You are not considering the many many people who would start learning assembly get intimidated/ make minimal progress and then quit. Those programmers who started using assembly are likely naturally more talented and would have succeeded regardless of what lang the started with.
@@rkidyalso looking at ppl with a 20+ years of experience since ASM was a thing back then. Kind of like saying ppl who learned to drive on stick shifts are better drivers.
btw, I had classes in 6502 and 8088 ASM and I just do Python now ;)
Interesting....Allah bless you my feller. I was thinking about this. Am a ME by trade, struggling w/ employment but have a REALLY nits strong interest in coding including niche stuff...or stuff that is niche so far. I will take this as a sign from God (maybe) that I am on the right track. @@DoctorWhoNow01
@@DoctorWhoNow01 the main issue with computer science majors is that most of what they should be learning as part of their degree has been shifted over to electrical engineering and computer science retained pretty much just the theoretical stuff
9:15 Though even performance cost of the bounds checks is a bit nuanced.
Like when using a iterator over `Vec` these checks can be mostly compiled out.
wdym compiled out? the vec size is not known at compile time.
@@hwstar9416 No, but when iterating over every element you don't have to test the length every single access.
@@hwstar9416 Rust bounds checks are minimal. If you have iterator loop , you know it is safe. If known size vs known index - no need for check. If you iterate from 0 to n (n user given) in array of size X, it will do bounds checks, but if you make a line before if(n < X ), the bounds check will be done only 1 time not for every iteration. And even if you do full check, max performance penalty with special loop for bounds check is 10%. typical is 2%.
Yes but likewise if you iterate over every element in C++ with a range for loop / standard algorithms you're also not going to access invalid memory. So yes, Rust incurs no performance penalty but it also has no safety benefit. This case is the same in both languages because there is no tradeoff to make. It's the raw access case where C++ chooses performance and Rust chooses safety.
"can be" Is nearly sufficient when you are dealing with high frequency trading. 😉
Liked this article, especially the part where it questions the safety claim. I know I am learning Rust because it looks easier to learn than C++ today and I don’t strictly need to work with it.
people don't realise the ridiculous amount of details one needs to know to write proper C++, to write Ok rust you don't need to hold so many miniscule details in your mind to do so, just the ownership thing(and lifetimes that are actually difficult)
That's simply the price you pay for having a language that is actually useful, allows you to do what you need to, and doesn't constantly get in the way. If you don't need the features of C++, feel free to use something else.
another skill issue. to write good c++ you have to know c++. what's the problem with that?
C++ is not difficult to grasp, it just has an enormous amount of details you need to know, to write it properly. Sb unironically said when teaching it don't use this, that's a design flaw
If that is a problem then just write C, the complexity of C++ is just there for when you need it, but nobody is forcing you to use it and instead you can write proper C code that is more simple.
@@ColinBroderickMaths uh, the high number of miniscule details is constantly getting in the way, that's the point
Errors as return values and throw:
Erlang IMO has the best separation and does both. Exceptions are... for exceptional situations. Things you haven't dealt with and rarely should try. The "let it crash" idea is important to the platform. (Though that works only because of the actor concurrency model.) Errors are known failures and should be handled as such. Most languages that have exceptions use them for both and it just confuses the situation and due to not having concurrency which allows you to separate out responsibilities such that you can "let it fail."
TBF, you can implement Exceptions in Rust using panic hooks, but, I'm too stupid/lazy to dig deeper into the topic (it's also probably unidiomatic and destroys the correctness/soundness of your code).
Erlang assumes the role of both an operating system and a programming language here, though. For long-running processes in most environments, the equivalent of erlangs 'let it crash' is programs just literally crashing and being restarted by something like systemd. Its outside of a more general purpose programming languages scope of responsibility to bother caring about this.
But i do agree that using exceptions as control flow is dumb.
@@sproccoliI disagree it is outside the scope. These general purpose languages are providing concurrency features which lack good error handling making usage far more difficult than needed. They treat concurrency as a 3rd class citizen and it is really harming the space. And the language typically *must* worry about this topic because without language integration they will always be lacking. Which is why every actor model grafted onto other languages always feel incomplete and more fragile than Erlang.
Concurrency will only get more important going forward and if languages really need to catch up with Erlang wrt concurrency (and error handling).
I left to go work in C++ because the alternative was to stay and continue coding in Matlab
F in the chat for everyone still using matlab
@@isodoubIet F
F
F
F
As someone who just started learning Rust, and who has no computer science background. I'm really really proud of myself for understanding 50% of what qas said in the video and in the comments. especially the comments 😂😂🎉
Couple of notes on optimizations:
1. What if you WANT overflow semantics in multiplication? Like, what if you're trying to clear the top bit? How do you force a C++ compiler to generate appropriate code?
2. Exceptions as return value are not a 0-cost abstraction. Real exceptions can be implemented by having a static mapping of function return addresses to catch blocks. This mapping can be read when throwing. So long as an exception is not thrown, the program makes a return behave as a return. Turning every throwing return into a switch creates run-time cost in the event that no exception is thrown.
Java and Swift allow exceptions to be a 0-cost abstraction by forcing you to declare which functions do or don't throw, with Swift also requiring a decoration of the call site. This means you need to know an exception is there, and you must handle it or account for it.
As for "You don't know what your state is after throwing", that's what RAII is for. You should ALWAYS have a clean state when returning from anything, any way. The stack rewind is meant to guarantee that.
The unique pointer is free in an optimized build or your compiler is stupid, unless you do unidiomatic stuff. The problem is the destructor which tests if the pointer is null and otherwise frees the pointee. On a null pointer, ideally the check can be optimized away or would be necessary anyway. Unique pointers are zero overhead. It does things for you which you want to happen.
Shared pointers have a reference count which is an obvious cost. It's honestly a complete different use case. The old C++ code base that I continue to modernize can live completely on unqize pointers, it doesnt need shared pointers anywhere. I threw so many of them in the bin.
You can make linked lists in rust without unsafe, much easier and cleaner, if you enable the polonius borrow checker, which is currently experimental. It is an issue in stable rust, but hopefully not for much longer.
You can get rust (nightly) to not even produce code for the function and inline it as identity (using unsafe). Calling it "the unsafe excuse" is IMO a misunderstanding of what unsafe does. Unsafe makes it sound like you're now free to do as you want, but unsafe allows you to do only three specific things you could not do before, not throw out the entire promise of rust.
Which is a limitation that NOONE competent needs.
@@shinobuoshino5066 Sometimes, it's better to assume everyone is incompetent.
The article is wrong about SEGFAULTs.
They cannot be cleanly handled. Calling anything after a segfault is undefined behavior. Parts of functions could've been overwritten. Just calling printf might yield a very unexpected result after a segmentation fault.
Yes, this is very important to stress. All out of bounds accesses are truly bad in C. When you're lucky, the processor will catch the out of bounds memory access, when the accessed byte is on a page that hasn't been mapped into memory (X86 pagination), and in that case the kernel catches the fault and allows the program to crash immediately. But this good behaviour is more the exception than the norm. When not lucky (in the majority of cases), the out of bounds access touches a page that is mapped into memory, the out of bounds goes undetected by the processor, and whatever malware that was crafted into that png image (or pdf, of whatever!) can now happily start encrypting the disk... Because, I think we have to stress that the other wrong point the article makes is to assume that a computer that is not a server is not under attack! Everything is under attack, always. Open any document, and the software is potentially under attack! All data are unsafe! Just an example: think about an online library. They OCR books and magazines, and put the contents in a database. Guess what happens when they process a magazine about SQL attacks? Yes, you guessed it: the "DROP ALL TABLES;" gets OCR'd correctly, and deletes all the data of the project! The whole project was offline, and the attack came from a piece of paper!!! This is just an illustration that any data can contain malicious code!
As an older guy, I found that understanding assembly was easier than pointers and not harder. I started programming assembly on a Commodore VIC-20 before I ever heard of pointers or C. If you understand the addressing modes in assembly then you just naturally understand pointers.
hi dad
This is a bit of a tangential point, but I find it strange that many Rust developers put the language in the forefront as if the language is a feature
Languages are tools, the end user doesn't care how it's made
Depends on the context and who's the user. Rust is used to build a lot of tools and libraries in which the language does used has implications and the users usually care.
"many Rust developers"
You started biased as this is not a "Rust thing", this is a "programming thing". Just as I saw many, many, many times C++ programmers saying C++ is the unique tool that should be used.
I usually use C instead of C++ because the minimum runtime is big with C++ and it's a chore to port it to weird microcontrollers. In driver developer where I do use C++, we use a tiny subset of it that works out pretty well. Rust lets you do some really wild stuff to pare down the runtime support while still retaining some of the features, or explicitly disabling them to enable some compile-time errors. Such as defining your own allocator and or disabling exception handling.
-fno-exceptions -fno-rtti for major embedded compilers(GCC and clang)
@@uis246 nooo, you cannot just do this...
I have to say I love your videos! Despite all the humor you truly look over the bias fence while not being afraid to not know all the answers. That's refreshing and inspiring in a world so full of bulshit certainties.
Totally true about the large majority of applications running with mitigations disabled. Spectre mitigated libs are not the default on MSVC and not many people change that default.
Worth mentioning that in rust if you're using Iterators you mostly shouldn't have bound checks
C++'s biggest problem is the amount of people who - last year - learned how to write C++ as one did in the early 90s. At universities.
shared_ptr is essentially rust's Arc. unique_ptr is the same as Box
A little correction. It is essentially Rust's Rc, std::atomic is Arc.
@@basilefffThe refcounting in Rc is not atomic, or is it? In shared_ptr it is atomic, as in Arc.
@@mario7501shared_ptr has some thread safety but it isn’t actually atomic
Isn't unique_ptr more like Option, but with a lot of implicit `unsafe { x.unwrap_unchecked() }`?
@@mario7501 Oh, read a little bit more on this. It seems you are correct, and I was wrong.
Point around 9:22.
That is mostly false in article. There is bounds checking but not always. Do iterator loop? No bounds check. If compiler knows your index can go max to 100 index and statically you made something size of 100.. no bounds check. If you have user inputted index to check up to that number in array - yes it will do bounds check but if you do something like if (user_input< array.size() ) (basically like you would do in C) the bounds checks will be removed on array and you will do check only once instead on every iteration on loop. Even if you do full bounds check on every iteration of loop, highest performance drop i ever saw in benchmark was 10%, mostly it is 2%. Literally compiler knowing more about types from Rust and being allowed to do more agressive optimalizations sometimes easly can overtake that.
The fact that the author has provided no code examples repro cases or benchmarks for their performance claims just shows to me that they are using the ancient technique of just making stuff up.
Basically: rust is better, easier, and more modern (maybe too new/modern), while C++ is more raw and established. More projects use C++, and learning it will land you more jobs. Rust is still in the phase of early adoption.
Pointers absolutely are simple and intuitive. Or do you consider paper with "your jacket is in the closet" complicated?
I mean, as soon as you understand that computer has a memory and it's actually stored somewhere physically, pointers are the most obvious way of working with it in every regard.
Surely, it's a big difference to just explain the concept to someone versus when you're trying to fix a bug in a complex or badly written codebase.
Hey Prime. You mentioned about implementing a Double linked list in Rust. You can break Rust ownership model by just adding an extra pointer to it.
Having an Arc that points to itself may not work. But you can make an Arc that points to itself. Bypassing the memory checks of Rust.
that does not in any way "break ownership". That's just a reference cycle, possible in every language, and perfectly valid in Rust as it's a shared ownership primitive. And since you're pointing this out as if it's specific to Rust, reference cycles apply to C++'s shared_ptr in the same exact way.
@@skeetskeet9403Sorry about the mistake. What I meant is that it is easy to leak memory that way. Rust's compiler will not bother us with it. So implementing the double linked list using two pointers will proceed just like any other language.
His arguments about memory safety not being a pressing issue seem extremely hand-wavy to me.
Coming from C++ POV, I concur.
Well, it is not a pressing issue until they lose all their money due to some UB they missed.
After that it is not important anyway because the company is no more.
Domain dependent. In video games they're not a pressing issue, for example. And the idea that because you can be memory unsafe you will be is overall a stupid point. If you wrote a high-performant game in Rust you would use unsafe a lot. So the end result is the same. If it doesn't cause very many crashes then nobody cares and we all move on. If it does then you get a bug report and fix it.
@@lucass8119
>In video games they're not a pressing issue, for example.
Considering that many games nowadays operate with real money, it isn't true. And even without money, it is very hard to debug bugs caused by undefined behavior. Time spent on that bugs better be spent on implementing features.
Also, writing correct unsafe code in Rust is easier compared to C++ so the only reason to use C++ over Rust for videogames is the current lack of mature game engines.
@@ХузинТимур games don't operate with real money, if they did, I could modify my game to say that I paid the company billions and have all most expensive cosmetics in the game instantly.
I agree with the guy. C++ is great. It's powerful. It's fast. It's flexible. C++ is my favorite programming language. Been using it for over 20 years. Tried Rust. Didn't like it. I don't want to program in a straight jacket. I don't need to be hand held and bossed around by Rust.
Rust has overflow flags which you can use to instruct it how to treat places where it is possible. Don't remember them off the top of my head but it's Rust's solution to avoiding undefined behaviour in this case, it lets the programmer decide "what do you want to do if this happens, should I give a warning or what?"
Google giving a Portuguese definition and prime's reaction broke me
/r/suddenlycaralho
not being able to deal with C pointers is a skill issue
I think my knowledge of c++ has helped with me improving my mathematical and computer science fundamentals, many concepts are very similar to in the literature.
Yeah I know a company that has the same c++ code running from everything from smartcards to mainframes, rust has not nearly the toolchain to support that
yet.
The gap is closing year to year
C++ had 30+ years to develop, so obviously there is a gap to close.
I always hear c /c++ programmers talk about how to properly write code, and never hear them talk about how to create software.
15:17, ancestor is someone long passed away, no longer alive. Antecessor is someone previous in line, like queued or something, still alive.
@@anon_y_mousse Overqualified for life 🤣
20:00 That's not even a positive though... If you have worked on windows, you know that MSVC shouts at you if you use scanf and forces you to use scanf_s instead, however the scanf_s doesn't exist on the GCC and many other distributions. It happens because while scanf_s isn't an ISO standard C++ function, Microsoft can basically do what it wants with MSVC, and get away with it.
GCC is full of shit that isn't in the ISO standard
It's a compiler extension, you can turn it off.
I mean it is out of standards. why use it? Nothing outside the standard is guaranteed by C++ compilers. I haven't used scanf in ages lol.
@@scion911compiler can guarantee whatever they want
@@scion911 It's not about out of standards or inside standards. The thing is, it is part of one compiler but not of other and is allowed. It can happen for other things and does happen.
The article is not bad, but I had to chuckle when the author wrote segmentation faults are not an issue in C++. I once tried to integrate DJI's own lib to extract thermal information from drone pictures and got a crash within 5 minutes, because their SDK developers allocated something with "new" and tried to deallocate the same thing with "free()". Pretty much every single time I try to integrate a third party solution to any of our business apps I can find glaring memory issues right away. We even had obscure cases of libcurl crashing on us in the past. So yeah, big disagree.
The guy misunderstands what security is. On any "normal" desktop, as long as user input is interpreted in any way and there is a network connection, any program such as a simple ls can be exploited to gain privilige.
Also there really is no difference betwen crush bug and a security bug. Even if a crash is caused not by memory misswrite, it means the situation wasnt planned for - and that opens for it being misused.
The program can only be exploited to get the privilege it has itself. Like if you make a single player offline game, what does the user have to get from compromising your program? Maybe they can get around piracy locks but the user doesn't get admin access if they didn't already have it.
Maybe your program can be taken over by a malicious file stupid user got from the internet and loaded, the fault is mainly on the user there. If you install a mod, you have to understand it could break your computer, especially if games allow mods that are compiled libraries and not just scripts.
-fsanitize=address and -fsanitize=thread can catch most memory and concurrency issues in your CI, but you can’t leave them on in production since they consume too much memory and slow down your execution X times. Thus you need great test coverage to stay somewhat safe.
Agree but it need testcase to capture error case. In rust you get as part of compiler.
How tf are pointers difficult?
Is this easy and intuitive for me because C++ was my first "real" programming language?
Heck, C++ makes it MORE clear by explicitly distinguishing between values, pointers, and references, as opposed to Java where a variable is a value or a reference depending on its type.
I think what a lot people don't realise between C++ and Rust is the usefulness of the compiler. I write both languages a huge amount and the compiler errors in C++ are so useless. I basically look at the rough area in the code I need to be looking at and rely on my knowledge of the language because the error messages just aren't useful. Rust actually tells you what is happening
re "array accesses are checked": *kinda* true. For example in iterators, tbey are *not*, because it's known at compile time that it's within bounds.
You want to know how important latency is to trading, The programs run on the processor in the Mellanox network card because the latency between CPU and NIC is too expensive.
std::unique_ptr does not have runtime overhead. std::unique_ptr is essentially Rust’s Box.
std::shared_ptr is reference counted like Rust’s Arc.
Comparing apples to apples, C++ is essentially the same to Rust in this regard.
provided you use the default deleter.
@@khatdubell, unique_ptr with non-default deleter is analogous to Box where Wrapped has custom Drop.
@@mina86 I meat it’s zero overhead if you use the default deleter. I can’t imagine it is if you don’t.
@@khatdubell, if the custom deleter has no state than it’s also zero overhead.
@@khatdubell Actually no, even with a custom deleter on unique_ptr its free. Because its templated, there's no indirection happening at all. There's no function pointer stored or called. The function is written into the type itself and inlined. Templates can really be quite magical. This is the same reason why std::sort is significantly faster than C qsort. No function pointers, no indirection, and no jumps.
I asked a biologist friend and their guess is that antecessor is "the thing that filled this niche before I did"
It's possibly archaic from what I've read and 'predecessor' would be more common.
There is no difference from a unique ptr and a raw initialized pointer with delete being called on destruction. You would have to do something to move a raw pointer to a different class as well.
rust is a language everyone loves, but nobody uses.
Bruh
wtf are you talking about.. Rust is fucking all over
Having multiple layers of IR isn't just a Rust thing. GCC's compilation pipeline for C and C++ involves a "GENERIC" IR that, as I understand it, is equivalent to LLVM IR, and then a more internal IR named GIMPLE which is processed in three distinct stages (High-level GIMPLE, Low-level GIMPLE, and SSA GIMPLE).
You get around a lot of the overhead of shared_ptr's by passing around references to it. This prevents all the reference counting overhead required if you pass it by copy.
... but doesn't that defeat the purpose of a shared pointer?
@@ThePrimeTimeagen Not if you don't need the atomic counter, weak references should only be used when you absolutely need the atomic reference counter.
@@ThePrimeTimeagen No. Sometimes, all you want is the object itself, and don't really care who owns it as long as it lives "long enough" to complete whatever operation you're doing. For example, if you pass the object to a function (and that function won't place a reference to that object into another structure, which should be the case for the vast majority of free functions). In that case, the lifetime of the shared_ptr is guaranteed to exceed whatever lifetime the callee requires, so you can just pass a reference to the shared_ptr (or, as is my preference, to the owned object itself -- I care about what the object is, not the implementation detail of how it's allocated in memory).
I pass a shared_ptr by value only when it makes sense for two entities to own a given object, for example in a concurrent cache. If my cache holds shared_ptrs I get to evict items from it with complete abandon because whoever picked up stuff from the cache will get a shared_ptr which won't get deleted until they're done with it.
A lot of it is also down to what is idiomatic. Rust, seems to me, uses a lot of Arc and Rc where a C++ programmer would just use values instead.
you do the same thing in rust btw
@@ThePrimeTimeagen Depends if its meant to be an owner of it or a thread-safe view. Shared-pointer is multi-owner, its okay to pass around raw-dog pointers as non-owning views of the data. You wouldn't write void foo(const std::shared_ptr k) { std::println("K's value is {}", *k); }, instead prefer void foo(const T*) { std::println("K's value is {}", *k); }. Because you're just taking a peek at the data, even if you were mutating it, you're not taking ownership of it. If you might need to check for nullptr, then you need to do that with the raw pointer and smart pointer either way.
I think this article is essentially arguing that if you’re already a skilled C++ developer then there isn’t a compelling reason to switch to Rust.
If you don't care about the absolute best performance, then don't use C++. Simple.
I have done some speed comparisons between Rust and C++ and C++ wins by around 10% every time. I was comparing Rust with MSVC. When I switched to Clang/LLVM, they were almost identical. That kind of makes sense given that Rust uses the same back-end. MSVC seems to provide better optimizations.
The best abstraction over assembly is and always will be calling functions. My parameters gets passed, my registers get saved, my value is returned, and I don't worry about any of it.
A seasoned C++ professional is just as bad at writing C++ as a junior developer. It's not a question of "how many bugs do I have in my module I just wrote", it's a matter of "how many hidden errors will be in my code after some other programmer fixes a bug in another module/does a branch merge", large C++ codebases are very far from being robust, very hard to fix (e.g. a merge in Perforce messed up the math in the allocator, it made the GPU execute random garbage from the RAM and rewrite some CPU resources in the shared CPU+GPU memory which lead to segfaults in semi-random places in CPU code). I've seen a high-salary Silicon Valley programmers not understanding how to do a multithreading in UI-heavy apps.
I'm yet to see Rust's ability to catch all these problems beforehand myself (at least having a test framework integrated into compiler helps a lot here), but a C++ programmer claiming they don't have problems in their codebase is a joke.
That's a long-winded way of saying you don't know how to write good code.
I don't think there's a language that can help with those kind of issues. As far as c++(or rust or any other language) is concerned, there's no GPU. And don't even get me started on GPU hangs.
It sounds like OSX GPU driver
skill issue
@@khatdubell If we go that route we can assume most of the world doesn't know how to write good code. Statistically speaking, chance of you belonging in that set of programmers is higher than other set. So we are either in the presence of a demigod, your highness, or a talking donkey; decide for yourself...
Inexperienced developers, use of decade-old C++ standards and missing tooling is contributing to C++s bad reputation. Modern C++ (and use of its concepts) combined with good tools is so much better and safer and still very fast.
Can you suggest the "good tools" if you don't mind
C++ is not going to go away, there is too much value in the echo system of tools and libraries.
I like Herb Sutters approach to addressing the problems c++ with cppfront or "cpp2" which is more of a refactor of the c++ syntax which automatically defaults to the "c++ best practices" than a new language.
Would be great if his proposal is merged with C++. Really made a lot of sense.
Unique pointer is basically just a typesafe RAII move-only pointer with an optional custom deleter.
I find the argument at 9:00 very tired. Just because raw array access are bounds checked doesn’t mean it is always slower to use rust.
1. Most of the time you don’t use the square brackets because there are zero cost iterators.
2. LLVM often removes bounds checks because programs (including non rust programs) are often written where it is impossible to have an out of range index.
3. Bounds checks actually have very little runtime cost in 2023 because unless your programs frequently read out of bounds the cpu branch predictor will ignore the branch %100 of the time
There's a blog post out there by a guy who tried to follow a textbook about interpreters in rust and, even after some herculean optimization efforts, was still lagging behind C-like performance by double percentage digits, so I find it hard to believe that bounds checks are that irrelevant.
C++ STL has zero-cost abstractions almost everywhere, and it's THE founding principle around its design.
What that other guy basically is saying, that C++ is better because it does rely on undefined behavior to a greater extent than Rust (or to be more exact: the language implementors like CLang and GCC are optimizing more aggressively than it could be done in Rust bc of UB). That cannot be a good thing! I've seen examples, where GCC cares about signed integer overflow when in debug and "optimizes" that away whn in -O3. You don't want to have such an erratic behavior!
No you haven't. You've seen cases where the bug you introduced via producing UB wasn't made apparent prior to the compiler actually doing its optimisations (which are based on the assumption that you have not written a program that will allow UB to occur).
This is not erratic - its entirely predictable that UB bugs are often only apparent after full optinisation.
Henrique wrote the article and antecessor is a common word in spanish speaking places for 'predecessor'.
16:00, pointers are easy to understand (it really is just an address), pointer syntax on the other hand, that needs some work......
yes and it's especially annoying considering how simple it would be to remove the arrow syntax (->). The compiler already knows if things are pointers. It doesn't need the arrow. It could have used dot syntax for everything from the beginning!
@@sciencedude22 This is actually what D already does.
It's not just an address. void pointers are just addresses. Normal pointers are addresses to a certain type. The second part is just as important because it shows how dereference works.
Idk, I was using rust for a while and thought I liked it over cpp, but then I realized I was wrong as I got more and more frustrated with how it wasn’t meeting my performance expectations and how much friction it felt like there was with the compiler when writing it
Exactly my experience. Switched to C++ after 4 years of Rust.
@KayOScode
I am sure you work at Nasa and you search ultimate frontier or real time operating system which can kill people if error happens... Like Airbus embedded system ?
No kidding you are produce useless programs in a giant entreprise which produce useless programs ....
@@jumpman120k well newsflash bud NASA isnt the only place programmers work.
Yep, this is the reason why the adoption stagnates even though there is this incredible push by a small community. They never mention actual downsides, they either hide them or they have no idea.
I find that people who dismiss Rust's safety advantages as "I'm too smart/good/careful to ever need safety" are just dumb lol, it's the same line of reasoning that would do away with access modifiers since "I would simply not reference private fields/methods", real life has you sharing a codebase with the dumbest/careless/hackiest individuals you'll ever meet and that includes yourself from two months ago, any feature that prevents an idiot from hurting others or themselves in the 99.99% of cases in exchange for a theoretical 00.001% performance hit that only happens in some scenarios is inherently good. Any environment that doesn't have you blindly troubleshooting in complete dark for no good reason other than "This is an old language, we can't change that now" is inherently superior. While there are instances where C++ is the better choice, the truth is, for nearly every application someone may ever need to develop, Rust will be easier and less time consuming to build with both short and long term.
Although shared pointer have an overhead in C++, it generally only hurts in loops and thus get getting the raw pointer before entering the loop solves pretty much all performance issues.
I haven’t had a segfault or stackoverflow in C/C++ in ages, either. Already helps when you enable all the warnings and set the compiler to pedantic checking and solve each message.
Now memory leaks I’ve had but Rust doesn’t prevent memory leaks either. You just need to profile your application and see that happen and solve it.
@@anon_y_mousse you can always get memory leaks. In any language. A typical situation is a map to an object pointer. Don’t say that the object should be in the map because there are many reason when you need a non-owning pointer. You do have to manage both deleting the pointer from the list and delete the object. It’s easy to overlook to do one or the other.
And there are many more cases where a little oversight can cause a memory leak and you should therefore always profile your memory, don’t trust yourself!
Now I do agree that especially since C++17 a lot of things to shoot yourself in the foot is taken care off. And I adore the newer versions.
@@anon_y_mousse I just don’t understand where the combining C with C++ had bearing on in my reply? Did I miss something?
@@anon_y_mousse "Also, if you're getting memory leaks in C++, then you're writing your code wrong." Well yeah that's how memory leaks _usually_ happen and it's not unique to any language. No language or compiler can check for memory leaks deeper than you can see with syntax, because it can occur in various states, at various times in various cases, that are not trivial for a compiler, let alone a human, to spot.
"Idiomatic C++ should never have memory leaks", what does "idiomatic" mean? Do you have reference for every possible idiom that makes it ideomatic for every possible problem you may encounter? Idiomatic is a loose definition for using a _syntax_ of a language, not for eliminating things like memory leaks. Iterators, which are arguably idiomatic in Rust (and C++) doesn't solve every problem, whether you use C++ or Rust.
@@CallousCoder That's not a leak, that's a lifetime issue. It's still a problem, arguably a more serious one, but it's not a leak. I agree with the poster above that idiomatic c++ should never leak.
@@isodoubIet the effect is a memory leak, memory keeps leaking away to the point the user gets an OOM. And those things won’t be governed by any idiomatics. And they are very real.
I’m not arguing that RAII doesn’t significantly reduce the risk of memory leaks with 24 years ago when I started C++ they are for more rare for sure, but they will not be circumvent all memory “non return” (let’s call it that then) issues. That’s what the original poster sounds like they mean. You cannot be sure unless you test drive the application extensively and profile the memory as there can be memory leaks for sure.
Take Chrome for example 🤣any sufficiently complex 3D application has/had memory leaks galore. From AutoCAD, Maya, to Blender to Modo I had them crash with OOM errors.
Oh, but there are languages that never crash!
A cop out example would be anything without runtime exceptions and the has its standard library built around only total functions. So indexing into a list would be `(List, Nat) -> Option`.
This is a bad answer, because you will still have to consider apparently impossible cases (like assertion violations) and do _something_ then. The most sensible choice being recording the program state so you can debug it later.
An actual example would be languages that let you show to the compiler a case is impossible. So instead of indexing into a list you could use `(Vect, Fin) -> T`. This is a function that takes a list of a given length, and than a finite number - between 0 and that length.
What's surprising, it is possible to statically typecheck this. That is, without knowing the concrete value of n the compiler can verify your program uses this function correctly. The big problem is such languages are niche and generally lack libraries and tooling.
that* has
I suppose functional languages come close to the hypothetical language you were talking about, only adding things like monads in the part of the program that has to interface with the outside world..
*there are languages that force you to turn runtime errors into logical errors, or give up on the program ever running
@@Lttlemoi he's not talking about a hypothetical language. dependently typed languages exist and let you do this. they just aren't very useful for writing programs. but they are still cool. My favorite language in this family is F*
I don't think it's fair to say all array accesses are checked at runtime. If you use an iterator, they aren't, at least that's my understanding. If you're writing an algorithm that will make random access, has hot path perf requirements and can know it's safe to subscript, use the _unchecked variant, easy. I don't get why everyone assumes that experts should not use unsafe{} in any circumstance. If they'd otherwise use C++ to get the same perf, it's still strictly better to annotate that behavior and get memory safety by default in 90% of your code.
I can't think of a case where using modern C++ has ever resulted in losing memory safety that wouldn't have needed to be lost in Rust anyway. Every time I've ever run into this its specifically because I used C over relying on C++.
@@Spartan322
Gladly, the world is larger than your imagination.
@@diadetediotedio6918 Memory safety is trivial in C++, if you use RAII for memory, especially with smart pointers and containers, you won't run into memory safety issues, if you use the C++17 and C++20 type-erasure types like optional, any, and variant, and if you include ref_wrapper, you can do pretty much everything without needing to deal with memory at all.
As a C++ expert, I can only agree with this article. Except that I do *not* like Rust.
i hate rust cuz it has stupid syntax even harder than c++ ; at least in c++ you can use struct and functions without metaprogramming and any defines etc etc ... but rust cannod do that it makes you to use tons of ampersand single quites arrows and buch of moronic stuff which is time consuming and boring . evne vlang seems much better than rust Xd mozila they themselves are using llvm which is written in c++ yet they are saying rust is better lolz with glitching syntax lolz at least they could make it easier why all those boring ampersands single quotes arrows and etc etc
am i with you rust is a bad tool
C++ iterators are also zero-cost by design. They messed up with ranges somewhat in that regard, not because the abstractions have overhead, the c++20 ranges abstraction is also zero-overhead. But it prevents you from writing optimal code in a lot of cases because the nature of how the chaining works discards information about the underlying structure which makes the optimization a lot more difficult for the compiler in some cases.
So this might be more a matter that the "plethora of compiler options" just have made them not zero cost?
@@thekwoka4707 Ah, no. It's not about compiler flags. C++ iterators are the thing almost everything uses and those are completely fine. Ranges are built on top of iterators as well. (Well, ranges build on the newer iterator concept that works with a sentinel tags but that is besides the point.) One example of a typical C++20 Ranges problem is that if you have a sized range (which means the length is known) then this sizedness gets lost as soon as you chain any kind of range operation. This size property of the range can not propagate through the chaining pipeline in C++ and that is why its almost impossible to optimize those as effectively as they can for the traditional iterator loops. I think the plan was for compilers to be able to optimize this reliably but it turned out this is not actually possible. If the compiler can inline the entire thing then it can probably optimize it down to zero cost but you simply don't know if your compiler will actually be able to do that and you have to check by hand for every single case.
Getting into these kinds of articles its easy to read it as a "X IS SHIT!" "STOP USING X, Y IS BETTER" kind of thing but that's actually not what this one is like. He's "why >I< think ..." and the conclusion is just alike! I have to give him that! So in his situation: keep going dude!
But on the other hand I'd say it's also a dangerous take: Most people are not that pro and low skill peeps might take it as an invitation to keep implementing serious stuff in possibly dangerous ways.
Finally, someone brought up that "safety" is not a high priority for every project!
I agree. Although I've seen youtube video where Herb Sutter mentioned how the government put out a report specifically calling out C++ as a unsafe language and that it should be avoided. (Unfortunately I can't remember where that video is now.) So I guess the standards committee might still have to take safety seriously, maybe more seriously than it really should need to.
Finally? This has been the defacto standart since software development started, why do you think we have such a big mess nowadays with safety?
@@diadetediotedio6918 Yeah, it's crazy people are complaining that safety is a high priority for most projects. In fact, safety isn't prioritised nearly enough. I hate this "move fast and break things" attitude, to be honest.
For usual/simple programs at least, Rust is so much easier to use and setup/compile. And everyone else can easily compile my program too with `cargo build --release`. That's it. And, Rust comes with guardrails builtin, which yes can be time-consuming to work with, but helps noobs write safe code by default. And it feels like alot more is caught upfront at compile-time, and I don't need to install special compilers or other software to catch them (nor pass special flags to the compiler). My only gripe with Rust is the macro system. MBE's quickly become an unreadable mess and can be too limited in certain cases, while proc macros require too much setup/code and an extra crate (why?).
His take on crashes is weird to me. I don't particularly think 'Well ~I~ don't write bugs so whats the deal?'
Speaking as someone writing C++ professionally for the past 5 years, hiring young C++ devs is an absolute nightmare. Segfaults everywhere, stack overflows, abusing the type system, etc. He's right that Rust wouldn't FIX this, but it definitely would help. If we could ensure that only C++ experts could write C++ code, I generally like C++ more. However, the number of C++ experts is dwindling, as the language is reaching the heavens in terms of complexity, and the number of Rust experts are increasing. Idk, I'd rather hire the Rust devs than the C++ one.
Speaking as someone who has been writing C++ for the past 25 years, you *are* a young C++ dev. 😉
I think "trees" should be arrays where parent/child indexes can be calculated with MATH. First off this is probably faster than basically maximizing cache misses by jumping around memory, but also makes it super easy to insure minimum possible height
The age of microservices should favor C++.
The toughest challenges come from working on large C++ projects. The same goes for C.
Do you have an opinion on which one of those scales better with increasing project size?
I'm not sure but wouldn't there already such an overhead in latency that you might as well just use a higher level, safe, simple language? But maybe you could orchestrate at the process level, not containers, but then we're just going back closer to metal anyway and the micorservice is just the smaller scaling unit conceptually.
C++ do not brings any benefit. Extremely hard to learn and use it well. Then your super fast microservice is packaged into a docker image, uploaded into kubernetes where it runs in a pod.
Acording to jetbrains survey from 2021, 8% of people who work on microservices use C++ and C. Most of them use java, javascript, python. Can you trust this survey? Duno
13:24 that's not "fair" at all, you can't just say "oh, of course you could write the *correct* thing, but i pretend to be stupid just to make my point"
My fast take on Rust vs C++.
They are stairs that goes to thousand steps into the sky.
Rust's staircase has firm and strong handrails and also provide you with a harness, whereas C++ it might have handrails, but you have no harness.
How much can you trust yourself to not f- it up, and fall, and if you're in a team, everyone tied together, how much can you trust everyone from not falling?
Maybe the team aspect of your question is what's really important here. Folks programming in C++ might be writing < 1 memory/concurrency bug / X LoC, but what if the compiler could bring everyone to that level or 0 bugs at all, even if it's sometimes annoying to "fight" it? Frankly I'd prefer the latter even if I *think* I have reasonable competency to produce code that does not have memory/concurrency bugs in C++ (at least not by the time I've identified and fixed them with tests).
Rust dev here.
First of all, undefined behavior is non local. Yes, the compiler optimized that part with /2*2, however, the Standard says the whole program is affected. The compiler can literally do some doomed tricks like executing code both inside if(a>0) and else. Thats not what you want in your trading platform.
No, memory accesses in Rust arrays are not constantly checked. Due to MIR optimizations the compiler may check it only once or even remove the check at all.
No, with Rust one does not pay safety overhead. These runtime checks are done in the same places where C++ programmers would need to insert them. They just don’t. Which results in undefined behavior, angry clients and money loss.
You were easily tricked by his flawed logic into promoting his article to 250k people. Congrats 🎉