And they are correct. Performance and energy efficiency are not significant considerations for the vast majority of programs, and shouldn't be. If we wrote most software in C we wouldn't get very much done. A program that does what you want and has the features you want is vastly more valuable than one that is energy- and RAM-efficient but isn't finished, or lacks half the features you want because the programmer is too busy hand-optimizing the assembly. There is definitely a place for C, Rust, Go, etc., but they are not replacing JavaScript as the most used language any time soon, and rightly so.
@@TimothyWhiteheadzm If we wrote most software in C we would have libraries/frameworks for everything, and the programmer wouldn't be too busy hand-optimizing the assembly.
Considering all Windows 10 versions come with a sort of .NET Framework 4 preinstalled... it makes you wonder what if the whole OS were written in .NET C#, or C/C++, except for the few EXEs that have to run on bare metal, like the kernel.
@@werthorne True. But some of them are just C library wrappers. If you want to write a loop in Python you should just use numpy (or a different language).
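A rough sketch of the gap this comment is describing: the same reduction done as a pure-Python loop versus numpy's C-backed sum. Exact ratios vary by machine, and numpy is treated as optional here since it may not be installed.

```python
import timeit

data = list(range(100_000))

def py_sum(xs):
    # Explicit Python-level loop: every iteration runs through the interpreter.
    total = 0
    for x in xs:
        total += x
    return total

loop_time = timeit.timeit(lambda: py_sum(data), number=20)
print(f"pure-Python loop: {loop_time:.4f}s")

try:
    import numpy as np  # optional; skipped if numpy isn't installed
    arr = np.array(data)
    np_time = timeit.timeit(lambda: arr.sum(), number=20)
    print(f"numpy sum:        {np_time:.4f}s")
except ImportError:
    pass
```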
I think there's far less cause to be concerned about the energy requirements of developing software versus actually running it as a user. A developer might run a test, for example, that mimics the actions (and energy use) of thousands of users. That's still an efficient practice.
The reason people are able to get by with Python is because the Python libraries that are computationally expensive are written mostly in C or C++. Well, that and the whole "scalability" where people run 100 servers in a cluster to do the same thing 1 server could do if the application was well designed.
My first program was a multithreaded file optimizer script written in MS batch, where the actual optimizing was carried out by the worker script calling an external application. Even there it was very obvious how much the interpreted language acting as a coordinator slowed things down. I see this every time Python is brought up ("but the actual libraries are C..."), and meh.
@@superfoxbat As glue, on the order of 1% of the program's CPU cycles are used by it, while the other 99% are used by C. If you got rid of it, the maximum performance benefit would be 1%, even assuming your replacement language is infinitely faster.
Python is pushed by academics that don't really program but do statistics or something else instead, lol, and by people that want to lazily use other people's libraries. Also bootstrappers for GPU-driven AI these days. But that's what Python was supposed to be: a scripting language, not something to make entire programs in. That's the fucking problem.
a) All rather academic use cases, and b) nobody in their right mind uses pure Python for heavy lifting. You use libraries... which are written in C(++).
In reality, Python programs outside of extremely narrow use cases (inferencing, training, any trivial array programming problem) spend most of their time in plain Python.
Academics do. I'm not even joking. We do. Are we in our right mind? 🤔Some are, others aren't. Clearly I am, otherwise I wouldn't be subscribed to Lunduke.
@@microcolonel Those narrow use cases are also the most common Python use cases. Anything other than that is probably a CLI app where performance doesn't really matter. Sure, you can write a backend server using Django or something, but Python provides little to no benefit there compared to any typical backend language like C# or JS, and there are great performance downsides.
@@Flackon That would be Zig, I'm pretty sure. They are already blazing fast, and the Zig team isn't blowing itself up with internal politics, so they'll likely still be around in 40 years. 😉 Man, 54 years for a language to stand firm and be the core language for many kernels is an achievement.
It's nothing to do with Ritchie. It's all about the compiler, 40 years back all the assembly guys were whining that C compilers were terribly unoptimized.
"Back in the day" when I programmed only on VAX/VMS, one day my boss came over all excited and made me come to his office. We often wrote our programs in VAX BASIC because it was just so darn powerful, but of course even back then the same concerns arose regarding memory, execution speed, etc. So he wrote a quick program that basically just counted to 100,000,000 or something like that, printing the start time/end time. For the sake of argument (it has been 30+ years), BASIC took 20 seconds to run, COBOL took 15, Pascal took 7 and C took 1.5... but the REAL shocker was FORTRAN... it completed in 0 seconds. So we looked at the machine code generated by the compiler and found that FORTRAN was so smart it optimized out everything but the final value. :D
...Also, who wrote the programs for each test type? Different languages are stronger in different ways, and algorithms can often be implemented in a fashion that takes advantage of the strengths of a given language. You know what I mean?
I had this thought as well. I doubt it would make any interpreted languages faster than their compiled cousins but doing things in idiomatic ways can make a big difference.
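One way to see the "idiomatic ways" point within a single language: CPython's built-in sum() runs its loop in C, so it typically beats an explicit Python-level loop by a wide margin. The timings below are only illustrative; absolute numbers depend on the machine.

```python
import timeit

N = 100_000

def manual_sum():
    # Non-idiomatic: the accumulation loop executes in the interpreter.
    total = 0
    for i in range(N):
        total += i
    return total

manual_t = timeit.timeit(manual_sum, number=50)
builtin_t = timeit.timeit(lambda: sum(range(N)), number=50)  # idiomatic
print(f"manual loop: {manual_t:.4f}s  built-in sum(): {builtin_t:.4f}s")
```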
Something as simple as "count to 100,000,000,000" is not great for a benchmark; there's a good possibility that it ends up just running at whatever the clock speed of your CPU is.
It's the degradation of the education system. Programming in C and other older languages is hard to do well. It takes time to learn and many will never become really good at it. Many of the newer languages have greatly reduced learning curves, but that is paid for in other ways and this illustrates those ways.
Well, good thing that all you need to do decent C is to be okay at it and know how to read, since the spec literally tells you which operations are unsafe.
Corporations didn't degrade the education system; if anything, this is a symptom of enterprise programming and practices. Teaching how computers work is not their primary goal; their goal is faster onboarding, which requires easy-to-understand programming languages and practices. Object-oriented programming was the first step: it made it possible to hire engineers at never-before-seen rates globally.
@@Nicholas-nu9jx C is easy to learn but incredibly hard to master. Writing a resilient and sufficiently complex piece of software in C is objectively not simple.
I'd say C/C++ is not that hard for what you get in return. You learn a language that the whole world can share and expand, and that has been the case for decades. You get a language that can achieve anything. You get a language that every processor is optimized to run, and thus the fastest non-assembly language. And the great part about C++ specifically is that it is pretty much C. Class inheritance is just embedded structs. Polymorphism is just a function pointer table. Etc... C++ is just all the things people normally do in a large C program project, packaged up to make it easier, so it's going to perform mostly the same unless you deliberately write a C program that avoids all that for as-close-to-assembly performance as possible. And yeah, you can do that. What we need are AI tools that can take these other slow languages and convert them to C/C++. Then you can just tweak out any mistakes and have a functionally equivalent natively compiled application that runs better. The real question is how libraries are translated, because a scripting language eventually reaches natively compiled libraries in the interpreter or virtual machine, so it's not a simple button-click conversion. The goal would be to have a language that is super fast to write, but that in the compilation process becomes equivalent to a hyper-efficient C program.
In the real world, Python is just business logic code that does so few computations that it's really insignificant how slow it is. Once anything remotely computationally expensive is required, C-backed libraries such as numpy, tensorflow, pytorch, etc. are used. So this is a really silly test. No one is going to use pure Python to list the 1,000,000th digit of pi or break the highest-prime-found record. That's just not what the language is for or how it's used.
Yes and no. The problem is that some people are oblivious to proper use cases and inefficiencies, or at least _how_ inefficient it can be (e.g., maybe they think it's 5x less efficient at a task instead of 50x). Because of this it ends up slowly creeping into other work over time like a weed unless it's put in its place with reality checks like this. Most programmers are smart, but there are a ton of well-educated, highly knowledgeable non-smart people who will still make mistakes like this.
@@jeffreygordon7194 Ya, after reading all the comments on this video I gathered that. I haven't learned Python myself, but I've read some code. I like the dynamic variable declarations, and the language reminds me of BASIC.
I'm no expert, so feel free to chime in and add corrections. My understanding is that adding abstraction slows down the computer. The problem with the "everyone should use C" argument based on power and time is that it can equally be applied to people who code in assembly. We can go further: why not just code directly in binary if you want max speed and power savings? You won't need linkers or compilers. The obvious answer is that we want some level of convenience. The best language balances convenience with power savings and efficiency. But once again, the more convenient a language is, the more abstracted from machine code it is, and in general the less efficient it is. The other question posed: will there be a faster language than C? Possibly, but it's unlikely, since C is a simple language that's fairly close to machine code. You'd need to get something closer to assembly, like Fortran or something. I use C++ myself and am happy with it even if it's not the best. Anyway, that's my 2 cents.
Not everyone, and not every time, should use C; always use the right tool for the job. But when it comes to systems programming, C is still king and will continue to be so.
Adding abstractions does not "slow down" anything. You literally cannot write a meaningful program without abstractions. Every time you write a function, you are writing an abstraction. Compile options matter greatly and C has its own abstractions
@@jshowao Right, all languages have abstractions. I was trying to use the term "abstraction" to refer to the convenience of languages. The more a language resembles regular English and/or the more bells and whistles it has, typically the more the compiler has to do to get it to line up with machine code.
@@haroldcruz8550 Right, I'm not here to say C is bad. I was poking Lunduke's argumentation a bit. It's a fine language and I'm glad people are still learning it.
Gotta keep in mind that even though Python itself may be extremely slow compared to something like C, a lot of the compute-intensive Python libraries (think numpy, pytorch, etc.) actually make use of native C / CPython extension libraries behind the scenes, so the Python code itself doesn't really do a whole lot of computation in these scenarios and is only used to interface with the more complex parts of a library.
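A minimal sketch of the wrapper pattern described above, using the stdlib's ctypes: Python loads a compiled C library and only marshals arguments across, while the actual work runs as native code. Here we call libc's strlen, so the byte-scanning loop never touches the interpreter (POSIX systems assumed; the library lookup is platform-dependent).

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (e.g. libc.so.6 on glibc Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The length computation happens entirely in native code.
print(libc.strlen(b"hello, world"))  # 12
```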
Those are Python libraries, so it is Python. What you are stating would be like saying "this specific C library on this specific platform is written in raw assembly, so using those libraries is not really C." Obviously, everything in C is converted to assembly. Similarly with Python: everything in Python is eventually run natively through the Python interpreter. A better description would be that the Python functions in numpy tend to run closer to the native hardware. The implementation could easily change.
@@projectnemesi5950 That is not the same thing _at all_ . Python is a single threaded, garbage collected language. The instructions you write in Python are executed by an interpreter, which itself performs translations (and probably some heuristics, as well as garbage collection, just-in-time compilations, etc.) _at runtime_ , whereas in compiled languages like C or Rust, this heavy lifting is done in advance, at compile time. So yeah, it does make a tangible difference if a library is merely exposing an API that is making function calls to compiled shared objects in the background, as opposed to _also_ being interpreted at runtime.
@@projectnemesi5950 I'm not sure I understand what you're saying. Are you saying that numpy is written in Python? From looking at the source, it seems to be mostly C++ or C.
I mean, nobody is using Python or something like it for high-performance applications; all of that is offloaded to an actually fast language. Python for a "script you may run a few times a day that completes in 50ms" probably uses less electricity over its lifetime than your brain would burn sitting down and spending the time to write it in C.
Wow you people still just don't want to admit that Python sucks... just learn AWK and be done with it. AWK is fast, easy and efficient, far easier and less complex than Python. And for data science, just learn R.
@@CaptainMarkoRamius That's correct, because R's package ecosystem is phenomenally comprehensive and well written. It's like comparing a children's tricycle to a spaceship with a gravity distortion drive.
@@dercooney I won't fight, but I would like to know how many C programs you're ported from one system to another. OS portability? Computer architecture portability? Floating point portability?
@@quintrankid8045 not many, but to answer your question, it's portable in the sense that you don't have to rewrite the whole thing for a new computer. prior to C, you'd do that, but with C and unix, you have source code that works across different revisions of an OS and possible cpu arch. but it doesn't offer much abstraction
@@dercooney Debatable. Two things come to mind, floating point portability (I recall this was a problem for some science app using Python and Python uses native floating point,) and I ran across a guy on the web somewhere who claimed his CHAR_BITS was 48. I guess it's a bad idea to write code with these sorts of dependencies, but then, doesn't asm always deal with dependencies?
If you do a simple search for ranking programming languages by energy efficiency it shows you what you are looking for. As you start typing it in, it even auto fills it.
@@wernerviehhauser94 never had that issue. It’s called learning how to program I guess ;) When it comes to debugging it’s always business logic for me that is the challenge, never memory management and pointer management. But that’s probably a generational thing, as we grew up doing assembly and C and as a result learned how to manage memory and own the hardware. If you haven’t grown up doing it, it maybe challenging.
Reptiles can only regulate their body temperature by moving to where they can warm themselves up or cool themselves down. They are optimistic about finding cold spots.
Hypothetically speaking, Java had much time invested in its compiler/JVM development from highly talented developers from early 1990s. Which is probably why it is so high on the list. I also agree with your point that there is no investment in compilers developments due to how much speed in hw we currently have compared with early 1990s. Which is why the recent languages don't perform as well.
Thanks for pointing this out. I know a certain switch company that, using Linux and C++ for their operating software, decided to code "non-critical functions" in Python. They were pretty good at it, incorporating the Python interpreter in the OS and getting pretty good at translating between the languages. After a while, they announced a new policy, namely going back and rewriting python programs in C/C++. This was due to customer complaints about how sluggish the OS was. The moral of the story was that, Python wasn't wasting time in any critical functions. It was wasting time *everywhere*, and general slowness was the result.
People say Python is fast to develop in, because you can write less boiler plate, but it becomes really slow when you have to start over in a good language.
How is JavaScript just 6x slower but TypeScript 46x? TypeScript is transpiled to JavaScript before execution. Are they including the transpilation time? (That'd be a bit like including compilation time of compiled languages.)
Rust never claimed to be faster than C... It claims to help to write safer software with more creature comfort than C without being noticeable slower. And honestly looking at those charts they've done good job. Only 4% that's really impressive. Also Lua JIT is missing, which shouldn't if you include TypeScript.
@@cajonesalt0191 Programmers are inherently lazy. wasting 4% to make sure they don't screw up because of how lazy they are is an acceptable trade-off. That being said, I'm not certain, but I think we could minimize that 4% further by changing pc architecture.
@@cajonesalt0191 Yeah, but since it replaces the current node/TypeScript garbage it is still a huge improvement. Now, if all the unnecessary cloud nonsense, including the network traffic, would be gone we could decommission additional power plants.
@@cajonesalt0191 Rust is only slightly slower in synthetic benchmarks in a lab setting. Compare real world C projects to Rust projects, and the difference is often Rust being 100-500% more efficient than C.
@@ThomasCpp Including all of the processing time would be correct for scripting languages. Javascript can be converter into an internal format to gain speed too.
I think everyone agrees its performance is nearly identical to C. Some cases it may be slower, some it may be faster, but it's at the point where it's hard to compare.
I missed zig too, they probably didn't include it because it isn't used much yet. I think that zig is very promising but it needs more libraries and all that.
I can make the comparison especially between C, Zig, Python and maybe Rust depending on the benchmark. Which of the benchmarks do you think are the most important? Give me the top 5!
I agree that the Free Pascal compiler is a real gem. It also has the advantage of taking in a language that is fairly easy to compile. I will point out that the software development times don't seem to be getting faster with the newer languages and also these languages that are supposed to defend you against errors seem to be used to write a lot of buggy code.
That's due to Scrum making every issue take multiples of two weeks, and only hiring the cheapest devs, because "more is better" even though that makes the useless Scrum meetings take even longer.
As a user of many langs, I think pascal is a great combination of speed and easy to debug, plus being RAD. For me, easily the most overlooked and underrated.
This was a reply I made but it deserves to be a top-level comment. Ultimately, Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I think that it's overused. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Wholeheartedly agree. The beauty of Python comes from being able to pump out a script to do a simple, seldom-repeated task in an hour or so. Python is a scripting language and should be treated as such. There is no reason to be spending the time creating a website back-end in a Python framework like Django when the benefits of such a "simple syntax" are so heavily outweighed by the drastic loss in performance. You will eventually just have to rewrite the whole thing in a more efficient language or stomach the increased operational costs anyways. I'm a relatively new programmer, but for things I intend to have running 24/7, I much, much prefer slow development over slow performance. Migrating things I originally wrote in Python when younger to Rust has allowed me to do so much more with my raspberry pi micro-homelab than I originally thought would have been possible with the hardware.
Well that's the big problem, isn't it? Python was created to be a more robust alternative to bash when bash isn't quite enough for what you're trying to do. Javascript was meant to be a simple language for simple front-end tasks in a web browser. Now most the world is running on languages that were never meant to be doing what they're doing.
Ts doesn't transpire to js the way js is jit processed. It adds tons of js you wouldn't need if it had been written in js to begin with. Python is mostly used as a wrapper language, most of the work happens in libraries that are written in c.
when you create a NewLanguage that in some place faster or use less memory than C, it only means that C can also benefit from the same speed or memory optimization and will be also faster and/or use less memory when this trick will be implemented in the next version of C compiler. but without overhead of NewLanguage features. so probably there will be no NewLanguage that is faster and more memory efficient than C. and it only means that you can decide how much efficiency you are ready to pay for the new features. if you want a more efficient language than C, go to assembly language, but nobody now wants to do it.
10:40 is it me or since c is about memory manipulations and everything is directly handled by the developer, it should actually still win on the ram benchmark? programming differences.. so if the code was totally the same on both language c would probably still win on memory
C++ does not need to be slower than C. Makes you wonder what exactly they did, for example did they write a custom algorithm in C, for some task rather than using a generic one from C++. In that case the study is nonsense.
I have been using pascal for over 40 years and forever looking for alternatives and no other language has come close for me and I have tried all the main competitors. I now use Lazarus Free Pascal to write fast GUI applications for Windows 11, MacOS and my favourite Arch Linux. Well done Brian, I am with you on this study and agree that these factors should be taken into account. About time a good modern C language should be developed, or an improved modern C++ developed on these criteria.
The thing about C is that it's pretty much as close to writing in assembly as it gets. Minus the compiler optimizations, the generated assembly is as "literal" as it gets. In my opinion there's no need to "reinvent" C.
The price of abstraction is sometimes worth it. I'm a C-zealot myself but I understand the appeal of languages that make life easier. Some people cannot handle the amount of power you're given as a C-programmer, and that's fine. However, I'm still conflicted as I recognize this "slacktitude" is contributing to a slower, more unreliable software ecosystem.
thing is back in the day we used to use Python to prototype things and then once it was figured out we'd write the real thing in C++, not happening anymore
@@Zuranthusin some numerical work it's not even good for prototyping as its too slow even for test problems unless you heavily use numba, cython, jax, cupy etc, at which point it wont be much easier to develop than c++.
@@hawkanonymous2610 And again I'll point out that unless you're a newbie you'll either have written your own libraries or know which ones to use to accomplish tasks equally as fast as programming in Python. It's just a matter of your experience level.
@@transcendtientC programmers say C is the best for everything and if someone criticizes C their only response is “sKiLL IsSuE”. I would know, I write C at my job and I know a lot of those types of people.
I started programming in about 1983 in BASIC. My first compiled language was FORTRAN. I was always impressed how much faster compiled languages were, but I never thought about power efficiency. I started programming in Python several years ago, and I was amazed that an interpreted language seemed instantaneous, but I never thought about how the indiscernible slower speed would consume so much more power. Interesting.
Every programming languages has to make certain trade offs because we don’t live in a perfect world. The developers of the Python language wanted to prioritise development time and ease of code maintenance over execution time and efficiency. That’s a perfectly reasonable trade off to make in a lot of cases. For cases where that isn’t desirable there’s C or Rust or whatever. What’s the problem?
Java is a good language. A good language is one used by a lot of people to accomplish a lot of tasks. That is all that really matters. Speed is a side effect of language optimization because people actually use it. That is why c, c++, java, rust, ect.. are all the best performers of the study. And its the reason everyone is criticizing python for performing as poorly as it does despite popular adoption. I think the reason for this is most of pythons popularity are due to a few powerful libraries and prototyping. So you will see a ton of numpy scripts, or opencv, ect..
Compilers do a better job at optimizing machine code than hand optimized assembly, so there really isn't a point in doing that. Naive assembly will look roughly like what -O0 will emit. To improve that you'd start by adding optimizations but that's actually exactly what a C compiler does when you use -O1 -O2 -O3, so really C is just a way of automating the process of writing assembly.
@@josephp.3341 Depends on what you're writing, but some compilers are not as good as they should be at optimizing poorly written code. Even just the difference between calling a function with two arguments versus a struct with two members can cause wildly different results.
@@josephp.3341 I see this repeated a lot, but it's not really true: compilers are written by humans and the techniques compilers use are the ones humans used to use (excluding a few new peephole optimizations and whatnot). It's all about time, it takes a lot of time for a humans to repeatedly try inlining a few function calls, optimizing the result and finally figuring out whether that was worth it or whether it's best to undo all of that work. Let alone keeping all of that maintainable. Edit: Also, -O0 is (for most compilers) much worse than naive because it's too systematic, for example a human would tend to use more registers at once, use assumptions the compiler isn't allowed to consider and do simple optimizations like "call somewhere ret" -> "goto somewhere"
I'd love to see something like this with different versions of C compilers. C is the language for efficiency, so a lot of work was put into C compilers to make that even better.
According to one benchmark i've seen, oksh compiled with cproc, a very small compiler which is only 17479 lines of code (including code for it's backend), is only 1.35 slower than if you would compile it with gcc or clang. Oksh is probably not a good benchmark for code, but i bet with almost any c compiler you'll get higher performance than something like go or C#.
Most people doing python are not making complex maths stuff. They're creating scripts to iterate over a CSV file or an API that grabs some data from a db and sends it as json, and even if they're doing more chances are python is calling a c library trying the heavy lifting.
@@JodyBruchon And not to mention, academic work! I see Python heavily used by lab scientists are Stanford studying things way beyond my comprehension (like comparisons and data analysis involving zebrafish spines). It works well for them. Pytorch is also another thing academics love. So, once again: there's pros and cons to everything. It may be slower, but the trade off is convenience and familiarity within their tribe (lab). Just food for thought!
Pretty sure the default json module that comes with Python is written in pure Python for various reasons, portability being the big one. Please check this though, I could be wrong. But what I know for sure: there are several third-party JSON modules that defer to Ctypes or Cython which perform a lot better, but lack all the bells and whistles the pure Python one has.
@@koitsu2013 Indeed. Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I do think that it's overused, though. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Interesting that Pascal is slower than Chapel but uses less electricity, which suggests that whatever Pascal compiler they are using isn't using the processor as intensively hence the speed difference. So an implementation of the language rather than something inherently slow in the language by design.
Pascal is C are basically equivalent beyond syntax. The only difference there is compiler. I write modern industrial Win32 GPU graphics software in Pascal.
@@LTPottenger As both C and Pascal put raw data into memory and don't wrap it in some kind of object structure, that also is pure compiler logic. My guess is on not aligning record structures to 4/8 bytes. This makes the access slower with modern CPUs but uses less memory.
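The padding-vs-memory trade-off guessed at above can be demonstrated even from Python's stdlib struct module, where "@" applies native (C-compiler-style) alignment and "=" packs fields with no padding. The sizes shown assume a typical platform with a 4-byte C int.

```python
import struct

# One 1-byte field (B) followed by one 4-byte field (I):
aligned = struct.calcsize("@BI")  # native alignment: typically 8 (3 pad bytes)
packed = struct.calcsize("=BI")   # no padding: 5 bytes, slower unaligned access
print(aligned, packed)
```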
It's because all the postdocs who spent their years researching algorithms and math have an easier time dealing with the abstraction layers that Python affords. It's easier to spend time thinking about the problem domain when you aren't simultaneously obliged to think about type safety and footguns.
So according to this study, if I take a C program and compile it with a C++ compiler (with no, or very minor, alterations), the program will become 1.5 times slower? I hope I don't have to explain why this claim is silly and casts a shadow over the whole research.
As far as I can tell, the applications tested are meant to make heavy use of the CPU. I am wondering about the actual impact with a normal application that is mostly waiting for I/O, e.g., using an application written in X vs V, how much delta in energy cost would accumulate over a year.
So, it makes you think. Our Linux systems often use like 0.5GB of RAM and have a lot of Python apps in them. So what runs in Windows, since it takes 5GB of RAM and is filled with C/C++? > puts on a tinfoil hat
Even worse, when you consider the applications running on these languages. OpenStack (written in Python) is running in containers on top of OpenShift (written in Python). So interpreted code on top of interpreted code on top of interpreted code.
Rust being faster and less power-consuming than C++ was mildly surprising to me. I have a slight negative Rust bias, but that seems like an extremely small cost for more automated memory safety.
@@anon_y_mousse It is better than C++ for memory safety - the compiler catches bugs that most people never do. It has a smarter compiler so you can use your cognition within the problem domain, instead of wasting brain cells figuring out what the compiler already knows better than you. C++ people are upset they're not the best in town anymore.
Rust is exactly as fast as C according to the most up-to-date results: energy-efficiency-languages/updated-functional-results-2020. Everything the guy in the video said about Rust is outdated by the same study.
Many of Rust's guarantees are build-time rather than runtime, but I am also suspicious of that. In fact, having C++ be slower than C seems like a bug either in the implementation or the compiler. Templates are build-time, and member functions are basically just normal C functions that take a this pointer as the first argument. Unless they are using dynamic dispatch or something, I can't think of any reason the C++ version should be slower.
We already have Cython, which does something similar. While widespread, it is not that impressive as a bare speedup over pure Python because of Python syntax, and achieving really big speedups requires massive rewriting with obscure annotations, which require considerable knowledge of how memory works, similar to C (sometimes it's just easier to rewrite the function in C). I do not see how Mojo will benefit the general Pythonista.
Software is going backwards in performance because it started out at the bare metal, where you HAD to understand the architecture. All this abstraction to hide the underlying hardware is where all this inefficiency comes from. Optimizing compilers are grossly overrated, and I have been saying this since the first version of C++. Most libraries are junk, and again, all comes back to dumb lazy programmers who don't want to spend the time understanding the underlying hardware. Compilers can't get better because they would need to anticipate every single possible line of code, or blocks of code, or every algo that could possibly be written, which is impossible. We need to get back to KNOWING THE HARDWARE, and KNOWING WHAT WE ARE REALLY DOING. There is no other replacement. Convenience ALWAYS comes at a price.
It's a mutual relationship. The software must know the hardware, and the hardware must know the software. Processors today are optimized for C programs. Everything is human-made. The entire point of a programming language is to make things work and do it in a way that everyone can understand. Just like a spoken language, a "good" spoken language is one that is heavily used.
In defense of Lua, the 100% interpreted version is very slow. But LuaJIT is more on par with C# and C++ in some little benchmark testing I was running a while back. Also, Zig is newer and is supposed to be faster and more efficient.
One thing I do take objection to with regards to their test is the use of trees. The implementations can vary to the point of degrading performance a noticeable amount. I didn't see if they used hash tables too, but those can be excessively different in implementation as well. I primarily use C myself, not for environmental reasons but because I want to save my users time when they use my software. I'm not surprised about Python, because I use it fairly often too, but I am surprised about JavaScript. Yeah, I use that on occasion and it's slower than molasses in a freezer, but I would've expected that all the work that has gone into optimizing the various interpreters would mean it performed better than it did. At least now there's some degree of quantification of how much the other languages suck.
A few observations: 1) Ada is still largely ignored even though it's actually a joy to write in. 2) They did not include Forth, which is telling. I have little to no doubt that Forth would either tie or beat C in at least one of these metrics. The catch is, it sucks trying to read the code several months later - but that's not what this study is testing for. 😉 3) If Pascal received the amount of mindshare that C has received, it probably would beat C in all metrics. Ada is heavily influenced by Pascal, so you can get an idea of what Pascal would be able to do (it's just that Ada is to Pascal what C++ is to C - in terms of size and complexity).
Not surprising, seeing how, despite computers becoming more powerful, because of all the bloat and inefficient coding nothing feels all that different in performance than 15-20 years ago. Also, I am not sure how many of the background processes are that useful. It's really annoying that something like a YouTube page needs I don't know how many gigs of RAM just for a normal HD video... this tab right now uses 1.1GB... that is insane.
Me too, because my friend at SMHI (a meteorological institute) says that their old FORTRAN programs run circles around their new Java reimplementations. Problem is that they can't find FORTRAN programmers anymore.
I did not read the fine article referenced, but are these the programming languages that consume most of the CPU time of the world? Zig and Odin are not really consuming many cycles globally. Web browsing and video rendering are the most used applications for end users, I'd guess. Libraries are doing the CPU-cycle work, and they are written in C/C++/C#, mostly? And the GPU helps, so there is more research to do.
5:57 I’ve actually done a ton of embedded language testing. Compare LuaJIT vs the others and you’ll be SHOCKED by how fast it is. The “regular” lua engine (not LuaJIT) is only optimized for portability. The performance is as weak as you imply.
Well, python would've looked a whole lot faster too if they used pypy, which is about as competent as luajit. I'm only suspicious about Lua having been slower than python in their tests. Lua can be as slow as python in some things, but in my experience it generally isn't.
I will say, Java and C# have actually been putting in investment towards speed and efficiency. They just started with the handicap of being managed languages, which sometimes is a useful price to pay. Rust is even similar to that in a way. Every runtime check you add for safety and consistency at the language level is going to slow things down. C doesn't care and trusts the dev did that checking at programming time so there's nothing holding it back (from exploding at spectacular speed, sometimes)
Inferior error checking and many other types of issues can conspire to cause more development cycles, potentially far exceeding whatever time and energy savings are achieved by the final working application.
Some thoughts...
* Compilers and runtimes are written by programmers, so it's not surprising that they'd optimize for developer convenience
* Companies want to optimize for feature output -- a proxy for developer time, which (currently!) is the long pole for costs
* As a user of a rented Azure Kubernetes cluster, I'm *painfully* aware of C# and its insane desire to hold all memory at all costs
End of the day, it all makes sense!
I'm very surprised that Java outperforms Go in energy and time? How is that even possible? I understand there's a GC in Go, but Java has a WHOLE VIRTUAL MACHINE, which is a lot of C code by itself. Then the standard library, then the application. Meanwhile Go compiles to native code with an embedded GC. And Java still has A HEAVIER GC, heavier class runtime structures, a heavier runtime? Why are the first entries all rounded to 1? 1 megabyte, 1 joule, 1 ms. Also, Lua is way simpler than Python. How on earth does Go use less memory than C if it has the same GC? This article is so BS honestly, just for C fanboys. No, I'm not pooing on C, it's an awesome language and Python is indeed slow, but this article got something seriously wrong.
I would like to use C or C++ for more, but for rapid development, for systems that change just as rapidly, JITted languages are a healthy alternative (C#, Java, etc). As a C# dev, I think this chart can be misleading. Run a service for days and you'll notice initial memory usage seems high, but compared to other languages and frameworks it does as well or better than the majority. Ruby, Lua, Perl, Python are fine for scripts, but honestly they suck for real production perf. As a language, I can't stand Ruby. It's awful. RoR is a bag of hot doodoo.
Last year, I developed a simple genetic algorithm library in pure Python called bluegenes largely as an exercise. I then spent a couple of days translating it to Go and got a 100x increase in performance. I then did some Go-specific memory optimization and got a further 100x increase in performance. It blew my mind.
inb4 someone tries to incorporate transportation, man hours, and calorie consumption relative to error handling and memory safety, into some metric. Ackshtually Rust is the best if you look at it like [blah....]
Rust avoids many costly bugs that can occur in less safe languages. It might even have a formal prover like Spark Ada, which was designed for mission-critical applications like weapons and satellites.
@@CTimmerman Some of that is legitimate, some of it is just to avoid errors made by people who don't belong doing what they're doing, and who should never have been incentivized, coerced, and drawn into those fields. I know how that sounds, but if I went into the story of how I learned to program and the frame of life I was in - which basically amounts to constant whole-body pain, fasting for days at a time, inability to sleep, and functional brain damage - and the fact that I'm not only self-taught but came out of it having sought out and knowing how the machine works down to the RAS and CAS signals sent to the memory controller, I can bluntly say most people don't belong anywhere near software. They just don't. They can't do recursion, they never bothered with (inline) assembly, the idea of cache misses and data locality is like a whoooaaaaa mind-blown moment several years into their career; they just learned Java and TypeScript or whatever and never bothered to learn data structures and how the machine actually works. I mean, look, you can argue that it's a manpower and volume issue, so you architect tools and frameworks to keep all the normies on the rails, so to speak. And at those organization and society scales you can make a case for that. But the fact remains, everything I stated is true. These people are not programmers, and they don't belong doing it either. Rather, writing software is a surrogate: it gives them identity and money, and so they do it in order to get the money. This churns out mediocre "bare minimum" self-serving types, and then equally incompetent layers of management that have to try to channel that writhing malformed mass, that self-serving blind beast, into doing something other than devouring itself. The whole mindset is wrong. This fixation on memory safety is partly quality-of-life improvements, and partly damage control. Mostly, damage control.
For people who barely care what they're doing, because they're a product of the Prussian educational model, and due to the media exposure in early childhood [...]. I can say it, I lived it. That was my late teens and twenties, wasted being tortured. And even there, what I created was the best. The best. No corners cut, no sloppy hypersocialized water-cooler mentality. No, the best. I take what is, and I make what ought. Those are the types of people you should either seek out, or rework aspects of society and human development in order to manufacture. Those who can't or won't don't belong, and shouldn't be pandered to. They need the boot. Get the hell out of here, buddy.
I'm deeply skeptical. These large scale comparisons are extremely difficult, as the quality of the implementation is huge factor. Naively implementing a particular algorithm in each language, when no "skilled user" of that language would do it that way, is a sure way to create very misleading results. C and C++ having significant variation is a clear indication of this - written properly, C++ is virtually performance indistinguishable from C. I think the paper is asking the right questions, but I'm not gonna be waving it around saying "we must switch to C" (as much as I would be okay with that, personally...)
Written properly for C++ (in regards to performance) just means never using templates, never using virtual methods, and basically never including any header from the standard library. So basically just write C, but instead of functions where the first argument is a pointer to a struct, that struct can be a class with the function as a method, and that's it. Oh, and maybe you can use range-based for loops and std::vector.
Looks like they used benchmarks game. So it's optimized but unidiomatic code ...
By the way, there was a (I guess Canadian) student challenge in engineering to build satellites and program them. Some company launched them on a rocket to reach orbit. Ada might not be the fastest and most energy-efficient, but it is very close. The satellite programmed in Ada is the only one still working in orbit. Now think about that. C is nice, but ...
LuaJIT tends to be about 100% slower (i.e., taking 2x the time) compared to optimized programs in compiled languages. It even seems to be somewhat behind the Java VM, though it does appear to hold pace with JavaScript's V8 engine (source: Mike Pall's scimark code vs. equivalents created by others in Rust, C++, etc.). IME, that's about the difference between competently-but-lazily-written C++ and optimized-by-experts C++, so not a bad result overall. The best results are probably achieved by combining the FFI with optimized native libraries, while minimizing context switches to get the most out of both the JIT compiler and the heuristics used by LLVM/GCC. LuaJIT without native libraries doesn't make sense, so it's not useful to benchmark interpreted code that should have been put in a native library to begin with. And the default interpreter is so slow that talking about its performance is effectively a complete waste of time, especially since it doesn't have an FFI and its C-interoperability layer incurs very high overhead costs. Still better than CPython of course, but the value of Python lies in its vast ecosystem. Lua doesn't even try to compete, with its minimalist do-it-yourself philosophy.
This is a myth. Luajit is very competent, but it can never be as fast as C. Luajit doesn't statically compile Lua. It compiles it where possible, and still does plain interpretation where not. Code with string concatenations, for example, will run at normal Lua speed (slow). However, in this regard, pypy is very much comparable to Luajit. If they included Luajit, they would've also included pypy. I don't buy the part where they made lua look slower than python, though. That's not been my practical experience with the two. Not at all. There are things where lua can be as slow as python, but for the most part, it's not.
If you want a language that makes you care about memory, efficiency and speed, Zig is a good option to check out. More ergonomic and readable than C anyway!
The fact that Pascal is not tied with C for 1, or indistinguishably close, only illuminates the imperfection of their method. But yes, compared to the BS modern languages, this video is great. Long live Pascal and C, death to the modern bloated slow BS.
I would be interested to see how Zig ranks in a test like this. I see Java doing its thing taking quadruple the RAM it should but being blazingly fast for a garbage collected language. What stands out most to me as someone just getting into Node is typescript being _substantially_ heavier than vanilla JS.
A 75x slowdown is wildly optimistic. Compared to finetuned, optimized C or assembly, the slowdown factor is easily in the thousands. But it doesn't matter, because Python is mostly used for business logic or as a glue script, with the bulk of the task done in well optimized libraries written in C or other languages. Most software written in Python would not actually benefit from a big speedup if it were rewritten in C (some would, because many programmers are unfortunately not aware of the performance implications of what they do and would happily write in Python things that really should be part of a C library).
When you realize that every function call and every operator requires a hash table lookup in a scripting language (where the code model can change between two calls and nothing can be optimized), you will be surprised. Writing a compression algorithm shows you it's more like 300x slower (did this in 2005). By the way, 300 times is just what you get when an instruction has an L2 cache miss. And scripting languages have terrible memory locality.
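The per-operator dispatch described above is easy to make visible in CPython. A minimal sketch, with a hypothetical `Chatty` class for illustration: every `+` involving a user-defined type is resolved through a dynamic `__add__` lookup at run time, the kind of work a C compiler resolves once, statically. (Plain ints take a C fast path, but the dynamic dispatch machinery is still what makes that fast path necessary.)

```python
class Chatty(int):
    """An int subclass that counts how often '+' dispatches to __add__."""
    calls = 0

    def __add__(self, other):
        # Every "+" on this type goes through a run-time method lookup.
        Chatty.calls += 1
        return Chatty(int(self) + int(other))

total = Chatty(0)
for i in range(5):
    total = total + Chatty(i)  # one dynamic dispatch per iteration

assert int(total) == 10   # 0 + 0 + 1 + 2 + 3 + 4
assert Chatty.calls == 5  # "+" was a method call every single time
```

A compiled language would lower each of those additions to a single machine instruction; here each one is a dictionary lookup plus a Python-level call.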
@@metaforest I suggest it almost isn't even a skill issue. There are some quite simple rules you can follow even if you don't have a lot of skill and you can avoid memory leaks.
Unless you are talking about very limited resources, the difference between Rust and C in memory usage is insignificant. It's all just the right tool for the job.
Lua isn't that slow. I don't buy it for a second. In my experience, pure Lua is quite faster than pure Python, and internally it's a lot less bloated than python, with a whole lot less sanity checking overhead, etc. I'm finding that study suspicious.
I had a similar idea a while ago while debating whether or not performance mattered. It might not matter for your application itself, but making something 5x faster means you can get by with about 1/5th the computing power you'd otherwise need. That means either less powerful hardware or fewer replicas, meaning lower cost. If that can be achieved simply by moving from Node.js to Java, Go or Zig, it's a good idea (especially because no one deserves the pain of doing JS on the backend, so everybody wins).
Well, JavaScript that's compiled from TypeScript looks nothing like hand-written JavaScript. I guess the huge performance losses are mostly due to automatic polyfills of even slightly modern JavaScript features.
@@perguto That also means that when you write software in clean TypeScript, it gets performance improvements for free when browsers/NodeJS gain better capabilities. Just upgrade NodeJS/the browser, change the JS target to compile to, build again, and boom, you get more performance.
If you use modern TS features, but target a really low ES version, the output is horrendous. If you target modern engines with a recent ES version, the JS output looks almost exactly the same - just the types are stripped out.
A study from Portugal. Nice! I'm surprised with how Rust compares with C, but that was years ago. I think, since the Rust specification evolved, a new comparison should be done.
WTF, why is C++ 50% slower? How? I can compile C code as C++ and in some rare cases the code will be FASTER (one YouTuber compiled the original DOOM as C++ and got a 1% speedup). Same with TypeScript: how is it worse than JS??? I would not trust this research much.
@@Sneg00vik Yes, we have the safer `std::array` and `std::span` instead of `int[]`, and `std::unique_ptr` instead of `T*` - and now where is this 50% overhead? We have RTTI and exceptions, which are heavy, but if you're not using them directly and have a serious workload (i.e., startup is only a fraction of the time), you should not be able to notice it. Of course we could create bloated code that will be slow, but then are we testing bad code or C++? We could probably create equally bad code in C. The only place where C++ is always worse is compile time, where all the overloads and templates can make the compiler glow red. But then, if we include that in the cost, how on earth is Python only 70x worse? On a very big program, Python could finish the workload before C or C++ finish compiling - not to mention Rust, which has similarly bad compilation times to C++.
Yup. C++ uses the same exact compiler. Maybe they used specific C++ features, like vtables and RAII? I know the Common Lisp code isn't the usual code; it uses typed Lisp, which is basically C/C++ code but with a worse compiler.
Memory usage is kind of misleading. The energy graphs show one reason why; the other is that if you have a 100MB executable in C, then it's a 100.5MB executable in Rust. There's a static overhead for many languages but no overhead in the actual code.
There's also compiler optimization you're not taking into account. Rust and C do not generate identical assembly code, so the memory usage won't be the same.
@@asandax6 And Rust apps are almost always blobs with all dependencies compiled in. That provides more flexibility, since there is no pressure to have a stable ABI limiting progress (also a lot of code is generated via macros), but at the expense of no reuse of loaded modules across many instances of an app and increased compilation times.
@@JanuszKrysztofiak That's what I hate about Rust. It compiles with bloat. A stable ABI is important and saves memory space (which every first-world developer calls cheap, not realizing the huge number of people who can't afford phones with more than 64GB of storage or laptops with more than a 128GB SSD).
@@asandax6 Yeah, RAM is no problem when you're talking megabytes. There's no reason not to pre-allocate a couple of megabytes these days; RAM is abundant on computers now. I pre-allocate tons of memory on the heap for easier memory design (allowing for growth) all the time, because everyone has it anyway. Anyone who can run a web browser, with all the bloat in those today, is still EASILY within range of my program; I'd have to allocate hundreds of megabytes to even come close.
these studies need to take account of the brain's food consumption during days spent finding which libraries of which versions are in which repositories of Ubuntu
This is pretty cool. I know the authors did some splitting of the results by compiled / virtualized / interpreted, but I think what would be even more insightful would be to break down the languages by feature (e.g., with or without run-time reflection, with or without dynamic types, with or without garbage collection, with or without strict-aliasing rules, etc.). Basically, beyond the d-measuring contest between languages, it would be nice to measure the cost that each feature adds, to begin (finally!) to do some sober cost-benefit analysis. For example, I suspect Python pays a huge cost for having to translate most attribute or method calls into a run-time reflection lookup, for the marginal convenience of occasionally being able to write some pretty nifty code.
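That run-time reflection cost is visible in plain CPython: ordinary attribute access is a dictionary lookup on the instance, not a fixed struct offset the compiler could bake in. A small sketch, with a hypothetical `Point` class:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)

# Attribute access is a run-time dict lookup, not a compiled-in offset:
assert p.__dict__ == {"x": 1, "y": 2}
assert getattr(p, "x") == p.x  # p.x goes through the same lookup machinery

# Attributes can even be added after the fact - flexible, but it's exactly
# why the interpreter can't resolve accesses statically:
p.__dict__["z"] = 3
assert p.z == 3
```

CPython's `__slots__` is the opt-out that trades this flexibility back for a fixed layout, which is one concrete data point for the feature-by-feature cost accounting the comment asks for.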
There are two definitions of speed: how fast does the program run, and how quickly can you get the job done. Often, 71x slower means it runs for a minute, while C runs for a second or so. Cool. However, if you need half an hour more to write the thing in C, and the program is basically something that does number crunching and writes it down into a database or a file, you basically wasted 20 minutes to save one. Sure, if it has to run frequently, and especially if a user has to wait for the result, by all means rewrite it in C.
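The trade-off in that comment reduces to a tiny break-even formula. A sketch using the comment's own hypothetical numbers (30 extra minutes of development to turn a 60 s run into a 1 s run):

```python
def break_even_runs(extra_dev_time_s, slow_runtime_s, fast_runtime_s):
    # Number of runs after which the faster rewrite has repaid its dev cost.
    per_run_saving = slow_runtime_s - fast_runtime_s
    return extra_dev_time_s / per_run_saving

# 30 minutes of extra C development vs. a 60 s script that C runs in 1 s:
runs = break_even_runs(30 * 60, 60, 1)
print(f"break-even after ~{runs:.1f} runs")
```

So the rewrite only pays off once the program has run about 30 times - which is why "runs frequently" and "a user is waiting" are the right triggers for rewriting.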
And yet again I find myself pointing out that newbies will not be competent in using C and will have an increased development time. This is who Python is aimed at. With more experience comes using either your own libraries or those that you have learned to use and development time is reduced. Go ahead and ask your mother to write something in C and offer no help.
In re Python... yeah. I feel a little guilty whenever I code in Python, but for a one-time script, or some infrequent audit script, it's so quick and easy to get something spun up.
You are not thinking about the economics behind Python right. The energy (money equals energy consumed with that money) you've saved in engineer time by using Python is WAY higher than developing everything in C.
Say you can learn Python more than 70-80 times as fast as C. Which seems like an overestimation, imho. That still does not mean people who know both Python and C well, can write the same script 70-80 times faster in Python than in C.
For competent developers, development time isn't nearly so big a factor as people think. Generally, if you've been in the business for any length of time, then you've either built up your own libraries for doing things or you use specific libraries that someone else wrote. It's only for the newbies that the language's standard library really matters in most instances.
Python is 1,000x faster than using a bunch of Excel sheets and 100,000x cheaper than hiring a bunch of extra accountants and business analysts to manually re-compile a "database" from unstructured data. And the use case for basic business applications still heavily favors high-level languages like Python for most straightforward use cases (and if you need to do stochastic simulations then yeah the libraries are written in C).
These tests show nothing as we know nothing about the actual code and compilation. Maybe it's just shitty code with disabled optimizations. In some sense that would make it more realistic :D
I went ahead and took a look at the code that they have published, and I am not impressed, and you're right that we don't know how they've compiled it. As a C# developer I know that C# can compile to native code, if you instruct the compiler to compile for a certain target platform, whereas Java cannot do so without going through its VM first.
@@anon_y_mousse Funny that you say I am spreading whatever "FUD" is, when the first thing you write about is a compiler that is discontinued, absolute tool.
@@chralexNET And yet, you dishpit, it still exists, still works and isn't the only example of a native compiler. Since you claim to not know what FUD means, you're either a liar and I can expect more BS arguments from you, or you were born this decade and don't know squat. If you want to make the absolute zygote argument that it won't handle the latest version of Java, I'll inform you that it's an open source project and could be picked up by anyone at any time but that yet again, it's not the only one and others are still being worked on.
What I think is that the benchmark would be a very strong indicator of whether your language is compatible with how you solve the problem. Either way, the paper would be very interesting for instrumenting your solutions.
Tech companies: programmer hours are expensive. Energy usage and RAM consumption, that's the customer's problem.
The planned obsolescence and mountains of e-waste, that's Pakistan's problem
*server farms left a message*
And they are correct. Performance and energy efficiency are not a significant consideration for the vast majority of programs and shouldn't be. If we wrote most software in C we wouldn't get very much done. A program that does what you want and has the features you want is vastly more valuable than a program that is energy and RAM efficient but isn't finished or lacks half the features you want because the programmer is too busy hand optimizing the assembly. There is definitely a place for C, Rust, Go etc but they are not replacing Javascript any time soon as the most used language and rightly so.
@@TimothyWhiteheadzm the market forces are part of the self-destruct mode. Also your false dichotomy is false.
@@TimothyWhiteheadzm If we wrote most software in C, we would have libraries/frameworks for everything, and the programmer wouldn't be too busy hand-optimizing the assembly.
I did not C that coming. 😂
C la vie
🤣
HolyC still the king.
HolyC performance so tremendous the study was afraid to include it.
Because thou shall not take the name of HolyC in vain...
There's a reason the glowies had to eliminate the creator of HolyC.
AMEN!! 🙏
TempleOS might be coming out soon FYI...🎉
Only those who are chosen can program in HolyC
holyc is bloated c
Microsoft and Intel will now collaborate to rewrite Windows in Lua.
Considering all Windows 10 versions come with a sort of .NET Framework 4 preinstalled... makes you wonder if the whole OS was written in .NET C#, or C/C++, except for a few EXEs which have to run on bare metal, like the kernel.
Yes please make this happen🙏
They should do it in rust so they can brag about doing it in rust. :|
@@BDaltonYoung Have you heard of TRACTOR?
Meanwhile the energy used to help the programmer with A"I" is dusting all the energy spent running the result.
most of the libraries are in python
@@werthorne True. But some of them are just C library wrappers. If you want to write a loop in Python, you should just use numpy (or a different language).
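The numpy advice generalizes: anything that moves the loop itself out of the Python interpreter and into C wins. A toy sketch using only the standard library (the builtin `sum` iterates in C, the same way numpy's array operations do):

```python
def py_sum(n):
    # The interpreter pays bytecode-dispatch overhead on every iteration.
    total = 0
    for i in range(n):
        total += i
    return total

def c_sum(n):
    # The builtin sum() runs its loop inside the C runtime instead.
    return sum(range(n))

# Same answer either way; only the cost per iteration differs.
assert py_sum(1_000_000) == c_sum(1_000_000)
```

With numpy the gap widens further, since the data itself is also stored as a packed C array rather than as boxed Python objects.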
Correct!!! With this logic everyone should walk all the time, but accessibility should count for something.
I think there's far less cause to be concerned about the energy requirements of developing software versus actually running it as a user. A developer might run a test, for example, that mimics the actions (and energy use) of thousands of users. That's still an efficient practice.
@@jeffreygordon7194 Except that most of our creations won't ever see thousands of users 🙂
The reason people are able to get by with Python is because the Python libraries that are computationally expensive are written mostly in C or C++. Well, that and the whole "scalability" where people run 100 servers in a cluster to do the same thing 1 server could do if the application was well designed.
My first program was a multithreaded file-optimizer script written in MS batch, where the actual optimizing was carried out by the worker script calling an external application. Even there it was very obvious how much the interpreted language acting as a coordinator slowed things down. I see this every time Python is brought up - "but the actual libraries are C..." - and meh.
Python is best when it is used as a glue to assemble various C components. Keep it simple and let C do what it does best under the covers.
python is the most expensive glue on earth
@@superfoxbat As glue, on the order of 1% of the program's CPU cycles are used by it, while the other 99% are used by C. If you got rid of it, the maximum performance benefit would be 1%, assuming your replacement language is infinitely faster.
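That argument is just Amdahl's law. A quick sketch of the arithmetic (the 1% glue fraction is the comment's example, not a measured figure):

```python
def overall_speedup(glue_fraction, glue_speedup):
    # Amdahl's law: only the glue portion of the runtime gets faster;
    # the rest of the program runs at its original speed.
    return 1.0 / ((1.0 - glue_fraction) + glue_fraction / glue_speedup)

# If Python glue accounts for 1% of cycles, even an effectively infinitely
# fast replacement caps the total win at about 1.01x:
print(overall_speedup(0.01, 1e12))
```

The same formula shows why the numbers flip when Python is *not* just glue: at 50% of cycles, a mere 2x speedup of that half already yields 1.33x overall.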
python is pushed by academics that don't really program but do statistics or somethin else instead lol
people that want to lazily use other people's libraries
also bootstrappers for GPU-driven AI these days, but that's what python was supposed to be, a scripting language, not something to make entire programs in
that's the fucking problem
a.) all rather academic usecases and
b.) nobody in their right mind uses pure python for heavy lifting. you use libraries.. which are written in C(++).
This was my first thought. Why reinvent the wheel, when perfectly optimized libraries exist?
In reality, Python programs outside of extremely narrow use cases (inferencing, training, any trivial array programming problem) spend most of their time in plain Python.
Academics do. I'm not even joking. We do. Are we in our right mind? 🤔Some are, others aren't. Clearly I am, otherwise I wouldn't be subscribed to Lunduke.
@@microcolonel those narrow use cases are also the most common python use cases. anything other than that is probably a cli app where performance doesn't really matter. sure, you can write a backend server using django or something but python provides little to no benefits there compared to any typical backend language like c# or js and there are great performance downsides.
@@jjtt stuff like Django and Flask is 99% of global Python running right now.
More than C being the best language, it's the C compiler being the best compiler.
There is no single C compiler. There are probably a few hundred as it's been taught for a long time in compilation courses.
Fortran has the best compilers because there is no pointer aliasing in Fortran, so the compiler can do optimizations C compilers need help with.
Once Rust has 40 years of compiler optimizations, let's see which one runs better...
@@Flackon That would be Zig I’m pretty sure. They are now already blazing fast.
And the Zig team isn't blowing itself up with internal politics, so they'll likely still be around in 40 years. 😉 Man, 54 years for a language to stand firm and be the core language for many kernels is an achievement.
@@Flackon Rust is unlikely to survive that long.
All hail Dennis Ritchie and his 50+ year old language that trumps all except assembly.
It's nothing to do with Ritchie; it's all about the compiler. 40 years back, all the assembly guys were whining that C compilers were terribly unoptimized.
And even if it's the same assembler it has to be reoptimized for every new generation of every vendor.
Doesn't trump Pascal, which is older.
I loved Pascal, and still have a soft spot for it.
Heh. Lisp. More expressive, often compiled.
"Back in the day" when I programmed only on VAX/VMS, one day my boss came over all excited and made me come to his office. We often wrote our programs in VAX BASIC because it was just so darn powerful, but of course even back then the same concerns arose regarding memory, execution speed, etc. So he wrote a quick program that basically just counted to 100,000,000 or something like that, printing the start time and end time. For the sake of argument (it has been 30+ years), BASIC took 20 seconds to run, COBOL took 15, Pascal took 7 and C took 1.5... but the REAL shocker was FORTRAN... it completed in 0 seconds. So we looked at the machine code generated by the compiler and found that FORTRAN was so smart it optimized out everything but the final value. :D
...Also, who wrote the programs for each test type? Different languages are stronger in different ways, and algorithms can often be implemented in a fashion that takes advantage of the strengths of that language. You know what I mean?
And not that anyone cares, but Ada (Ada 83) is my favorite language. :)
I had this thought as well. I doubt it would make any interpreted languages faster than their compiled cousins but doing things in idiomatic ways can make a big difference.
something as simple as "count to 100,000,000,000" is not great for a benchmark, there's a good possibility that it ends up just running at whatever the clockspeed of your CPU is
@@steve55619 For a good compiler that possibility should be close to 100%.
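For what it's worth, the toy "count to N" benchmark from the story above is easy to reproduce in Python; unlike the FORTRAN compiler in the anecdote, CPython does no dead-code elimination, so the loop genuinely runs every iteration (N is scaled down here, and the timing is machine-dependent):

```python
import time

# Scaled down from the 100,000,000 in the story so it finishes quickly.
N = 10_000_000

start = time.perf_counter()
count = 0
while count < N:
    count += 1  # every increment goes through the bytecode interpreter
elapsed = time.perf_counter() - start

# CPython performs no loop elimination: all N increments really execute.
print(f"counted to {count:,} in {elapsed:.2f}s")
```

A C compiler at -O2 would typically fold this whole loop into a constant, which is exactly why trivial counting loops make poor cross-language benchmarks.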
It's the degradation of the education system. Programming in C and other older languages is hard to do well. It takes time to learn and many will never become really good at it. Many of the newer languages have greatly reduced learning curves, but that is paid for in other ways and this illustrates those ways.
Well, good thing that all you need to do decent C is to be okay at it and know how to read the spec to learn which operations are unsafe, as it literally tells you.
Agreed but C is not hard. It's very simple.
Corporations aren't running a "degrade the education system" scheme; if anything this is a symptom of enterprise programming and practices. Teaching how computers work is not their primary goal; their goal is faster onboarding, which requires easy-to-understand programming languages and practices. Object-oriented programming was the first step. It made it possible to hire engineers at never-before-seen rates globally.
@@Nicholas-nu9jx C is easy to learn but incredibly hard to master. Writing a resilient and sufficiently complex piece of software in C is objectively not simple.
I'd say C/C++ is not that hard for what you get in return. You learn a language that the whole world can share and expand, and that has been the case for decades. You get a language that can achieve anything. You get a language that every processor is optimized to run, and thus the fastest non-assembly language. And the great part about C++ specifically is that it is pretty much C. Class inheritance is just embedded structs. Polymorphism is just a function pointer table. Etc... C++ is just all the things people normally do in a large C project, packaged up to make it easier, so it's going to perform mostly the same unless you deliberately write a C program that avoids all that for as close to assembly performance as possible. And yeah, you can do that.
What we need are AI tools that can take these other slow languages and convert them to C/C++. Then you could just tweak out any mistakes and have a functionally equivalent natively compiled application that runs better. The real question is how libraries get translated, because a scripting language eventually reaches natively compiled libraries in the interpreter or virtual machine, so it's not a simple one-click conversion.
The goal would be a language that is super fast to write, but that compiles down to the equivalent of a hyper-efficient C program.
In the real world, Python is just business logic code that does so few computations that it's really insignificant how slow it is. Once anything remotely computationally expensive is required, C libraries such as numpy, tensorflow, pytorch etc. are used. So this is a really silly test. No one is going to use pure Python to list the 1,000,000th digit of pi or break the highest-prime-found record. That's just not what the language is for or how it's used.
Thanks for making this rather obvious point.
Yes and no. The problem is that some people are oblivious to proper use cases and inefficiencies, or at least to _how_ inefficient it can be (e.g. maybe they think it's 5x less efficient at a task instead of 50x). Because of this it ends up slowly creeping into other work over time like a weed unless it's put in its place with reality checks like this.
Most programmers are smart, but there are a ton of well-educated, highly knowledgeable, non-smart people who will still make mistakes like this.
I see data analysts using Python all the time to crunch large data sets. They have no idea how to code in C.
@@microdesigns2000 you don't need to code in C to use C bindings in Python. The library does it for you.
@@jeffreygordon7194 Yeah, after reading all the comments on this video I gathered that. I haven't learned Python myself, but I've read some code. I like the dynamic variable declarations, and the language reminds me of BASIC.
I'm no expert, so feel free to chime in and add corrections. My understanding is that adding abstraction slows down the computer. The problem with the "everyone should use C" argument based on power and time is that it can equally be applied to people who code in assembly. We can go further: why not just code directly in binary if you want maximum speed and power savings? You won't need linkers or compilers. The obvious answer is that we want some level of convenience. The best language balances convenience with power savings and efficiency.
But once again, the more convenient a language is, the more abstract from computer code it is, and the less efficient in general it is.
The other question posed, will there be a faster language than C made? Possibly, but unlikely since it's a simplistic language that's fairly close to matching computer code. You'd need to get something closer to assembly like Fortran or something. I use C++ myself and am happy with it even if it's not the best. Anyway, that's my 2 cents.
Best 2 cents I have read so far. Bang for the buck 😂
Not everyone, and not every time, should use C; always use the right tool for the job. But when it comes to systems programming, C is still king and will continue to be so.
Adding abstractions does not "slow down" anything. You literally cannot write a meaningful program without abstractions. Every time you write a function, you are writing an abstraction. Compile options matter greatly and C has its own abstractions
@@jshowao Right, all languages have abstractions; I was using the term 'abstraction' to refer to the convenience of languages. The more the language resembles regular English and/or the more bells and whistles it has, typically the more work the compiler has to do to get it to line up with machine code.
@@haroldcruz8550 Right, I'm not here to say C is bad. I was poking Lunduke's argumentation a bit. It's a fine language and I'm glad people are still learning it.
Year 2024, people are still shocked to learn about the performance difference everyone has been talking about since 1998.
Gotta keep in mind that even though Python itself may be extremely slow compared to something like C, a lot of the compute-intensive Python libraries (think numpy, pytorch, etc.) actually use native C/Cython code behind the scenes, so the Python code itself doesn't really do a whole lot of computation in these scenarios and is only used to interface with the more complex parts of a library.
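The point above is easy to demonstrate with a quick timing sketch comparing a pure-Python loop against the same reduction done inside NumPy's compiled internals (this assumes `numpy` is installed; the exact ratio varies by machine):

```python
import time
import numpy as np

N = 2_000_000
data = np.arange(N, dtype=np.int64)
values = data.tolist()  # plain Python ints, converted outside the timed region

# Pure-Python loop: every iteration is dispatched by the bytecode interpreter.
start = time.perf_counter()
total = 0
for x in values:
    total += x
py_secs = time.perf_counter() - start

# numpy.sum: one Python-level call; the loop itself runs in compiled C.
start = time.perf_counter()
np_total = int(data.sum())
np_secs = time.perf_counter() - start

print(f"pure Python: {py_secs:.4f}s, numpy: {np_secs:.4f}s")
```

Both compute the same sum; the difference is purely in who executes the loop, which is why "Python" benchmarks that stay inside numpy look nothing like benchmarks of the interpreter itself.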
Not to mention doing work on GPUs, which is increasingly common these days. I’d rather use Python for setting up a GPU job than C.
Those are Python libraries, so it is Python. What you're stating would be like saying "this specific C library on this specific platform is written in raw assembly, so using those libraries is not really C." Obviously, everything in C is converted to assembly. Similarly with Python: everything in Python is eventually run natively through the Python interpreter. A better description would be that the Python functions in numpy tend to run closer to the native hardware. The implementation could easily change.
@@projectnemesi5950 That is not the same thing _at all_ . Python is a single threaded, garbage collected language. The instructions you write in Python are executed by an interpreter, which itself performs translations (and probably some heuristics, as well as garbage collection, just-in-time compilations, etc.) _at runtime_ , whereas in compiled languages like C or Rust, this heavy lifting is done in advance, at compile time.
So yeah, it does make a tangible difference if a library is merely exposing an API that is making function calls to compiled shared objects in the background, as opposed to _also_ being interpreted at runtime.
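The interpreter overhead described above is visible directly: CPython compiles a function to bytecode, and each instruction is fetched and dispatched at runtime by the interpreter loop. A small sketch using the standard `dis` module (opcode names vary across CPython versions, so this just checks the general shape):

```python
import dis

def add(a, b):
    return a + b

# Each instruction below is dispatched by the interpreter every time
# add() is called; a C compiler would emit a single machine add up front.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)

# The addition is a generic BINARY_* opcode (BINARY_ADD or BINARY_OP,
# depending on version) that must inspect the operand types at runtime.
assert any(op.startswith("BINARY") for op in instructions)
```

This is the "heavy lifting at runtime vs. compile time" distinction in miniature: the type checks and dispatch a C compiler resolves once are re-done on every call here.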
@@projectnemesi5950 That's not even close to accurate.
@@projectnemesi5950 I'm not sure I understand what you're saying. Are you saying that numpy is written in Python? From looking at the source it seems to be mostly c++ or c.
I mean, nobody is using Python or the like for high-performance applications; all of that is offloaded to an actually fast language. For a "script you run a few times a day that completes in 50ms", Python probably uses less electricity over its lifetime than your brain would burn sitting down to write it in C.
Wow you people still just don't want to admit that Python sucks... just learn AWK and be done with it. AWK is fast, easy and efficient, far easier and less complex than Python. And for data science, just learn R.
@@AnnatarTheMaia the rule of thumb is once you need arrays you switch your script from bash to a proper scripting language
@@AnnatarTheMaia Python’s library ecosystem is not even close to R
@@Ganerrr what does that have to do with anything I wrote?!?!?
@@CaptainMarkoRamius that's correct, for R's package ecosystem is phenomenally comprehensive and well written. It's like comparing a children's tricycle to a space ship with a gravity distortion drive.
What about C being a "portable assembly language" don't people understand?
Everything. I used that line and people just try to fight me.
@@dercooney I won't fight, but I would like to know how many C programs you've ported from one system to another. OS portability? Computer architecture portability? Floating point portability?
@@quintrankid8045 that's the point: C is portable assembly
@@quintrankid8045 Not many, but to answer your question: it's portable in the sense that you don't have to rewrite the whole thing for a new computer. Prior to C, you'd do that, but with C and Unix you have source code that works across different revisions of an OS and possibly CPU architectures. But it doesn't offer much abstraction.
@@dercooney Debatable. Two things come to mind: floating point portability (I recall this was a problem for some science app using Python, and Python uses native floating point), and I ran across a guy on the web somewhere who claimed his CHAR_BIT was 48. I guess it's a bad idea to write code with these sorts of dependencies, but then, doesn't asm always deal with dependencies?
Could you possibly add a link to the paper in the video description so that your viewers can go through the paper?
If you do a simple search for
ranking programming languages by energy efficiency
it shows you what you are looking for. As you start typing it in, it even auto fills it.
And writing C is a 100 times more fun!
As in taking 100 times longer due to debugging?
:-)
@@wernerviehhauser94 Never had that issue. It's called learning how to program, I guess ;) When it comes to debugging, it's always business logic that's the challenge for me, never memory management or pointer management.
But that’s probably a generational thing, as we grew up doing assembly and C and as a result learned how to manage memory and own the hardware. If you haven’t grown up doing it, it maybe challenging.
if you're a noob yeah
@@kevinstefanov2841 You mean if you are an expert. Real men program assembly and C after all :D
And takes 10 times more time to write.
Pythons hate polar bears.
Got it.
lol
It is funny because there is a data frame library for Python, written in Rust, which is called Polars.
How dare you!
Reptiles can only regulate their body temperature by moving to where they can warm themselves up or cool themselves down. They are optimistic about finding cold spots.
Speculating a bit: Java had a lot of time invested in its compiler/JVM by highly talented developers starting in the early 1990s, which is probably why it is so high on the list.
I also agree with your point that there is little investment in compiler development now, given how much hardware speed we have compared with the early 1990s.
Which is why the recent languages don't perform as well.
Rust
Thanks for pointing this out. I know a certain switch company that, using Linux and C++ for their operating software, decided to code "non-critical functions" in Python. They were pretty good at it, embedding the Python interpreter in the OS and getting quite good at translating between the languages. After a while, they announced a new policy: going back and rewriting the Python programs in C/C++. This was due to customer complaints about how sluggish the OS was.
The moral of the story was that, Python wasn't wasting time in any critical functions. It was wasting time *everywhere*, and general slowness was the result.
People say Python is fast to develop in, because you can write less boiler plate, but it becomes really slow when you have to start over in a good language.
How is JavaScript just 6x slower but TypeScript 46x? TypeScript is transpiled to JavaScript before execution. Are they including the transpilation time? (That'd be a bit like including compilation time of compiled languages.)
There's no way Java is faster than Go.
@@MrFunny01 Modern JVMs have quite impressive Just In Time compilers....
Yeah, exactly. I don't understand why TS is that much slower. TypeScript is just getting transpiled into JS.
@@chainingsolid JIT makes Java likely to beat Go in a benchmark due to repeatability, but I'd be interested to see how it fares in more varied usage.
I suspect they measured from start to end, which would include the conversion.
Rust never claimed to be faster than C...
It claims to help you write safer software with more creature comforts than C without being noticeably slower.
And honestly, looking at those charts, they've done a good job.
Only 4%? That's really impressive. Also, LuaJIT is missing, which it shouldn't be if you include TypeScript.
On the scale of all computation being done on Earth, 4% is absolutely massive.
Using Rust is genocide. The children, the children.
@@cajonesalt0191 Programmers are inherently lazy. Wasting 4% to make sure they don't screw up because of how lazy they are is an acceptable trade-off. That being said, I'm not certain, but I think we could minimize that 4% further by changing PC architecture.
@@cajonesalt0191 Yeah, but since it replaces the current node/TypeScript garbage it is still a huge improvement.
Now, if all the unnecessary cloud nonsense, including the network traffic, would be gone we could decommission additional power plants.
@@cajonesalt0191 Rust is only slightly slower in synthetic benchmarks in a lab setting. Compare real world C projects to Rust projects, and the difference is often Rust being 100-500% more efficient than C.
So now we know why newer computers run slower than older computers despite being better, stronger, faster. 😊
I did not expect TypeScript to be less efficient than JavaScript...
That is the biggest head scratcher...
Yup. I also don't get that. It's just transpiled to js. Right?
I think it results in some junk JavaScript getting made
They must be transpiling at runtime or something; that gap indicates something has gone horribly wrong.
@@ThomasCpp Including all of the processing time would be correct for scripting languages. JavaScript can be converted into an internal format to gain speed too.
@@kensmith5694 Converting ts->js is a compile time step, not a run time step.
Pascal is waaaaay better than many people give it credit for. After all these decades for it to still outshine most others speaks for itself.
graphics.h is a bit of a problem with modern computers though lol
Lazarus Free Pascal: once mastered, it's hard to beat for GUI applications, and it costs $0.
@@pwalkz I used to code in Borland Pascal (plus TASM) a LOT ))
I hear this a lot. But C won over Pascal for some reason.
Embarcadero RAD Studio is incredible, but everyone agrees to ignore it.
Would have been very nice to have zig in the comparison.
It would be faster and more efficient than C, or at the same level; and if it's behind, it won't be by much.
I think everyone agrees its performance is nearly identical to C. Some cases it may be slower, some it may be faster, but it's at the point where it's hard to compare.
The study is from 2021. It would be nice to have one from 2024 too, some of the slow languages have improved.
I missed zig too, they probably didn't include it because it isn't used much yet. I think that zig is very promising but it needs more libraries and all that.
I can make the comparison especially between C, Zig, Python and maybe Rust depending on the benchmark.
Which of the benchmarks do you think are the most important? Give me the top 5!
I agree that the Free Pascal compiler is a real gem. It also has the advantage of taking in a language that is fairly easy to compile.
I will point out that software development times don't seem to be getting faster with the newer languages, and also that the languages which are supposed to defend you against errors seem to be used to write a lot of buggy code.
That's due to Scrum making every issue take multiples of two weeks, and only hiring the cheapest devs, because "more is better" even though that makes the useless Scrum meetings take even longer.
Pascal can beat C. Nothing beats Pascal-C.
As a user of many langs, I think pascal is a great combination of speed and easy to debug, plus being RAD. For me, easily the most overlooked and underrated.
I don't think we could've done mental ray in Pascal. :)
This was a reply I made but it deserves to be a top-level comment. Ultimately, Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I think that it's overused. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Wholeheartedly agree. The beauty of Python comes from being able to pump out a script to do a simple, seldom-repeated task in an hour or so.
Python is a scripting language and should be treated as such. There is no reason to spend the time creating a website back-end in a Python framework like Django when the benefits of such a "simple syntax" are so heavily outweighed by the drastic loss in performance. You will eventually just have to rewrite the whole thing in a more efficient language or stomach the increased operational costs anyway.
I'm a relatively new programmer, but for things I intend to have running 24/7, I much, much prefer slow development over slow performance. Migrating things I originally wrote in Python when younger to Rust has allowed me to do so much more with my raspberry pi micro-homelab than I originally thought would have been possible with the hardware.
Well, that's the big problem, isn't it? Python was created as a more robust alternative to bash for when bash isn't quite enough for what you're trying to do. JavaScript was meant to be a simple language for simple front-end tasks in a web browser.
Now most of the world is running on languages that were never meant to be doing what they're doing.
Sounds like a lot of excuses for clubbing baby seals.
@@FatherGapon-gw6yo They deserve it for sending ninjas to kill my parents.
@@JodyBruchon Since I know you use Windows, why not just use the Win32 API functions for extracting bits of an INF file.
TS doesn't transpile to JS the way JS is JIT-processed. It adds tons of JS you wouldn't need if it had been written in JS to begin with.
Python is mostly used as a wrapper language; most of the work happens in libraries that are written in C.
When you create a NewLanguage that is somewhere faster or uses less memory than C, it only means that C can also benefit from the same speed or memory optimization, and it will be just as fast and/or memory-efficient once that trick is implemented in the next version of a C compiler, but without the overhead of the NewLanguage's features. So there will probably never be a NewLanguage that is faster and more memory efficient than C; you can only decide how much efficiency you are willing to pay for new features. If you want a more efficient language than C, go to assembly language, but nobody wants to do that now.
10:40 Is it just me, or, since C is all about memory manipulation with everything directly handled by the developer, shouldn't it actually still win the RAM benchmark? Probably programming differences... if the code were exactly the same in both languages, C would probably still win on memory.
C++ does not need to be slower than C.
Makes you wonder what exactly they did; for example, did they write a custom algorithm in C for some task rather than using a generic one from C++? In that case the study is nonsense.
They could have copy pasted the C code to C++, and be done with it.
I have been using Pascal for over 40 years, forever looking for alternatives, and no other language has come close for me; I have tried all the main competitors. I now use Lazarus Free Pascal to write fast GUI applications for Windows 11, macOS and my favourite, Arch Linux. Well done Brian, I am with you on this study and agree that these factors should be taken into account. About time a good modern C was developed, or an improved modern C++ designed around these criteria.
I loved Pascal back in the day. I would love to see what happened to Modula-2 which I also used some at university.
The thing about C is that it's pretty much as close to writing in assembly as it gets. Minus the compiler optimizations, the generated assembly is as "literal" as it gets.
In my opinion there's no need to "reinvent" C.
The price of abstraction is sometimes worth it. I'm a C-zealot myself but I understand the appeal of languages that make life easier. Some people cannot handle the amount of power you're given as a C-programmer, and that's fine. However, I'm still conflicted as I recognize this "slacktitude" is contributing to a slower, more unreliable software ecosystem.
Thing is, back in the day we used Python to prototype things, and then once it was figured out we'd write the real thing in C++. Not happening anymore.
@@Zuranthus In some numerical work it's not even good for prototyping, as it's too slow even for test problems unless you heavily use numba, cython, jax, cupy etc., at which point it won't be much easier to develop in than C++.
@@hawkanonymous2610 And again I'll point out that unless you're a newbie you'll either have written your own libraries or know which ones to use to accomplish tasks equally as fast as programming in Python. It's just a matter of your experience level.
@@anon_y_mousse Gotta love the disingenuity in this response.
@@transcendtient C programmers say C is the best for everything, and if someone criticizes C their only response is "sKiLL IsSuE". I would know; I write C at my job and I know a lot of those types of people.
That's why Pythonists use C libs for any non-trivial compute.
I started programming in about 1983 in BASIC. My first compiled language was FORTRAN. I was always impressed how much faster compiled languages were, but I never thought about power efficiency. I started programming in Python several years ago, and I was amazed that an interpreted language seemed instantaneous, but I never thought about how the indiscernible slower speed would consume so much more power. Interesting.
Every programming languages has to make certain trade offs because we don’t live in a perfect world. The developers of the Python language wanted to prioritise development time and ease of code maintenance over execution time and efficiency. That’s a perfectly reasonable trade off to make in a lot of cases. For cases where that isn’t desirable there’s C or Rust or whatever. What’s the problem?
I look at python kinda like a shell scripting language.
It's there to run other programs and pipe them to each other.
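That "shell-glue" role looks something like this; a minimal sketch using only the standard library, spawning a child process and consuming its output the way a shell pipeline would (the child is another Python interpreter purely so the example stays portable):

```python
import subprocess
import sys

# Run a child process and capture its stdout, like `prog | consumer`
# in a shell; Python just coordinates, the child does the work.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the child process')"],
    capture_output=True,
    text=True,
    check=True,
)

line = result.stdout.strip()
print(line)
```

In this pattern the interpreter's slowness barely matters: almost all the wall-clock time and energy is spent in the programs being coordinated, not in the glue.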
Python was designed for prototyping. It's only when you use it on things that it was never designed for that it will bite you in the ass.
Java is doing surprisingly well there
Java is a good language. A good language is one used by a lot of people to accomplish a lot of tasks; that is all that really matters. Speed is a side effect of language optimization that happens because people actually use it. That is why C, C++, Java, Rust, etc. are all the best performers in the study. And it's the reason everyone is criticizing Python for performing as poorly as it does despite popular adoption. I think most of Python's popularity is due to a few powerful libraries and prototyping, so you see a ton of numpy scripts, or opencv, etc.
This is why the industry should just use Assembly Language
You win the entire discussion with this comment, IMO.
No. ASM is way too slow. We should all learn how to wire TTL gates together to get the answers we need.
Where is pure assembly ?
Compilers do a better job at optimizing machine code than hand optimized assembly, so there really isn't a point in doing that.
Naive assembly will look roughly like what -O0 emits. To improve on that you'd start adding optimizations, but that's exactly what a C compiler does when you use -O1/-O2/-O3, so really C is just a way of automating the process of writing assembly.
right below HolyC
@@josephp.3341 Depends on what you're writing, but some compilers are not as good as they should be at optimizing poorly written code. Even just the difference between calling a function with two arguments versus a struct with two members can cause wildly different results.
@@josephp.3341 I see this repeated a lot, but it's not really true: compilers are written by humans, and the techniques compilers use are the ones humans used to use (excluding a few new peephole optimizations and whatnot). It's all about time: it takes a lot of time for a human to repeatedly try inlining a few function calls, optimize the result, and finally figure out whether that was worth it or whether it's best to undo all of that work. Let alone keeping all of that maintainable.
Edit: Also, -O0 is (for most compilers) much worse than naive assembly because it's too systematic. For example, a human would tend to use more registers at once, use assumptions the compiler isn't allowed to consider, and do simple optimizations like "call somewhere; ret" -> "goto somewhere".
C is 765x slower than assembly.
I'd love to see something like this with different versions of C compilers.
C is the language for efficiency, so a lot of work was put into C compilers to make that even better.
According to one benchmark I've seen, oksh compiled with cproc, a very small compiler of only 17,479 lines of code (including its backend), is only 1.35x slower than if you compiled it with gcc or clang. Oksh is probably not a good benchmark, but I bet with almost any C compiler you'll get higher performance than something like Go or C#.
Most people doing Python are not doing complex math. They're creating scripts to iterate over a CSV file, or an API that grabs some data from a DB and sends it as JSON. And even if they're doing more, chances are Python is calling a C library doing the heavy lifting.
You realize that heaps of the infrastructure at Google and every other big tech firm is written in Python right?
@@JodyBruchon And not to mention academic work! I see Python heavily used by lab scientists at Stanford studying things way beyond my comprehension (like comparisons and data analysis involving zebrafish spines). It works well for them. PyTorch is also another thing academics love.
So, once again: there's pros and cons to everything. It may be slower, but the trade off is convenience and familiarity within their tribe (lab). Just food for thought!
Pretty sure the default json module that comes with Python is written in pure Python for various reasons, portability being the big one. Please check this though, I could be wrong.
But what I know for sure: there are several third-party JSON modules that defer to Ctypes or Cython which perform a lot better, but lack all the bells and whistles the pure Python one has.
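On the "check this though" above: in CPython you can actually inspect whether the stdlib `json` module is using its optional C accelerator (`_json`) or has fallen back to pure Python. The attribute names below are CPython internals rather than documented API, so treat this as a sketch:

```python
import json
import json.encoder
import json.decoder

# CPython's json module imports C speedups from _json when available,
# and binds these names to None when it falls back to pure Python.
c_encoder = json.encoder.c_make_encoder
c_decoder = json.decoder.c_scanstring

print("C encoder active:", c_encoder is not None)
print("C decoder active:", c_decoder is not None)

# Either way, the public API behaves identically.
roundtrip = json.loads(json.dumps({"fast": True, "n": 3}))
```

So the module is portable in exactly the sense described: the pure-Python path always exists, and the C path is used opportunistically when the build provides it.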
@@koitsu2013 Indeed. Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I do think that it's overused, though. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Not true. All the AI scripts and people are using Python... Yeah, I know, right?
Interesting that Pascal is slower than Chapel but uses less electricity, which suggests that whatever Pascal compiler they used isn't driving the processor as intensively, hence the speed difference. So it's the implementation of the language rather than something inherently slow in the language's design.
Pascal and C are basically equivalent beyond syntax. The only difference there is the compiler. I write modern industrial Win32 GPU graphics software in Pascal.
It uses less memory
@@LTPottenger As both C and Pascal put raw data into memory and don't wrap it in some kind of object structure, that also is pure compiler logic. My guess is on not aligning record structures to 4/8 bytes. This makes the access slower with modern CPUs but uses less memory.
No COBOL, huh?
That would require someone to write the COBOL implementations of the benchmark tasks.
COBOL is a carbon sink. 😅
COBOL was classified as a prescription sedative in my country. I think it killed off some of my older colleagues.
And most of the AI libraries and software are written in Python. What's the big idea?
Written in Fortran, for Python, because it's short and Jupyter notebooks are executable papers.
It's because all the postdocs who spent their years researching algorithms and math have an easier time dealing with the abstraction layers that Python affords. It's easier to spend time thinking about the problem domain when you aren't simultaneously obliged to think about type safety and footguns.
So according to this study, if I take a C program and compile it with a C++ compiler (with no, or very minor, alterations), the program becomes 1.5 times slower? I hope I don't have to explain why this claim is silly and why it casts a shadow over the whole study.
I wondered the same. For a fair comparison, you need an expert in each language to write the code.
As far as I can tell, the applications tested are meant to make heavy use of the cpu. I am wondering about the actual impact when using a normal application which is mostly waiting for io. Eg when using application written in X vs V, how much delta in energy cost would be accumulated over a year.
So, it makes you think.
Our Linux systems often use something like 0.5 GB of RAM, and have a lot of Python apps on them.
So what is Windows running, given that it takes 5 GB of RAM and is filled with C/C++?
> puts on a tinfoil hat
Not a result of the language . Windows is filled with bloat and other design nonsense
It's mining BitCoin 24/7
@@42Cosmic42 Exactly my point: even though they use C/C++ for that bloatware, it still uses so many more resources :D
Windows has very little C. It's almost entirely C++ and MS in-house languages like C#
UWP, .NET and spyware are what slows down Windows.
Tinfoil hat is venerating the attributes of Jupiter.
Even worse, when you consider the applications running on these languages. OpenStack (written in Python) is running in containers on top of OpenShift (written in Python). So interpreted code on top of interpreted code on top of interpreted code.
Rust being faster and less power-hungry than C++ was mildly surprising to me. I have a slight negative Rust bias, but that seems like an extremely small cost for more automated memory safety.
It's no better than C++ for memory safety, it just has a more annoying compiler.
@@anon_y_mousse It is better than C++ for memory safety - the compiler catches bugs that most people never do.
It has a smarter compiler so you can use your cognition within the problem domain, instead of wasting brain cells figuring out what the compiler already knows better than you.
C++ people are upset they're not the best in town anymore.
Rust is exactly just as fast as C according to the most up-to-date results:
energy-efficiency-languages/updated-functional-results-2020
Everything the guy in the video said about Rust is outdated by the same study.
@@jboss1073 Where?
Many of Rust's guarantees are build-time rather than runtime, but I am also suspicious of that. In fact, having C++ be slower than C seems like a bug either in the implementation or in the compiler. Templates are resolved at build time, and member functions are basically just normal C functions that take a this pointer as the first argument. Unless they are using dynamic dispatch or something, I can't think of any reason the C++ version should be slower.
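The point about member functions being ordinary functions with an implicit this pointer can be illustrated directly in Python, where the equivalence is visible at the language level (a toy `Counter` class; names hypothetical):

```python
class Counter:
    def __init__(self):
        self.n = 0

    def bump(self, by):
        # 'self' plays the role of C++'s implicit 'this' pointer
        self.n += by
        return self.n

c = Counter()
assert c.bump(2) == 2            # bound method call
assert Counter.bump(c, 3) == 5   # same function, instance passed explicitly
assert Counter.bump is Counter.__dict__["bump"]  # it's a plain function
```

C++ compilers do the same lowering at compile time, which is why non-virtual method calls cost the same as free-function calls.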
Skill issue kills the planet
Mojo looks like an amazing alternative, but it is still in the early stages
Also Bend, which forks loops onto CUDA cores.
Also, Mojo is not open source, and that's repugnant.
@@nandoflorestan Because it's not released yet? It will be open sourced once it's released
We already have Cython, which does something similar. While widespread, it is not that impressive as a bare speedup over pure Python because of Python's syntax, and achieving really big speedups requires massive rewriting with obscure annotations, which demands considerable knowledge of how memory works, similar to C (sometimes it's just easier to rewrite the function in C).
I do not see how Mojo will benefit the general Pythonista.
@@lorenzomonacelli Cython works with standard Python type annotations, and Mojo if not typed probably infers types before compilation.
Software is going backwards in performance because it started out at the bare metal, where you HAD to understand the architecture. All this abstraction to hide the underlying hardware is where all this inefficiency comes from.
Optimizing compilers are grossly overrated, and I have been saying this since the first version of C++.
Most libraries are junk, and again, all comes back to dumb lazy programmers who don't want to spend the time understanding the underlying hardware.
Compilers can't get better because they would need to anticipate every single possible line of code, or blocks of code, or every algo that could possibly be written, which is impossible.
We need to get back to KNOWING THE HARDWARE, and KNOWING WHAT WE ARE REALLY DOING.
There is no other replacement.
Convenience ALWAYS comes at a price.
It's a mutual relationship. The software must know the hardware, and the hardware must know the software. Processors today are optimized for C programs. Everything is human-made. The entire point of a programming language is to make things work, and to do it in a way that everyone can understand. Just like a spoken language, a "good" spoken language is one that is heavily used.
In defense of Lua, the 100% interpreted version is very slow. But LuaJIT is more on par with C# and C++ in some little bench testing that I was running a while back.
Also, Zig is newer and is supposed to be faster and more efficient
LuaJIT is compiled Lua, i.e. the bytecode, right?
Wait, what is Go's excuse?!
I think this whole article is BS. There's no way Java, which runs on a VM with a GC, can be faster than Go, which runs natively with a native GC.
java is jitted, so it should perform similarly to go
@@FinaISpartan Similar, but not that big of a difference. And Go certainly shouldn't be that much slower than Java.
@@MrFunny01 Java has 20+ years of hacky optimisations, so it does make sense.
What version of Go was used?
One thing I do object to in their test is the use of trees. The implementations can vary to the point of degrading performance a noticeable amount. I didn't see whether they used hash tables too, but those can be excessively different in implementation as well. I primarily use C myself, not for environmentalism reasons but because I want to save my users time when they use my software. I'm not surprised about Python, because I use it fairly often too, but I am surprised about JavaScript. Yeah, I use that on occasion and it's slower than molasses in a freezer, but I would've expected that all the work that has gone into optimizing the various interpreters would have meant it performed better than it did. At least now there's some degree of quantification of how much the other languages suck.
A few observations:
1) Ada is still largely ignored even though it's actually a joy to write in.
2) They did not include Forth, which is telling. I have little to no doubt that Forth would either tie, or beat C in at least one of these metrics. The catch is, it sucks trying to read the code several months later - but that's not what this study is testing for. 😉
3) If Pascal received the amount of mindshare that C has received, it would probably beat C in all metrics. Ada is heavily influenced by Pascal, so you can get an idea of what Pascal would be able to do (it's just that Ada is to Pascal what C++ is to C, in terms of size and complexity).
I do find it interesting that Zig wasn't on there. I've never tried it, but knowing it can use C headers without issue, I might consider it.
Because it's not release-ready yet
Not surprising, seeing how, despite computers becoming more powerful, nothing feels all that different in performance than it did 15-20 years ago because of all the bloat and inefficient coding. Also, I am not sure how many of the background processes are that useful. It's really annoying that something like a YouTube page needs I don't know how many gigs of RAM just for a normal HD video... this tab right now uses 1.1 GB... that is insane.
I was expecting Fortran to be highly competitive with C, but it is clearly not.
Me too
There is not a lot of incentive to improve Fortran these days. Almost all the legacy source code for it got ported to C or C++ long ago.
Older versions of Fortran would be faster than newer Fortrans.
@@metaforest However, in the TIOBE index, Fortran is among the top 10 languages, and the Intel Fortran compiler is very active!
Me too, because my friend at SMHI (a meteorological institute) says that their old FORTRAN programs run circles around their new Java reimplementations. Problem is that they can't find FORTRAN programmers anymore.
I did not read the fine article referenced, but are these the programming languages used to run most of the CPU time of the world? Zig and Odin are not really consuming many cycles globally. Web browsing and video rendering are the most used applications for end users, I'd guess. Libraries are doing the CPU-cycle work, and they are written in C/C++/C#, mostly? And the GPU helps, so there is more research to do.
5:57 I’ve actually done a ton of embedded language testing. Compare LuaJIT vs the others and you’ll be SHOCKED by how fast it is.
The “regular” lua engine (not LuaJIT) is only optimized for portability. The performance is as weak as you imply.
Well, Python would've looked a whole lot faster too if they'd used PyPy, which is about as competent as LuaJIT. I'm only suspicious about Lua having been slower than Python in their tests. Lua can be as slow as Python in some things, but in my experience it generally isn't.
"Rust, GO, Swift, Dart, Ruby. All slower and use more electricity, than plain old C"
You forgot to add Javascript
I will say, Java and C# have actually been putting in investment towards speed and efficiency. They just started with the handicap of being managed languages, which sometimes is a useful price to pay.
Rust is even similar to that in a way. Every runtime check you add for safety and consistency at the language level is going to slow things down. C doesn't care and trusts the dev did that checking at programming time so there's nothing holding it back (from exploding at spectacular speed, sometimes)
Inferior error checking and many other types of issues can conspire to cause more development cycles, potentially far exceeding whatever time and energy savings are achieved by the final working application.
Some thoughts...
* Compilers and runtimes are written by programmers, so it's not surprising that they'd optimize for developer convenience
* Companies want to optimize for feature output -- a proxy for developer time, which (currently!) is the long pole for costs
* As a user of a rented Azure Kubernetes cluster, I'm *painfully* aware of C# and its insane desire to hold all memory at all costs
End of the day, it all makes sense!
I'm very surprised that Java outperforms Go in energy and time. How is that even possible? I understand there's a GC in Go, but Java has a WHOLE VIRTUAL MACHINE, which is a lot of C code by itself, then the standard library, then the application, while Go compiles to native code with an embedded GC. And Java still has A HEAVIER GC, heavier class runtime structures, a heavier runtime? Why are the first entries all rounded to 1? 1 megabyte, 1 joule, 1 ms. Also, Lua is way simpler than Python. How on earth does Go use less memory than C when it has a GC? This article is honestly BS, just for C fanboys. No, I'm not pooing on C, it's an awesome language and Python is indeed slow, but this article got something seriously wrong.
I would like to use C or C++ for more, but for rapid development, for systems that change just as rapidly, JITted languages are a healthy alternative (C#, Java, etc). As a C# dev, this chart can be misleading. Run a service for days and you'll notice initial memory usage seems high but compared to other languages and frameworks it does as good or better than the majority. Ruby, Lua, Perl, Python are fine for scripts, but honestly they suck for real production perf. As a language, I can't stand Ruby. It's awful. RoR is a bag of hot doodoo.
The Pascal bit makes sense, since Pascal was genuinely superior to C (I love C).
*IS superior, or at least on par. Although I wish we had the C preprocessor.
Both were and are still great. I wonder where Modula-2 would stand? It was the better Pascal, but it never took off, it seems.
Last year, I developed a simple genetic algorithm library in pure Python called bluegenes largely as an exercise. I then spent a couple of days translating it to Go and got a 100x increase in performance. I then did some Go-specific memory optimization and got a further 100x increase in performance. It blew my mind.
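A loose Python analogue of that kind of memory tuning (the original was Go-specific): preallocating a buffer once and filling it in place avoids repeated growth and copying. Function names here are hypothetical:

```python
N = 100_000

def grow():
    out = []
    for i in range(N):
        out.append(i * i)   # the list grows and reallocates as it goes
    return out

def prealloc():
    out = [0] * N           # single allocation up front
    for i in range(N):
        out[i] = i * i      # fill in place, no growth
    return out

assert grow() == prealloc()
# Time both with timeit to see the gap; the preallocated version is
# usually measurably faster on CPython, and the effect is far larger
# in languages like Go where allocation pressure also feeds the GC.
```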
inb4 someone tries to incorporate transportation, man hours, and calorie consumption relative to error handling and memory safety, into some metric. Ackshtually Rust is the best if you look at it like [blah....]
Rust avoids many costly bugs that can occur in less safe languages. It might even have a formal prover like Spark Ada, which was designed for mission-critical applications like weapons and satellites.
@@CTimmerman Some of that is legitimate, some of it is just to avoid errors made by people who don't belong doing what they're doing. And should never have been incentivized, coerced, and drawn into those fields. I know how that sounds but if I went into the story of how I learned to program and the frame of life I was in, which basically amounts to constant whole body pain, fasting for days at a time, inability to sleep, and functional brain damage, the fact that I'm not only self taught but came out of it having sought out and knowing how the machine works down to the RAS and CAS signals sent to the memory controller, I can bluntly say most people don't belong anywhere near software. They just don't. They can't do recursion, they never bothered with (inline) assembly, the idea of cache misses and data locality is like a whoooaaaaa mind blown moment several years into their career, they just learned java and typescript or whatever and never bothered to learn data structures and how the machine actually works.
I mean look, you can argue that it's a manpower and volume issue, so you'll architect tools and frameworks to keep all the normies on the rails, so to speak. And at those organization and society scales you can make a case for that. But the fact remains, everything I stated is true. These people are not programmers, and they don't belong doing it either. Rather, writing software is a surrogate: it gives them identity and money, and so they do it in order to get the money. This churns out mediocre "bare minimum" self-serving types and then equally incompetent layers of management that have to try to channel that writhing malformed mass, that self-serving blind beast, into doing something other than devouring itself. The whole mindset is wrong. This fixation on memory safety is partly quality-of-life improvements and partly damage control. Mostly damage control. For people who barely care what they're doing, because they're a product of the Prussian educational model, and due to the media exposure in early childhood [...].
I can say it, I lived it. That was my late teens and twenties, wasted being tortured. And even there what I created was the best. The best. No corners cut, no sloppy hypersocialized water-cooler mentality. No, the best. I take what is, and I make what ought. Those are the types of people you should either seek out, or rework aspect of society and human development in order to manufacture. Those others who can't or won't, don't belong, and shouldn't be pandered to. They need the boot. Get the hell out of here buddy.
I'm deeply skeptical. These large scale comparisons are extremely difficult, as the quality of the implementation is huge factor.
Naively implementing a particular algorithm in each language, when no "skilled user" of that language would do it that way, is a sure way to create very misleading results.
C and C++ having significant variation is a clear indication of this: written properly, C++ is virtually indistinguishable from C in performance.
I think the paper is asking the right questions, but I'm not gonna be waving it around saying "we must switch to C" (as much as I would be okay with that, personally...)
Written properly for C++ (with regard to performance) just means never using templates, never using virtual methods, and basically never including any header from the standard library. So basically, just write C, except that where you'd write functions whose first argument is a pointer to a struct, that struct can be a class with the function as a method, and that's it. Oh, and maybe you can use range-based for loops and std::vector.
Looks like they used benchmarks game. So it's optimized but unidiomatic code ...
By the way, there was a student engineering challenge (Canadian, I guess) to build satellites and program them. Some company launched them on a rocket to reach orbit. Ada might not be the fastest or the most energy-efficient, but it is very close. The satellite written in Ada is the only one remaining in working order in orbit. Now think about that. C is nice, but ...
It's a pity they did not test LuaJIT, which can be as fast as C, as opposed to the interpreted Lua.
Most people use interpreted Lua.
Once they're running yeah. Java scored well too. They often are slow to start though.
LuaJIT tends to be about 100% slower (i.e., taking 2x the time) compared to optimized programs in compiled languages. It even seems to be somewhat behind the Java VM, though it does appear to hold pace with JavaScript's V8 engine (source: Mike Pall's scimark code vs. equivalents created by others in Rust, C++, etc.).
IME, that's about the difference between competently-but-lazily-written C++ and optimized-by-experts C++, so not a bad result overall. The best results are probably achieved by combining the FFI with optimized native libraries, while minimizing context switches to get the most out of both the JIT compiler and the heuristics used by LLVM/GCC. LuaJIT without native libraries doesn't make sense, so it's not useful to benchmark interpreted code that should have been put in a native library to begin with. And the default interpreter is so slow that talking about its performance is effectively a complete waste of time, especially since it doesn't have a FFI and its C-interoperability layer incurs very high overhead costs.
Still better than CPython of course, but the value of Python lies in its vast ecosystem. Lua doesn't even try to compete with its minimalist Do-It-Yourself philosophy.
This is a myth. Luajit is very competent, but it can never be as fast as C. Luajit doesn't statically compile Lua. It compiles it where possible, and still does plain interpretation where not. Code with string concatenations, for example, will run at normal Lua speed (slow).
However, in this regard, pypy is very much comparable to Luajit. If they included Luajit, they would've also included pypy.
I don't buy the part where they made lua look slower than python, though. That's not been my practical experience with the two. Not at all. There are things where lua can be as slow as python, but for the most part, it's not.
If you want a language that makes you care about memory, efficiency and speed, Zig is a good option to check out. More ergonomic and readable than C anyway!
The fact that Pascal is not tied with C at 1, or indistinguishably close, only illuminates the imperfection of their method. But yes, compared to the BS modern languages, this video is great. Long live Pascal and C; death to the modern bloated slow BS.
I would be interested to see how Zig ranks in a test like this. I see Java doing its thing taking quadruple the RAM it should but being blazingly fast for a garbage collected language. What stands out most to me as someone just getting into Node is typescript being _substantially_ heavier than vanilla JS.
A 75x slowdown is wildly optimistic. Compared to finetuned, optimized C or assembly, the slowdown factor is easily in the thousands. But it doesn't matter, because Python is mostly used for business logic or as a glue script, with the bulk of the task done in well optimized libraries written in C or other languages. Most software written in Python would not actually benefit from a big speedup if it were rewritten in C (some would, because many programmers are unfortunately not aware of the performance implications of what they do and would happily write in Python things that really should be part of a C library).
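That split between glue code and optimized inner loops can be seen without any third-party library: CPython's builtin `sum` runs its loop in C, while an equivalent hand-written loop executes bytecode per iteration. A rough sketch (timing left as a comment, since results vary by machine):

```python
data = list(range(100_000))

def py_loop(xs):
    total = 0
    for x in xs:        # every iteration is interpreted bytecode
        total += x
    return total

def c_loop(xs):
    return sum(xs)      # the loop runs inside CPython's C implementation

assert py_loop(data) == c_loop(data)
# Time both with timeit: the C-backed sum is typically several times
# faster, and dedicated libraries widen the gap much further.
```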
When you realize that every function call and every operator requires a hash-table lookup in a scripting language (where the code model can change between two calls and nothing can be optimized), you will be surprised. Writing a compression algorithm shows you it's more like 300x slower (I did this in 2005). By the way, 300x is just what you get when an instruction has an L2 cache miss, and scripting languages have terrible memory locality.
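The per-call lookup is visible with CPython's `dis` module: even a call to a builtin compiles to a dynamic name lookup, since the name could be rebound between calls (recent CPython versions add inline caches, but the semantics stay dynamic). A small sketch:

```python
import dis
import io

def f():
    return len("abc")   # 'len' must be looked up when the call executes

buf = io.StringIO()
dis.dis(f, file=buf)
listing = buf.getvalue()
# The bytecode contains a dynamic name lookup: nothing binds 'len'
# statically, because it could be rebound between two calls.
assert "LOAD_GLOBAL" in listing
```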
All computational algorithms in Python are implemented in C or Fortran anyway. If you use JIT extensions such as Taichi, you get the same performance.
Does the C memory score take into account the inevitable memory leak?
OUT_OF_MEMORY
Memory leaks are *NOT* inevitable. Stop blaming your tools for your mistakes
L + meme + skill issue
'Inevitable memory leak' is a skill issue.
@@metaforest I'd suggest it almost isn't even a skill issue. There are some quite simple rules you can follow, even without a lot of skill, that let you avoid memory leaks.
Unless you are talking about very limited resources, the difference between Rust and C in memory usage is insignificant. It's all just the right tool for the job.
Lua isn't that slow. I don't buy it for a second. In my experience, pure Lua is quite a bit faster than pure Python, and internally it's a lot less bloated than Python, with a whole lot less sanity-checking overhead, etc.
I'm finding that study suspicious.
Experts agree that Rust is both safe and effective.
I had a similar idea a while ago while debating whether or not performance mattered. It might not matter for your application itself, but making something 5x faster means you can get by with about 1/5th of the computing power you'd otherwise need. That means either less powerful hardware or fewer replicas, meaning lower cost. If that can be achieved simply by moving from Node.js to Java, Go, or Zig, it's a good idea (especially because no one deserves the pain of doing JS on the backend, so everybody wins).
Why is typescript even in this list????
Well, the JavaScript that's compiled from TypeScript looks nothing like hand-written JavaScript. I guess the huge performance losses are mostly due to automatic polyfills of even slightly modern JavaScript features.
I don't understand why TS is so much slower than JS, when TS is transpiled into JS. Right??
@@MelroyvandenBerg It may produce messy JavaScript, and the conversion time was likely included.
@@perguto That also means that when you write software in clean TypeScript, it gets performance improvements for free when browsers/Node.js gain better capabilities. Just upgrade Node.js/the browser, change the JS compile target, build again, and boom, you get more performance.
If you use modern TS features, but target a really low ES version, the output is horrendous. If you target modern engines with a recent ES version, the JS output looks almost exactly the same - just the types are stripped out.
A study from Portugal. Nice! I'm surprised with how Rust compares with C, but that was years ago. I think, since the Rust specification evolved, a new comparison should be done.
WTF, why is C++ 50% slower? How? I can compile C code as C++, and in some rare cases the code will be FASTER (one YouTuber compiled the original DOOM as C++ and got a 1% speedup).
Same with TypeScript: how is it worse than JS???
I would not trust this research much.
There can be good explanations for TypeScript. The conversion time would be included, and the generated JS code may not be quite as good.
OO is glacial, but I agree that well-written generic C++ is as fast as or faster than C.
Because modern C++ is not "C with classes"
@@Sneg00vik Yes, we have the safer `std::array` and `std::span` instead of `int[]`, and `std::unique_ptr` instead of `T*`, so where is this 50% overhead? We have RTTI and exceptions, which are heavy, but if you're not using them directly and have a serious workload (i.e. start-up is only a fraction of the time), you should not be able to notice them.
Of course we could create bloated code that will be slow, but then are we testing bad code, or C++? We could probably create equally bad code in C.
The only place where C++ is always worse is compile time, where all the overloads and templates can make the compiler glow red. But then, if we include that in the cost, how on earth does Python come out only 70x worse? On a very big program, Python could finish the workload before C or C++ finishes compiling, not to mention Rust, which has compile times about as bad as C++'s.
Yup. C++ uses the exact same compiler. Maybe they used specific C++ features, like vtables and RAII? I know the Common Lisp code isn't the usual code; it uses typed Lisp, which is basically the C/C++ code but with a worse compiler.
It would be interesting to see the Lua interpreted run time vs Lua's cached byte code.
Memory usage is kind of misleading. The energy graphs show one reason why; the other is that if you have a 100 MB executable in C, then it's a 100.5 MB executable in Rust. There's a static overhead for many languages but no overhead in the actual code.
There's also compiler optimization you are not taking into account. Rust and C do not generate identical assembly code, so the memory usage won't be the same.
@@asandax6 And Rust apps are almost always blobs with all dependencies compiled in. That provides more flexibility, since there is no pressure to have a stable ABI limiting progress (also a lot of code is generated via macros), but at the expense of no reuse of loaded modules across many instances of an app and increased compilation times.
@@JanuszKrysztofiak That's what I hate about rust. It compiles with bloat. Stable ABI is important and saves memory space (which every first world developer says it's cheap and don't realize the huge number of people that can't afford phones with more than 64GB and laptops with 128GB SSD.
@@asandax6 yeah, ram is no problem when you're talking megabytes
No reason not to pre-allocate a couple of megabytes these days; RAM is abundant on computers now.
I pre-allocate tons of memory on the heap for easier memory design (allowing for growth) all the time, because everyone has it anyway.
Anyone who can run a web browser, with all the bloat in those today, is still EASILY within range of my program; I'd have to allocate hundreds of megabytes to even come close.
These studies need to take account of the brain's food consumption during the days spent finding which libraries, of which versions, are in which repositories of Ubuntu.
That is why I think it is funny when people program AI infrastructure in Python, and then claim it is running too slow. 😂
This is pretty cool. I know the authors did some splitting of the results by compiled / virtualized / interpreted, but I think what would be even more insightful would be to break down the languages by feature (e.g., with or without run-time reflection, with or without dynamic types, with or without garbage collection, with or without strict-aliasing rules, etc.). Basically, beyond the d-measuring contest between languages, it would be nice to measure the cost that each feature adds to begin (finally!) to do some sober cost-benefit analysis. For example, I suspect Python pays a huge cost for having to translate most attribute or method call into a run-time reflection lookup, for the marginal convenience of occasionally being able to write some pretty nifty code.
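One concrete instance of that reflection cost is Python's per-instance attribute dictionary; `__slots__` trades the dynamic lookup machinery away for fixed storage. A small sketch (class names hypothetical):

```python
class Dynamic:
    def __init__(self):
        self.x = 1

class Slotted:
    __slots__ = ("x",)

    def __init__(self):
        self.x = 1

d, s = Dynamic(), Slotted()
assert d.__dict__ == {"x": 1}       # per-instance hash table, consulted at run time
assert not hasattr(s, "__dict__")   # fixed storage, no dict machinery
d.y = 2                             # dynamic classes accept new attributes...
try:
    s.y = 2                         # ...slotted ones refuse them
except AttributeError:
    pass
```

That's exactly the feature-by-feature cost-benefit trade the comment is asking the study to measure.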
There are two definitions of speed: how fast the program runs, and how quickly you can get the job done. Often, 71x slower means it runs for a minute while C runs for a second or so. Cool. However, if you need half an hour more to write the thing in C, and the program basically does some number crunching and writes it into a database or a file, you've basically wasted 20 minutes to save one. Sure, if it has to run frequently, and especially if a user has to wait for the result, by all means rewrite it in C.
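That trade-off reduces to simple break-even arithmetic. Using the comment's own numbers (30 extra minutes of C development, 60 s per Python run, 1 s per C run), a sketch:

```python
def break_even_runs(extra_dev_seconds, slow_run_seconds, fast_run_seconds):
    """Runs needed before the faster implementation repays its extra dev time."""
    saved_per_run = slow_run_seconds - fast_run_seconds
    return extra_dev_seconds / saved_per_run

# 30 extra minutes of C development, 60 s per Python run, 1 s per C run:
runs = break_even_runs(30 * 60, 60, 1)
print(round(runs, 1))  # about 30.5 runs before the rewrite pays off
```

Below that run count the Python version wins on total time; above it, the C version does, which is the comment's point exactly.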
And yet again I find myself pointing out that newbies will not be competent in using C and will have an increased development time. This is who Python is aimed at. With more experience comes using either your own libraries or those that you have learned to use and development time is reduced. Go ahead and ask your mother to write something in C and offer no help.
Python dev cope
@@JiggyJones0 Python devs don't need cope, they get paid. "But it's not optimal" is cope from unemployed c programmers.
In re Python... yeah. I feel a little guilty whenever I code in Python, but for a one-time script, or some infrequent audit script, it's so quick and easy to get something spun up.
You are not thinking about the economics behind Python right. The energy (money equals energy consumed by that money) you've saved in engineers' time by using Python is WAY higher than what you'd spend developing everything in C.
C++ is still quite good.
Why hire competent developers to do the job properly when you can cause more environmental damage instead?
That equation breaks down pretty quickly once you're running the python code at scale
Say you can learn Python more than 70-80 times as fast as C. Which seems like an overestimation, imho. That still does not mean people who know both Python and C well, can write the same script 70-80 times faster in Python than in C.
For competent developers, development time isn't nearly so big a factor as people think. Generally, if you've been in the business for any length of time, then you've either built up your own libraries for doing things or you use specific libraries that someone else wrote. It's only for the newbies that the language's standard library really matters in most instances.
Python is 1,000x faster than using a bunch of Excel sheets and 100,000x cheaper than hiring a bunch of extra accountants and business analysts to manually re-compile a "database" from unstructured data. And the use case for basic business applications still heavily favors high-level languages like Python for most straightforward use cases (and if you need to do stochastic simulations then yeah the libraries are written in C).
These tests show nothing as we know nothing about the actual code and compilation. Maybe it's just shitty code with disabled optimizations.
In some sense that would make it more realistic :D
I went ahead and took a look at the code that they have published, and I am not impressed, and you're right that we don't know how they've compiled it.
As a C# developer I know that C# can compile to native code, if you instruct the compiler to compile for a certain target platform, whereas Java cannot do so without going through its VM first.
@@chralexNET Java even has a couple of different VMs.
@@chralexNET You say that as if `gcj` and other native compilers don't exist. You're just spreading FUD like every other MS fangirl.
@@anon_y_mousse Funny that you say I am spreading whatever "FUD" is, when the first thing you bring up is a compiler that has been discontinued, you absolute tool.
@@chralexNET And yet, you dishpit, it still exists, still works and isn't the only example of a native compiler. Since you claim to not know what FUD means, you're either a liar and I can expect more BS arguments from you, or you were born this decade and don't know squat. If you want to make the absolute zygote argument that it won't handle the latest version of Java, I'll inform you that it's an open source project and could be picked up by anyone at any time but that yet again, it's not the only one and others are still being worked on.
What I think is that the benchmark would be a very strong indicator of whether your language fits how you solve the problem. So either way, the paper would be very interesting for instrumenting your solutions.