Jonathan Blow on Apple's M1 Processor (Faster CPUs don't matter)
- Published: Oct 2, 2024
- Jonathan Blow tells why he thinks Apple's M1 processor won't be impactful.
Clipped from a stream recorded on November 30, 2020.
Tip me: ko-fi.com/blowfan
Jon's Twitch: / j_blow
Looking forward to seeing how cranky Jon Blow is about bad programmers when he's 80. It's gonna be awesome!
Can't wait for Jon Blow's version of Joe Armstrong's "The mess we're in" talk.
Renewable energy.
@@SaHaRaSquad He already gave that talk. It is titled "Preventing the Collapse of Civilization" and it is phenomenal.
But totally justified
Maybe we can even get his next game by then
Lol, it sounds like a crazy scammy coaching session about unlocking one's potential (you could be 100x faster starting right now!), except it's about CPUs and it's completely legit.
Yep.
It would take good programmers who understand the hardware to do that. And faster is relative, premature optimization is bad. But there are good decisions that will make optimizations easy and those are things that programmers can't learn soon enough. I think Casey Muratori makes good points on this, he's also worked with Jon in the past I believe.
@@xhivo97 I think Muratori's point about performance-aware programming can and should be the norm. It's something that should've been taught when I took CS, yet I was taught useless Clean Code ™ and "design patterns" instead. What I've learned about performance is that being 20x off the theoretical maximum is fairly easy and can be taught to an average programmer in a few weeks. But most software is at least 1000x off. So that's a lot of gains for a small change. On top, simple profiling tools should be commonplace and would make it so much easier for the industry to not be so bad.
Faster cpu for React and JS applications. Hell yeah!
That's one of the reasons computers are slow, just remove React and JS from it, never install anything that uses Node or Chrome. Instant 100x more performance !
Just let the web die, its a terrible environment, it should just be rewritten from scratch !
@@monad_tcp Could not agree more as a web dev. *cries in js*
Good luck finding anything that doesn't use Chrome
@@squishrabbit I found wickr as replacement for slack/telegram. They even let you use their server software if you sign a service contract with them. All C++ with QT and no JS. I literally pay for it.
@@monad_tcp I understand why you want to replace Slack. But why Telegram?
I care because these programmers will switch to these new faster computers and write even slower code which will hardly work on the rest of the computers
I think it was probably silly of him to have such a kneejerk take like that. The big thing about Apple Silicon is its staggering power efficiency, not its speed.
"YOUR SOFTWARE CAN BE 100X FASTER STARTING TODAY (you just have to write it first)", legendary
joke's on you, I use OCaml to write software and use the MirageOS unikernel to run it directly on the hardware, it is 100x faster. Think of link-time optimization, now think if it was possible to do it to all the code that runs, to the entire runtime, everything, no dynamic linking, no code at runtime to escape from it. That's possible if you don't use C. It's possible with Objective Caml, aka OCaml.
@@monad_tcp "That's possible if you don't use C". Ocaml benchmarks are worse than C++, so not even close to C. Then why you said something like that?
benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/ocaml-gpp.html
@@NeZversSounds yeah idk what that guy is on. But a language like ocaml does have a lot of advantages over c, in areas like productivity and correctness. But it’s not a systems Lang lol
@@32gigs96 but which of you writes systems ?
@@NeZversSounds C++ is not C . I said it is possible to optimize, not that OCaml is faster than C++.
I don't like micro-benchmarks; there are lots of things that affect the performance: the way your algorithm is written, whether you properly extracted recursion into a tail call, those things.
That benchmark is a game, and it's very naive; a straight port from one language to another will never work.
I need a faster CPU so I can port all my applications to JavaScript and run them in Chrome. REEEEEEEE
RAM: _No! Please! Have mercy!_
Cool
LMAO
If something can be done it will be done, Mr. Blow: either making CPUs 100x faster or making software libraries 100x more bloated! Alright?? So let me emulate my Intel instruction set through my ARM instruction set, which runs a Java machine, which runs JavaScript in my 3D browser. Thank you! :)
But the JVM implementations of JavaScript aren't that slow.
@@HrHaakon Yeah, we ought to be able to run it through at least 10 of them with no visible slowdown at all! Only 100 ms each!
@@Drillgon Why stop at 10? We're programmers, we know three numbers: 0, 1 and n, where n is a sufficiently large integer.
When I first played MW2: Mercenaries on a Pentium-1 120 MHz, it ran like shit. Absolute shit. Some months later a patch was released and it ran flawlessly on the same hardware. I've also experienced this in my web-development career. When I started writing less Ruby/PHP and more SQL, everything got faster.
lol "blowfan" is such a great name. that shadow pic of him is also so lol. hahaha
Utterly Based
extremely baste
@championchap thats based too
I'm on Windows, I've got about 50 useless processes running right now, wouldn't it be great if they could be useless 100 times faster ?
It would be even better to not run useless processes - so you don’t need better pc - don’t have to spend money
Then there would be 500 useless processes.
I conclude that coding on a slow machine will give the motivation to code faster software. So the industry should strive less for speed and more for efficiency, which I think is what the M1 did..
A similar analogy with mobile phones. Someone invents a better battery. What do phone manufacturers do? They make thinner phones. Someone invents a faster CPU, we make the software slower.
They are conspiring together to stop us overthrowing the globalists who influence Big Tech and Big Pharma and most world governments except some stuff Trump got past RINOs.
@@____uncompetative Bro, chill.
@@OverG88 I wasn't being serious. LOL.
Funny how half the comments here either don't understand the point he's trying to make or try to discredit him. Sure, convince yourself that he's wrong, while you load the next website with 5MB.
"While you load the next website with 5MB"? What? Is this English? What the hell are you talking about?
his logic is kinda dumb. why not have both? faster CPUs do matter, and it's part of his point in the end about how insanely fast modern computers are
Because it misses the point. Better hardware is great, but in my experience general software tends to get worse when the hardware gets better. So you eventually gain nothing from better hardware, unless you address the root problem.
I write efficient parallel algorithms in C to do my arithmetic computations (cryptographic stuff mostly). So aside from the OS overhead, I am making use of the CPU capacity. However they are usually command line programs that produce a number or transform data, rather than graphicy things that go through layers of libraries. The speeding up of CPUs has enabled me to get more computation done. For example, a program that takes 2 days to run on my work laptop runs in an hour on my new 18 core, hyperthreaded linux box.
@Cities of the Plain Doubtful. Apple is really good at selling hype though. Same morons that before the PS5 & Xbox launch talked endlessly and painfully about how "PS5 won't have feature X, Y, Z" only to now, kind of be hiding in shame, in the dark corners of internet, due to *once* again falling for feature hype, not understanding architecture as a whole (meaning, there were hundreds of thousands online trying to paint the picture MS did, that the Xbox would be vastly superior when it all turned out to not be the case, the difference was/is negligible). The comment you responded to, was not about that feature hype, he merely made an observation about his computational output, which much more has to do with concurrency and parallelism - something which Apple hardware is *not* that good at.
"my new 18 core, hyperthreaded linux box" implies that your laptop has no hyperthreading. If that were the case, it would indicate that your laptop is quite old and your "new 18 core, hyperthreaded linux box" isn't that impressive.
@@cranknlesdesires Nope. My laptop has 4 cores and is hyperthreaded but is slower and power constrained due to the thermal and power supply constraints that are the norm for laptops. The new linux box uses server silicon with lots of cooling and a silicon process optimized for speed over power efficiency.
@@davidjohnston4240 You misunderstand my point. I was saying that your use of language undermined your point and introduced ambiguity; I was not challenging the speed of your box.
Also, it quickly becomes obnoxious if you're constantly dropping the name or brand of an item, as it's seen as being done for clout and is unnecessary to your point. I'm replying to you from my Debian desktop; I've used Linux as my operating system for 5 years over multiple devices. This fact isn't relevant to my computer's speed, and I need not tattoo it onto my eyelids.
Furthermore, Linux is not the second coming of Jesus in OS form; it's a mess. Among its cohort from the 90s came much cleaner, leaner and faster systems (QNX, Plan 9, the BSDs, NeXT's OS); Linux just happened to be the system that took off, because its timing got it an initial boost of support. Treat the system as it is, flawed and with issues to be fixed, not as what you wish it would be.
@@cranknlesdesires You seem like a pleasant person.
Of course faster cpus don’t matter to Jon Blow, have you seen his games?
Hi, Tech Ignoramus here. If I understand you correctly, it doesn’t matter if the CPU is 10x faster if your programming is lazy and makes the CPU do 10x the work, correct? Because if this is true then it makes sense why my MacBook Air from 2013 started out fast (on OSX Mavericks) and got progressively slower with Yosemite, Sierra, etc.
Yes
Yup, and the relationship isn't linear either.
It's known as "Andy and Bill's law".
Here's from the Wiki page:
"Statement that new software consumes any increase in computing power that new hardware can provide"
Try installing Linux on it through bootcamp and you'll see that MacBook come back to life 🌱
You can also make your program miss the cache 10x as often, the thing about some of the layers of abstractions being run, is that they move you further away from being able to optimize your program around caches.
100% correct. And for as long as the OO dogma dominates the industry, things will never change.
Not sure what OOP has anything to do with this?
@@vladinosky The way it is taught and employed in most institutes and companies these days is inherently cache unfriendly. Multiple layers of abstraction (lots of virtual function pointer lookups), containers of pointers (for inheritance), frequent memory allocations, "RAII" etc. etc.
@HalibetLector Well we'll agree to disagree on that one. The OO bandwagon is still going strong as far as I can see. Universities are still teaching it and the vast majority of job listings state it as a requirement. Most modern frameworks are OO based and most guides/tutorials for modern OS/mobile apps encourage an OO mindset. A "FP revolution" may be coming, who knows, but it definitely hasn't happened yet! OO still dominates.
@@higgins007 I read so much about OO hatred, but to be very honest I never was convinced by the rant. I never see a clear demonstration of why that is systematically worse than any other paradigm. Yes you can make horribly slow OO by using large vtables, cache inefficient code and stupid class hierarchy but this is not a fate. Modern C++ surely doesn't help with designing faster code but again it's up to how you use it. Like in martial arts, it's not the style that's bad, it's the practitioner. The more freedom of movement, the more mistakes. It's statistics. Also, developing device drivers or high performance code using any exclusively FP languages does not look very appealing to me. But I'm willing to be proven wrong.
@@vladinosky I'm talking specifically about the style of OO that is prevalent in education institutions and companies. If there is a nice, sleek, efficient, cache-friendly way to program in an OO style then that's fine, but that is not the prevalent style taught, used and encouraged by most platform frameworks and api's.
The hardware designers at Apple are probably aware of the same fact, which is why the M1 barely consumes any power. Why focus on getting the most performance out of the design if it's going to be wasted on the programmers, many of which are probably in the same building, when there's the easy consumer-facing win of extended battery life?
If the CPU is 40% faster Apple will add 40% more intensive animations for opening folders.
@@HairyPixels There are no folder animations what are you talking about
"which is why the M1 barely consumes any power"
There have been mobile chips which barely consume any power, FOREVER. Literally since the goddamn 180nm process node, in 1998, there were 5w mobile chips. With modern flash storage they edit text, open games, and browse the (old) internet, just as swiftly as the m1.
You are completely, COMPLETELY, missing his point.
@@OmniscientOCE There literally are.
I bet you they don't bother one bit about that, because they are not Jonathan Blow and they don't live according to a single obsession (manner of speaking). Do you realize he could have talked about the chip in other terms, but made it about the same thing he constantly talks about? I sound like I'm hating on him, but he's actually one of my favorite people. Also, I believe he's going to be successful on his mission with Jai (programming language).
I boosted my productivity by making my pointer larger (System Preferences - search for Mouse - Pointer size Accessibility) and making the Pointer Outline Color BLACK and the Pointer Fill Colour RED. That's right RED. Now I don't spend ages trying to find it on my 6K display.
This would only be true if there were literally zero software that is highly optimized.
Things like launching the file browser are not going to get any faster, because the industry designs those to some sort of acceptable user experience threshold.
They look at what typical hardware people are running, and then decide on thresholds like "< three seconds to launch file explorer", and then bloat it out as much as they want as long as it launches within three seconds on the typical hardware. The slowdown multiplier on that task is millions of times slower than it "could" be on that hardware, but they don't care because their goal is user expectations, not "how fast it theoretically could be".
The simpler the task, the more bloated the slowdown multiplier, because the perception is that the average user will accept things taking a second or two.
As you get to more complex tasks, tasks that genuinely would take multiple minutes even when theoretically optimized, the importance the industry places on hitting those optimal times gets higher. And as you go into constrained environments like video games, where speeds of sub-milliseconds are necessary to achieve 120 FPS, performance optimization is more important.
If you can make a 4K HEVC rendering pipeline twice as fast, then the industry WILL do that, because that means saving the user hours waiting for renders. If you can make opening the file explorer 100 times faster, the industry won't bother, because it only saves the user a second or two.
Yes, the only solution to this is to force people to stop buying new hardware, thus forcing new software to be optimised, or at least to not get more bloated.
Andy giveth, Bill taketh away
It's more like JS devs taketh away nowadays lol
i accidentally formatted my 15 year old mac recently, and it returned to the operating system it shipped with rather than the latest os. it went from extremely laggy, with fans screaming for every click/scroll/video-play, to being as fast as my new M1 and never hearing the fan. there is almost 0 difference in functionality...
The major difference that you will experience is an unsupported OS with web browsing that is out of date and most likely running with many security vulnerabilities.
He is right; macOS has already been shown to be slower on M1 than Linux. So even Apple has these kinds of developers, and then they want to lock down the system. Apple's own argument for keeping iOS/iPadOS locked down is that it would fill with malware very quickly.
Finally - someone who really understands programming!
Jon becoming a suckless software fanboy over here.
His point is honestly very depressing.
You've only now realized how depressing the things he talks about are?
It's quite inspiring. I like how he speaks with "ElonMuskean" type of madness seasoned with some emotions. That inspires me to push my cerebral limits further. When he says that you can do it two magnitudes faster starting right now, he sets new goals to reach.
These optimizations are really not that far away. I pondered and read about some old and modern optimization techniques and you can achieve quite a massive performance boost with some thinking applied. Read or watch video about SIMDJSON, it's exactly one of the things Jonathan talks about.
@@KETHERCORTEX no no, don't even use json to begin with. That is the point.
@@KETHERCORTEX I feel like I need to elaborate on my previous comment.
Jon's point is not that people should optimize (let alone micro optimize) the hell out of their code.
His point is just to not write code that is obviously going to be slow (not to mention very slow!).
SIMDJson is an example of what I would consider micro optimization.
The real optimization would be to use a data exchange format that does not cause massive waste in RAM and CPU cycles. Serialize your data in binary and attach some tag to indicate type and version if necessary.
We can be 100 times faster only when they release m69 cpu.
Fucking ironic that I was using my M1 Macbook as I started to watch this video and it crashed then restarted (panic). One coincidence in a million.
it's an SoC not a CPU and yes, if coders could code like they did back in the 1980s
I wish I knew how to write my own Software
@championchap Computer: gotcha, "crash randomly".
It's not particularly difficult.
Just start to learn programming.
Makes me think, in a sci fi scenario (think something similar to the tv series revolution) if an unknown force all over the world suddenly downgraded our hardware, then we would get better at software.
Jonathan is such a cool dude to listen to. Any other experienced people talking about programming I should check out ?
John Carmack has some interesting talks
Casey Muratori is my favorite, because of Handmade Hero, ComputerEnhance, his API-design talk, the invention of IMGUI and his walking sim algorithm. And if you haven't seen Mike Acton's 2014 talk that was also great
My i5 3.0Ghz Linux desktop starts up and shuts down faster than an M1 Mac.
Absolutely L take. No one cares about performance. It's the efficiency.
Lol the end is so perfect. "Where do you download the fast software?" Exactly. Jon said we're not trading speed for productivity? Well that's what I do when i download software instead of writing it.
So if the program can be x100 faster and m1 chip is x40 times faster then it could be x4000 times faster with both!
x1.4 times faster, not x40
It amazes me just how utterly disposable technology is simply due to pushing out more desirable and powerful hardware specs.
I'm curious what would happen if some zillionaire gave John whatever money was necessary to start the perfect software company. Would he take it? What would he create?
Same argument with memory consumption and web development XD websites are so bloated with useless tracker software and shit that they load so slowly, whilst it's completely unnecessary.
Yeap. You can bet they will kill that performance gain bitching around with abstraction layers in OOP.
But it’s also more efficient….draws way less power, less heat, etc. You should use one.
Actually most programmers don't really care to make it faster.
It's not new, the pi did this years ago. It's just that apple has the software and hardware to make it work with specialization
You mean...I can download more RAM?
"I'm sure M2 will be even faster" well that aged worse.
I'm worried he's going to go full Terry Davis.
It's correct to say that it's easier and cheaper to just buy the hardware than to rewrite bad code to make it faster. Yes, and it seems to be getting harder to do that with every passing day. It's not really the coders; the motivations are just not put in place.
> It's not really the coders the motivations are just not put in place.
A.K.A, get fired for doing anything other than the status-quo.
I have 32gb of ram on my machine and it runs out of memory very frequently, for the same reasons. People spam with garbage electron apps.
Andy and Bill's law · New software consumes any increase in computing power that new hardware can provide.
Fast CPUs do matter though. Not in the way most people think, but even if the lazy and incompetent programmers make their programs more sloppily because computer hardware is so fast, they do not wait for you to buy a new computer, they do it when the faster CPUs are out. That means that if you DON'T buy a new M1 laptop, your old one will feel slower and slower as programming adjusts to the new "fast".
Personally, I bought my M1 Air for other reasons: lighter, longer battery life, cooler, noiseless. My old 13" Pro was noisy, heavy and hot, and would not last a whole day.
We’re optimizing the wrong thing.
Who's we? Hardware engineers aren't making websites, "we" isn't a defined group of people that either do one thing or another, thinking of Humanity as a defined group of people that should somehow align their goals is naive and, frankly, juvenile.
Blow is caught in the Nirvana fallacy. Apple made the M1 to solve their business problem; it made their new laptops better by a significant margin in the things that matter: lightweight, much better battery life, excellent performance, far fewer heating issues. The new Air doesn't even need fans, which means fewer moving parts and less noise. It just transformed their laptop into a whole better product.
Blow is completely misguided on that, and the fans are eating the soup as always, and to think he's the one criticizing people's lack of critical thinking...
kinda agree with him on this one. since i ditched windows, my potato laptop has been running 10x better with a minimal arch install BTW. i don't know if this is valid input or not, but still, my laptop with an 8th gen i3 cpu couldn't handle running windows without turning into a turbine
I didn't notice any difference and I'm using an ancient i7 cpu.
Anyone who's taken algorithms or data structures should know this; it's basically explained in chapter 1 of most related texts.
Jon is such an archetypal grumpy old man, it sometimes makes me laugh! someone should ask him about Tik Tok ;)
Based
M1 is interesting in two cases. Specific bottlenecks, which only really matter for highly constrained tasks like compilation or video rendering. And Battery life. Well, AMD Zen 3 is honestly going to be nearly as good for compilation and in many cases better, especially performance for cost. How many top of the line M1s will actually be out there? Not many, and at a cost you should be embarrassed by.
Could you link the full streams with clips?
All I know is that video rendering is dramatically faster on dramatically faster hardware. Jon’s take here is misplaced.
Unfortunately you have to write the fast software😂
Found a small rant on Knowledge Transfer and apprenticeship at scale: ruclips.net/video/KWakJrZX7uU/видео.htmlm52s
Faster alone means nothing; the same or better performance using less power is something we are all interested in, because we can have a lighter laptop, or a laptop with batteries lasting longer, and less noise when using them because of less heating.
True enough, but the question was asked by a mostly-programmer audience to an established programmer, so contextually it's easy to see why Jon's response was geared in that direction. Moreover, the very fact that this video exists means it has something interesting to say, rather than Jon merely stating "yeah I think it's great because it extends battery life" (yada yada yada), which isn't useful or interesting to anyone.
sure but if its 10x faster then it'll also run on my 6 watt laptop from five years ago with no issues
He seems... angrier than usual.
I remember this stream, he was kind of annoyed at chat throughout the whole thing
Funny because this is almost exactly what I think about Nvidia's RTX 3000 series. Who cares if it's 40% faster than my current GPU? Game developers will just find new ways to make it slow as always.
to me, the biggest point on M1 is the lower energy consumption.
you spend a longer time waiting on a task to finish, staring at a bright screen, which is what consumes the majority of power anyway. Apple fantard.
It's also ARM ig that's the big thing
This glosses over how important advances in chip designs are.
true
Speed doesn't matter but the efficiency of the processor does.
I really have much respect for Jon even though I don't like Braid or The Witness at all, and he banned me for saying "It's interesting how one can work on a compiler while dealing with all kinds of stupid questions from twitch chat every day." His skill is remarkable, but in my opinion he is always focusing on the wrong thing.
What do you mean by “he is always focusing on the wrong thing”? Maybe you could provide an example? I always thought the opposite.
Yeah, it's real easy to just say "X is wrong".
But don't a lot of game programmers, if not general software developers, spend excessive time getting the maximum from the hardware? To the point of going to very low level stuff? So aren't a lot of games good reflections of the hardware being maxed out? Or is he referring to all other types of software (photoshop etc.)
He is referring to all other types of software
The majority of programmers (and their managers) don't care about performance.
Most will respond with "use hashmap/dictionary!" to a talk about performance issues in some part of the codebase. It's especially funny when hashmap is the root of the performance issue in a particular case.
Jon have PC building advice?
remember that this is a fan-run channel, not Jonathan himself.
@@AshnSilvercorp same.
but if you ever see/hear one...
@@jayfolk I mean, my advice would be buy what you need that works for what you need without wasting money. Most would tell you to build your own. I bought something flexible. So I would guess, make sure the motherboard of the PC you buy can last a few CPU and GPU generations, that the CPU doesn't run hot, and that the power supply is outputting correct voltage, not underpowered voltage.
@@AshnSilvercorp Ok, thanks. And thanks for the videos.
@@jayfolk I don't run the channel. I just found this channel yesterday.
Not all software is bad. There have obviously been a lot of downgrades in certain programs that utilize web as their UI because it takes a lot more memory than it really needs and uses more CPU cycles on very simple tasks. However, there have been a lot of advancements in computing that warrant more speed, like VR, ray tracing, machine learning etc, so it's not all bad.
The faster the CPU gets, the more it creates opportunities (related to total compute power) for non-optimized, large scale, large feature applications. Optimization is a long term process; it can't compete with the velocity of increasing compute power. Can we force all the devs to limit their compute power to creating efficient systems? Unrealistic. But rewriting some of the components may help. (Starting from the web :))
Ok
The main selling point for me is m1's energy efficiency and the fact that the exact same performance is availalbe on battery. The performance gains are also visible during daily tasks (JavaScript performance is great).
Last but not least, at least Apple itself works quite heavily on optimizing their own apps for the M1, and the performance gains are even more visible there.
Completely out there, but on efficiency: there was an exhaustive university study a year or two ago on programming language efficiency. They took 20 or so programming languages and wrote several programs for each that output the same thing, to get a well-rounded sample size to compare them all. Then they ran all the programs and measured power draw at the outlet, how much memory each program took to run, and the time it took to run. A peek behind the curtain: JavaScript is horribly inefficient. Like, bad. On every level. It's slow, it uses too much memory, and the power draw is way more than it should be. It's really really bad. It's possible to get four times the power efficiency just by not writing things in JavaScript. Speed is multiplied by six. Memory usage is also 6x. Want better battery efficiency? Don't write anything in JavaScript. That, all by itself, makes any gains the M1 has irrelevant. Just stop coding in that language and pick one that doesn't suck. Problem solved.
@@halycon404 your post is kinda irrelevant.
I'm seeing similar gains in compiling Go or Rust (and Rust compilation is in general kinda slow).
And js can be in fact pretty fast with all the v8's trickery.
It's usually a matter of picking a proper tool for the job.
But I agree that js is overused nowadays.
@@just_ppe It's not irrelevant. If you want battery life, don't write programs in languages that use more electricity to do the same thing. This is not an irrelevant thing. It's common sense. If it takes four times the electricity to do something in JS than C/C++ or Rust, twice the electricity of Java, and I'm really worried about my battery life, then I shouldn't ever be writing anything in JS. Your very first sentence is, and I quote, "The main selling point for me is m1's energy efficiency and the fact that the exact same performance is availalbe on battery.". JS is a very bad language to be writing in if you want those things. Less power draw, less memory usage, and faster. All from not using JS. JS was bad in the 90s, I avoided it at all costs then. It didn't get better, computers just got faster to hide how bad it is.
The question was deliberately left open ended but I think Jonathan's take was incomplete. The standout aspects of Apple silicon is that those CPUs are very energy efficient. That results in incredible battery life. And you can actually use the laptop.. on your lap. It won't overheat.
100x faster?? Uhh, no. If everyone suddenly became John Carmack and all code magically became gorgeously concise C, no. There's bad code, but compilers do a tremendous job these days at extracting the best performance on specific CPU architectures, and x86 has so damn many of them that there's no way to optimize for all of them.
The reason Apple Silicon is so great is that it eliminates that f****ing bullshit. ARM64 is a clean, concise architecture. No 30 years of processor extensions to accelerate certain media handling crap by 25%. Put that stuff in discrete autonomous cores that CAN work 100x (10000%) faster than the CPU can at simultaneously decoding, color grading, and encoding 60fps 4K HEVC with virtually ZERO CPU overhead.
100x is extremely conservative. I think 10,000x is closer to accurate. Most developers don't know how to profile their code properly, and I can't instruct you on this forum. But avg CPU: 3.6GHz × 6 cores × 2HT/core × 8 SIMD × 2 IPC = 691B effective instructions per second. And e.g. if division is a primary need in the algorithm, you can conservatively divide that by 10. So with a little knowledge of what it takes to build the inspected software, coupled with basic arithmetic, you can get a ballpark estimate of out how far off base the software is in a few minutes.
@@Muskar2 That's probably true. Wow, I wrote that comment 3 years ago, and today Apple still seemingly hasn't put much effort into showcasing the considerable power of Apple Silicon's neural engine in tools like Final Cut or Logic. It's weird that it's users like the creator of llama.cpp who show what it's capable of, like running LLMs.
At least it runs JavaScript particularly well...
Yet he runs Windows, a closed-source proprietary OS where speed is not a focus; slow software leads to new hardware purchases, which sells more copies.
He is a game dev, and games mainly target Windows, so that's the only reason he uses it.
I noticed you have your taskbar on the side, like any competent person would since 16:9 screens came out
He completely misses the point of why the M1 is such a big deal. It's not performance but power efficiency. It barely needs a fan and runs off battery, all while *delivering* performance that is desktop-class. For the average person, because of the leap in battery life and better acoustics, while not *sacrificing* performance, the user experience becomes significantly better.
You completely missed his point then
@@psyberpirate i did not
Desktop class? We have different understandings of desktop-class CPUs and CPU architecture, then... Good luck lying to yourself.
@@Borgilian by desktop class i mean performant enough for the average person to use in a desktop environment
@@powerplayer75 "Average" person doesn't need that much processing power anyways
Have to disagree here. It’s not about how fast your software runs today but the new possibilities that are unlocked as processors get faster and can be embedded in smaller and smaller devices.
I'd like to take this opportunity to send waves of ehh let's say appreciation to front-end and mobile developers.
(Partially) front-end dev here, please send help. My JS development runtime literally crashes every few minutes because it runs out of memory, and then it takes another minute to restart. Not sure if this is real life or I'm in hell's IT department.
You're missing the point of the M1. It's ARM-based, and it's significantly faster, more power efficient, and runs cooler than anything we've seen. It doesn't sacrifice performance doing any of these jobs.
Your overused talking point about software needing to be optimised more is utter nonsense when you look at the applications that benefit the most from the M1: content-creation apps. These apps are heavily optimised and they're always improving.
If you just think about what you're saying for even a second you'll see how silly it is. Software is not getting slower as hardware improves in any way that is significant to anybody.
I've been watching your videos, and you probably appeal to a subset of programmers who love to feel like they're the next John Carmacks of our generation. In reality you repeat the same nonsense. Get a grip on the real world.
You miss the point of his point.
@@oleksandrnahnybida3167 Care to show me.
@@noahfletcher3019 nah sorry, there are a lot of John’s talks which explain his point of view much better then I can possibly explain in a comment.
@@oleksandrnahnybida3167 ok
If you think content creation apps are fast then you have no clue what you're talking about. I have to use Premiere Pro from like 10 years ago so that it doesn't crash and actually runs faster on my machine. PhotoShop's loading speed on any machine speaks for itself. You think these are *optimized*? What planet are you living on?
Well, if programmers aren't going to improve with their software then the hardware (CPU) can make up for it.
The CPU manufacturers can make up for "100x" slowdown? No, they clearly can't.
Sam seems not to have heard of Wirth's law.
The point he's trying to make is that computers have been very fast for a while, and that with faster cpus, we're only moving the bar for what is acceptable in terms of inefficiency. So from that perspective a faster cpu doesn't matter much.
But (and that's where I disagree) it actually is meaningful in many other ways. And I guess it's not that the M1 itself matters, it's that progress in chip design matters and benefits us. Go ask machine learning researchers and engineers if it matters, for example... Also, faster compilation times (granted many languages are designed in such a way that they're relatively slow to compile, not that I know anything about language design), faster graphics rendering, etc.
Machine learning chips use a rediscovery of analog computing to accelerate tasks.
Yes, GPUs run highly parallelized code, but those analog AI accelerator chips are a lot more power efficient for the performance they provide.
"Who cares if M1 is twice as fast?" Anybody doing CPU bound tasks. The BS rant about how software is inefficient doesn't mean anything, since that's what we're stuck with, so improving the CPUs is still a win. Running the same inefficient software N% faster, means your tasks end faster. If Jonathan is so much better than those other programmers he mocks, let him give us his "efficient" software. He can't even finish his programming language...
Wow, if it's up to this guy, we should all basically stop innovating and chill.
Somewhat on the hardware side: it seems he wants high-level programmers to learn more assembly.
I've always thought assembly was bad (mind you, I don't program to create general programs, just games; I'm not that skilled), but some people I've been listening to recently have been making the good point that knowing machine-level code is important for creating good base programs for operating systems. The fewer people who know that, the more we will be dependent on the systems we have currently, as no one will be able to develop for possible future hardware competitors.
Of course he appreciates innovation and improvements to processors. However, his point was that these benefits are nullified if the programs that run on these processors continue to be written poorly.
I think he's just hating because he doesn't make AAA engines, where higher-end simulation is important. The absence of a fan and the overall performance/power characteristics are the most impressive things about the M1 chip.
Software developer thinks other developers' software is crap and therefore other engineers' hardware is stupid and pointless. Got it.
haven't you noticed how slow and shitty software is getting since the web took over?
please never become a developer
there's enough cancerous excuses for hiding mediocrity in the field already
@@La0bouchere Sorry, already a developer. I maintain kernels for embedded devices which have less than 0.1% of the CPU power of your smartphone. If you read my original post again, you might realise that I didn't disagree with Blow about the mediocrity of other programmers, only his implied derision of hardware developers.
@@sjwright2 Why do you interpret his rant as deriding the efforts of hardware engineers as "stupid and pointless"?
My read is that those hardware efforts are used as an excuse by software engineers to not put in their own effort to make efficient use of that hardware.
It's a stab specifically at SW people for not being as good as they could be.
He is a programmer trying to direct the attention of programmers back to their programming.
@@sjwright2 You do know he is talking about the software developers who take the opposite of the correct direction that hardware devs take, right?
The fact that Blow called the M1 chip a CPU rather than an SOC means that we can disregard this rant as irrelevant. The CPU isn't even the most interesting part about the M1. How about that GPU, bringing mid-tier Nvidia/AMD graphics performance into thin-and-light with all day battery life? How about a powerful machine learning co-processor becoming a standard component on millions of computers?
His point is that, however you regard the M1, the architectural, technological improvements are going to be wasted on current software practices. His point is not that you're wrong, but that you could be 100x more right than you currently are.
Speaking of ML, ever heard of TensorFire? The power is just sitting there; most are not using it.
The fact that you bring up a linguistic technicality instead of actually addressing the point being made in the video means we can disregard that comment as irrelevant. ;)
The point Jon made was about the speed of computers in general and the fact that it is being wasted on running bloatware.
Actually, according to him, almost anything new that happens on the hardware side can be ignored if software continues to bog us down. Not efficiency, not a new ISA, not the unified SoC and memory design, nothing: no hardware advancement would save us if software standards are going to waste all of it anyway.
Of course, you can be happy about the SoC despite how its design would be the end of repairability, upgradability, and privacy (because you don't control the bootloader, even more so than on x86), but not everyone has to be.
I understand the point Jon was trying to make and it's a perfectly good one, except that nobody, including him, has proposed a way of running software "100 times faster" while maintaining the many conveniences that the "bloat" affords us. As arguments go, it's of the "hey, look at this intractable problem over here" variety.
What's more likely: developers around the world (re)writing 40% better code, or CPUs getting 40% faster?
Yes, CPU speed matters. Never heard this guy before, not interested in hearing him again lol.
What will we do when there is no room for improvement on the hardware side? (Which is close.)
Yeah, and because of people like you with this shitty mindset and programming skills, the web is now a clusterfuck of garbage, and it's becoming slower and slower with time, no matter how fast hardware is; you already need an actual supercomputer to run a browser smoothly.
I really don't get this rant 😂, the M1 will literally make his compiler and games faster on Macs
I think he’s saying hardware guys give us increasingly fast processors, while software guys keep giving us increasingly slower apps. So in a way it doesn’t matter how fast the M1 is; the software will run like crap anyway. Also that your current computer could run “100x” faster if software wasn’t crap.
He's advocating for adapting code to the hardware. If you write code that doesn't behave properly for the hardware it's on, it will still work, but it will take ages to do so.
I'm still a beginner, and I'm really just seeking to understand code from a game standpoint. I won't even get to Jon's level, so I'm stuck using others' base code.
From what I'm seeing, programming works somewhat like this:
1. CPU architecture and design
2. Assembly language programming
3. Compiler systems
4. High level programming
5. Desired program
Jon is making the point that if the lower-level layers are inefficient, it all adds up in the end-result program. We can improve processor speeds as much as we want, but if the programming efficiency is poor, you can make an ultrafast computer run like it's from 1995.
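As a rough illustration of the "same hardware, wildly different speed" argument above, here is a small Python sketch. The data layout and ratio are purely illustrative (timings vary by machine), but both runs compute the identical result on the identical hardware:

```python
import array
import time

N = 1_000_000

# "Bloated" layout: every value boxed inside its own dictionary,
# summed one hash lookup at a time in interpreted bytecode.
records = [{"value": i} for i in range(N)]

t0 = time.perf_counter()
total_slow = 0
for r in records:
    total_slow += r["value"]
slow = time.perf_counter() - t0

# Leaner layout: the same numbers stored contiguously, summed by a
# builtin whose loop runs in optimized native code.
values = array.array("q", range(N))

t0 = time.perf_counter()
total_fast = sum(values)
fast = time.perf_counter() - t0

assert total_slow == total_fast  # identical answer, very different cost
print(f"bloated layout is roughly {slow / fast:.1f}x slower on this run")
```

In lower-level languages the gap widens further, since memory layout directly drives cache behavior; the point is that nothing about the CPU changed between the two runs.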
@@ybbor_exe yeah I get that point, but faster hardware gives more room for improvement for those devs that actually care about performance
@@fleaspoon Oh definitely! Unfortunately I think that’s the minority not the majority of devs and maybe that’s the problem.
@@fleaspoon Devs often don't get the time to care about performance. You gotta be fast, so you take some frameworks that use the hottest features and glue them together and hope for the best. And a week after the original deadline the result is a bloated abomination that almost developed a conscience by itself. But the clients don't give a shit, they just look at it and say "ew, that button has the wrong color and doesn't jump all over the screen when you move the mouse".
He's wrong. Tell him to downgrade his CPU to a model from 2010 and see how fun it is to develop games on it.
- hes one of the few programmers that actually knows how to use the processor
- game programmers aren't shitters
- performance actually matters for games
- all other software that I run is basically as fast as it was in 2010, which is his point
- MS office is actually slower than it was in 2003
rip
@@La0bouchere "One of the few", meaning that we should only have a few people making games? It's not easy nor economical to train everyone to be like him.
@powerplayer75 Do you have a problem, of the mental kind, that prohibits you from reading text and understanding it? Because I see most people nowadays have the same issue. You people are incapable of understanding context. It's like most of you are just badly programmed NPCs. The guy clearly said that Jon is one of the few people WHO'VE GOT THE EXPERIENCE NECESSARY TO PROPERLY USE THEIR PROCESSOR. He never said anything about having just a couple of people like Jon in the world. JESUS, YOU PEOPLE MAKE ME SO FKING ANGRY SOMETIMES! It's like most of you are the result of long generations of inbreeding.
@@Borgilian OP challenged Blow to develop games on a 2010 processor. La0 responded by making the point that he is one of the few people who know how to use the processor, implying that it won't be a good experience for the majority of developers. The topic of this argument isn't exclusively about Blow, and if you think it was, then YOU can't understand context. It's about developers as a whole. OP asking Blow to do it was just a stylistic choice to make an argument.
@@Borgilian Calm down. No one gives a shit about online opinions.
Chill.
Says the person who spends years making shit his own way.