Programmers don't know how fast their computers are

  • Published: 19 Nov 2024

Comments • 211

  • @DFsdf3443d
    @DFsdf3443d 4 years ago +321

    my webapp is actually bottlenecked by the speed of light

    • @metalstream
      @metalstream 4 years ago +40

      your name bottlenecks the youtube comment section

    • @ypucandeleteit
      @ypucandeleteit 4 years ago

      I saw what you did there :D

    • @zacharyjones729
      @zacharyjones729 4 years ago

      So... Satellite internet?

    • @jonaskoelker
      @jonaskoelker 3 years ago +3

      Real programmers think of the speed of light in centimeters per clock cycle ;-)
      (10cm per 3 GHz cycle)
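
      For the record, the arithmetic behind the joke checks out:

        c / f = (3×10^10 cm/s) / (3×10^9 cycles/s) = 10 cm per cycle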

  • @drygordspellweaver8761
    @drygordspellweaver8761 2 years ago +30

    I opened a 420,000-line text file with Windows Notepad and it took 20 seconds to load. Sublime loaded it instantly.

    • @brianviktor8212
      @brianviktor8212 7 months ago +5

      That's what a difference between good code and bad code can do. Imagine you are a human with a functional body. You want to transport water to the village, and there is a river 5km away. Options:
      1. Every day you spend 2 hours walking and maybe 5 minutes filling up your pouch so you can have some water.
      2. You bring a wheelbarrow with barrels, and you only have to do this once every 1 or 2 weeks.
      3. You create a mechanical system to relay the water source to your place/village, so that all you have to do is turn on the tap.
      Better computers do the following: For 1. it increases the amount of people doing that. For 2. it increases the wheelbarrow and barrel count. For 3. it makes the water go faster through the pipes.
      How well its algorithms are optimized largely determines how well software performs, and the gains can be orders of magnitude. Just something as simple as dictionaries can make a huge difference. Using a database too. Optimizing communication. Imagine you send your object position in a game only once every 30 seconds instead of 10 times per second. Imagine you do not send JSON text, but a concise set of bytes.
      New hardware can do what, +50%, +100%, even +200% performance? Better algorithms can do +10000%.
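
      (A minimal C++ sketch of that bytes-instead-of-JSON point, with hypothetical field names; it assumes both ends agree on the layout:)

        #include <cstdint>
        #include <cstring>

        // As JSON text a position update might be ~40 bytes and needs parsing:
        //   {"id":1234,"x":10.5,"y":-3.25,"z":0.0}
        // As a packed struct it is 16 bytes and needs none.
        struct PosUpdate {
            std::uint32_t id;
            float x, y, z;
        };

        std::size_t encode(const PosUpdate& p, char* out) {
            std::memcpy(out, &p, sizeof p);  // assumes same endianness on both ends
            return sizeof p;
        }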

    • @plvr_strg
      @plvr_strg 3 months ago +1

      I still can't close 2 binary files that I opened in the Win11 notepad because when I try to do it the program just freezes.
      They added tabs for Notepad in Win11 but apparently old Notepad wasn't optimized for multiple tabs.

  • @AftercastGames
    @AftercastGames 1 year ago +20

    Another eye-opening experience I had several years ago: working for a company that still used a mainframe-style machine to store and calculate all of their pricing information, we used a 3rd-party terminal program that ran in Windows and provided a programmatic API to essentially send keys and read screen characters. Paging through thousands of records, with the text-based screen updating for each record, the updates were so fast that you could barely read them and register that they had changed before the next record displayed. All across the network. It had to be sub-20ms response times.
    This was 10 years ago, when requesting an HTML page from IIS running on my local machine and displaying it took probably 100ms.
    When it comes to performance, modern developers have no clue what real performance looks like, or the effort that it requires.

  • @kiwec
    @kiwec 4 years ago +165

    Ah, the weekly "personal supercomputers" rant. Never gets old.

    • @deneb112
      @deneb112 4 years ago +37

      The "old man yells at cloud" of system programming, to be sure. Not to call the ageless Jon old, of course...

    • @forasago
      @forasago 4 years ago +15

      @@deneb112 I hear his 20th birthday is coming up.

    • @XenonG
      @XenonG 4 years ago +4

      @@forasago Age 20 in immortan Jon years of course

    • @StevenOBrien
      @StevenOBrien 3 years ago +3

      @@deneb112 Jon isn't old, he's just ahead of the times.

  • @KilgoreTroutAsf
    @KilgoreTroutAsf 3 years ago +26

    I've worked with HPC software, where you regularly try to squeeze 80% of peak floating-point performance and move TB of data around.
    I sincerely don't understand how running a shitty app or loading a website can take longer than simulating an aircraft turbine or a protein ligand interaction.

    • @rabbitcreative
      @rabbitcreative 2 years ago +18

      Frameworks, frameworks everywhere.

  • @ramennnoodle
    @ramennnoodle 4 years ago +105

    This is the exact problem I have with a lot of programming nowadays, especially for software. The absurd speed of even our slowest computers allows programmers to be lazy and write horrifically unoptimized code. I find a ton of people who code in Python to be especially guilty of this, since Python has a lot of neat tricks for quickly writing a solution to a problem, but those solutions are often really inefficient. As someone who does competitive programming, it's mind-boggling how these people don't understand how much bad code can actually slow things down.

    • @lupsik1
      @lupsik1 4 years ago +7

      See, imho there's always an „it depends”.
      What I mean is, in many applications you're perfectly fine doing that. Sometimes it turns out the slowest 1% of your customers' PCs still leave you with a 100x overhead despite a terribly unoptimised app.
      And most optimisations will increase cost, because of more hours put into writing the app, and can lead to less readability.
      So you go with easily readable, non-optimised code and import whole modules that you will use 2-3 functions from, because that's what makes the most sense financially for the company.

    • @carlossegura403
      @carlossegura403 3 years ago +11

      People are the problem, not the tool. I've seen horrible unoptimized code in all languages.

    • @lubricustheslippery5028
      @lubricustheslippery5028 3 years ago +2

      Python is great for making tools you rarely use. Then your dev time is way longer than your run time. You can even let your work computer run overnight.

    • @Avengerie
      @Avengerie 3 years ago +3

      @@carlossegura403 some languages make it very easy though. In C# or Java you'd need to do it on purpose.

    • @mettaursp309
      @mettaursp309 3 years ago +9

      ​@@lupsik1 Depends on what your specific use case is and what your idea of an optimization is. The kinds of optimizations that affect readability in any kind of significant way are usually the last ones you would do. No one is asking for bit fiddling hacks or Duff's device type of garbage that does significantly impact readability when there are major optimizations to be had by just being more conscientious about using an n log(n) algorithm vs an n^2, caching data instead of recomputing/reallocating it, or using a vector based approach over a linked list and removing redundant data so the whole container fits in cache more easily. A large chunk of the biggest optimizations you can make are algorithmic in nature and hardly impact readability at all.
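
      (One C++ rendering of that n log(n)-vs-n^2 point; the faster version is no harder to read:)

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // O(n^2): compare every pair.
        bool hasDupQuadratic(const std::vector<int>& v) {
            for (std::size_t i = 0; i < v.size(); ++i)
                for (std::size_t j = i + 1; j < v.size(); ++j)
                    if (v[i] == v[j]) return true;
            return false;
        }

        // O(n log n): sort a copy, then check neighbours. The copy is also
        // contiguous, so the scan is cache-friendly.
        bool hasDupSorted(std::vector<int> v) {
            std::sort(v.begin(), v.end());
            return std::adjacent_find(v.begin(), v.end()) != v.end();
        }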

  • @njsynthesis
    @njsynthesis 4 years ago +61

    The first topic brought up -- that games are limited by I/O and memory access -- has intrigued me since I first read about Factorio's optimization. The sheer fact that a game like that is possible on nearly any CPU of the past decade is a testament to the statement brought forth here even more than it is great programming.

    • @SkeleTonHammer
      @SkeleTonHammer 4 years ago +36

      Yep. There's also a... I'll say, "toxic" attitude that permeates all of software development, which is that you should never optimize until performance becomes a problem - essentially, wait until your software is slow as molasses from rapid, mostly unplanned coding, then take late action to get the software just barely back to a "passing grade" in performance and then resume.
      Factorio is evidence for why it's good to optimize EVERYTHING wherever you see the opportunity and not just slap your systems together haphazardly and be like "well it works... technically." Work can be distributed, rearranged, a single workload can distribute effects to thousands of receivers, etc. - there's almost always something that can be done to massively improve your code's performance from its original draft. Like, *massively* - by factors of tens or sometimes hundreds.
      Sure, Factorio devs could have said "it already runs fine, it already runs fine, it already runs fine..." until it didn't, then they'd be just barely beating back their frame rates into the "acceptable, but not great" range. Instead, they optimize mercilessly.
      It's INCREDIBLE how fast processors are these days. But now think about how, for example, enemy AI isn't any smarter than it was 15 years ago in your average FPS game. What are we doing with all this extra speed, and these dozen CPU cores? Often using it as an excuse to not optimize and not be clever. The CPUs are faster now, which we use in an abusive way to merely brute force through bad code.

    • @hannessteffenhagen61
      @hannessteffenhagen61 4 years ago +15

      @@SkeleTonHammer Well, basically there's the difference between premature optimisation and premature pessimisation. You're essentially talking about the latter, they're often confused. The advice to avoid premature optimisation is that you shouldn't spend a ton of time optimising a piece of code without even understanding what the performance tradeoffs are. This, on modern machines, essentially requires actually running the code because good luck trying to figure out how nontrivial code is going to perform once compiler, various caches, branch predictor etc had a go at it without actually running it. On the other hand, performance is part of functionality. You should always keep it in mind, and you should never write code you _know_ is going to perform badly just because you think it doesn’t matter.

    • @lubricustheslippery5028
      @lubricustheslippery5028 3 years ago +3

      And Wube have only recently started doing the stuff Jon is talking about! Factorio is mostly coded in an object-oriented way, causing problems with cache misses, and uses Lua as a scripting language for a lot of things. So I think Jon could get into a long rant after looking at the Factorio source. It's still the best game.

    • @lubricustheslippery5028
      @lubricustheslippery5028 3 years ago +1

      @@SkeleTonHammer Would an FPS game be better with smarter enemies? For DOOM they had to revise the AI and let the monsters be stupid and stand in the open instead of taking cover, for it to be fun and feel good to play.
      As for optimization, I think we have to learn what should be done when. Optimization can make the code harder to read and maintain/change. Optimizations can make the code more fragile and hard to change, so whatever can be done later should be done later, when most of the code has solidified. And some types of optimization would need an almost complete rewrite of the code and should be done at the start.

    • @miguelpereira9859
      @miguelpereira9859 2 years ago

      @@lubricustheslippery5028 I think a better point of comparison would be the number of interactive elements on screen, which haven't really gone up much in the past decade

  • @Avengerie
    @Avengerie 3 years ago +13

    I'm bottlenecked by cash.

  • @LagMasterSam
    @LagMasterSam 4 years ago +60

    I started experimenting with making a game that uses a 2048x2048 grid where every square contains 14 floats for status data (234MB). A full update of the grid runs through about 600K source code instructions and involves three passes (propagate, settle, colorize). With 4 threads in a release build, it takes only 55ms to complete all those instructions and go through all that data 3 times on my Ryzen 5 2600.
    You might say, "But 55ms is too long in a game." You're right, and that's why the update is done in chunks in the background every few frames with each chunked update taking less than 1ms. 4 updates per second is what I'm aiming for and that's easily possible, even on a much older CPU.
    So, I'm really confused now by these programs that chug along when they appear to be barely doing anything. I don't understand how it's possible for them to be so slow.
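
    (A rough C++ sketch of that chunking scheme; names and numbers are illustrative, not the commenter's code:)

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      constexpr std::size_t GRID = 2048;
      constexpr std::size_t ROWS_PER_FRAME = 128;   // full pass every 16 frames

      struct Cell { float status[14]; };            // 14 floats per square

      // Stand-in for one propagate/settle/colorize pass over rows [row0, row1).
      void updateRows(std::vector<Cell>& g, std::size_t row0, std::size_t row1) {
          for (std::size_t r = row0; r < row1; ++r)
              for (std::size_t c = 0; c < GRID; ++c)
                  g[r * GRID + c].status[0] *= 0.99f;
      }

      // Called once per frame: advance the background update by one small chunk.
      void onFrame(std::vector<Cell>& g, std::size_t& nextRow) {
          std::size_t end = std::min(nextRow + ROWS_PER_FRAME, GRID);
          updateRows(g, nextRow, end);
          nextRow = (end == GRID) ? 0 : end;        // wrap: start the next pass
      }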

    • @dutchdykefinger
      @dutchdykefinger 4 years ago +9

      i bet a GPU goes through that data a lot faster though :)

    • @dfghj241
      @dfghj241 2 years ago +6

      my experience with unity is this constant question you posed here at the end of your comment.

    • @MidnightSt
      @MidnightSt 2 years ago +2

      @@dfghj241 that's not a unity problem, though. that's the problem of the dumb scripts people wrote into it for their game.
      and I bet that about 70% (at least) of the answer for all the dumb unity projects out there is going to be "GetComponent() calls in update() functions".

    • @KANJICODER
      @KANJICODER 2 years ago +1

      I am actually writing a webgl game where all of the game's data is in a 2048x2048 texture and I push up dirty sections of the texture to the GPU as needed. I don't know much about GPU bandwidth though so hoping that a constant every-frame push of 512x512 won't lag my renderer.
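
      (For scale: a 512x512 RGBA upload is 1 MB, i.e. ~60 MB/s at 60 fps, a small fraction of PCIe bandwidth. A sketch of the dirty-rect upload in desktop GL; WebGL2's texSubImage2D is analogous:)

        #include <GL/gl.h>
        #include <cstddef>
        #include <cstdint>

        // Re-upload only a dirty sub-rectangle of a 2048x2048 RGBA texture.
        void uploadDirtyRect(GLuint tex, const std::uint8_t* cpuCopy,
                             int x, int y, int w, int h) {
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ROW_LENGTH, 2048);  // row stride of the CPU copy
            glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE,
                            cpuCopy + (std::size_t(y) * 2048 + x) * 4);
            glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);     // restore the default
        }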

  • @casperes0912
    @casperes0912 3 years ago +23

    I wrote a compiler recently. I wanted to stress test how fast my compiler's output was (at runtime, not compile time), so I did a big mutually recursive calculation. I thought it was a massively big computation, and I did a bunch of unnecessary work in the compiler, going through the stack a lot more than necessary: I didn't use all the registers, for simplicity, and I pushed all 6 argument-passing registers onto the stack on every function call, even for functions that only take one argument and such. It still finished so fast I couldn't time it. Single-threaded.

    • @etodemerzel2627
      @etodemerzel2627 3 years ago +9

      I'm rewriting a core piece of functionality in my current project. That piece, if given the largest dataset, spends ~30 minutes number-crunching on it. My code does it in less than 90 milliseconds. Yeah, my dead-simple *unoptimized* code is 20,000 times faster. When I joined the project I was told that the code is fast, but the database is slow...

    • @rallokkcaz
      @rallokkcaz 2 years ago

      @@etodemerzel2627 Just want to ask how? What database is it? How much data/how many rows? 30m down to something in the frame of ms seems more like you cached your query the first time you ran it. Sus.
      I literally could not see that being any other case; the fool's gold of DB caching. Try another table or set of params and see how it runs now.

    • @etodemerzel2627
      @etodemerzel2627 2 years ago +2

      @@rallokkcaz The particular piece is fully in-memory. The database has no play in it. The last sentence in my previous comment was about the project in general.

    • @codecaine
      @codecaine 5 months ago

      We are so privileged these days compared to when I was programming in the 90s and early 2000s.

  • @nightyonetwothree
    @nightyonetwothree 4 years ago +15

    programmers don't know how slow my computer is

    • @Kniffel101
      @Kniffel101 3 years ago

      What CPU have you got?

    • @nightyonetwothree
      @nightyonetwothree 3 years ago

      @@Kniffel101 core 2 quad q8400

    • @Kniffel101
      @Kniffel101 3 years ago

      @@nightyonetwothree
      Well okay, that's fast-ish, but not today's kind of fast. ^^

    • @nightyonetwothree
      @nightyonetwothree 3 years ago +1

      @@Kniffel101 yes, it's pretty fast, yet nowadays' software and games can kill it with just a loading screen. Or even just a launcher.

  • @dukereg
    @dukereg 3 years ago +24

    C programmer: "I am allocating new things on the heap each time in this tight loop; that is going to be bad for performance." JavaScript programmer: "Hang on... if I do this, is it allocating somewhere new on the heap each time? I think it depends on the particular version of the particular JavaScript engine being used. I just can't think about this."
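
    (The C programmer's instinct, spelled out as a C++ sketch with hypothetical names: hoist the allocation, reuse the buffer.)

      #include <string>
      #include <vector>

      void process(const std::vector<std::string>& lines) {
          std::string scratch;           // one heap buffer, reused every iteration
          scratch.reserve(4096);
          for (const std::string& line : lines) {
              scratch.assign(line);      // reuses existing capacity: no new allocation
              // ... transform scratch in place ...
          }
          // Anti-pattern: constructing a fresh std::string inside the loop body,
          // which can hit the allocator once per iteration.
      }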

    • @XDarkGreyX
      @XDarkGreyX 3 months ago +1

      Let's be honest, most JS devs don't think about the heap. They may not even know what it is. They may not even use actual JS directly and only layers of abstraction on top of it. I am a noob myself, but it had to be said....

  • @HexapoDD
    @HexapoDD 4 years ago +20

    "We have amazing computers. Anyone giving you excuses for software running like dog shit is not somebody you should learn from". So true, having an excuse is so much easier than refactoring your dog shit :D

  • @doug9000
    @doug9000 4 years ago +17

    If you watched Casey's Handmade Hero, you've seen how much of a beast a processor from years ago still is. When he managed to run 1080p through a bilinear-sampled software renderer, I knew how much power a programmer has in hand if they want it. It's insane!

    • @doug9000
      @doug9000 3 years ago +2

      @Czekot It's just a comment for anyone to read, and the majority of programmers have not watched Handmade Hero, nor do they know how powerful a computer really is.

    • @KANJICODER
      @KANJICODER 2 years ago +3

      @@doug9000 I know of handmade hero. Casey is pretty fucking smart.

  • @lewismassie
    @lewismassie 4 years ago +35

    I'm often shocked at how well old programs ran in almost no space, and it just occurred to me that it's probably because they had almost no extraneous stuff going on

    • @programaths
      @programaths 4 years ago +17

      Guys spent days and months on a single problem to squeeze out as much as they could. It was also the time when you had to consider parallel arrays over structures, so you revise your expectations a bit.
      Today, one can ask you to create a Web-VR app for the end of the week and it would be acceptable.
      I did a project at low level where I had to work with signals on the wire, which means handling trains of 8 bits that signal someone wants to take over, that the next bits may be garbage, etc. The very first "problem" I had to solve wasn't dialoguing with the AC or the fan, but just abstracting that communication away and making the abstraction super efficient. That meant reading pages of specification and finding dirty tricks to save instructions. I have a whiteboard; that's what I used for the biggest parts. Drawing rectangles, numbers and arrows here and there. Trying out stuff on the board. When I coded the solution (with an employee of the client), it was "easy" and totally unintelligible to the employee. But it was so much more efficient that the hardware it connected to couldn't keep up, because our side no longer limited itself! (And so we broke two devices before realizing they were literally burning themselves up trying to send too much, and that the clock should be lowered.) It didn't happen before because the previous solution was so crappy it made the device WAIT ^^
      And then the question: "Are they that stupid or what?" And my reply is simple: "I have been trained to work in industrial computing, where 1 µs can cost an arm or a limb."
      My main job is working on web applications (not websites; heavy applications with a web front-end). There, I am at the millisecond, and when I say to my boss "Ouch, 1ms for the query, that's too slow", he finds that funny. Not realizing where I come from!
      Thing is, we can't work the same way in all contexts.

  • @beragis3
    @beragis3 4 years ago +12

    Finally someone is stating what I have been complaining about for over 30 years in my programming career. It's a combination of many factors. Many programmers not having a computer science or IT background, often entire teams without any computing degree. Far too many projects using the Agile methodology as an excuse not to do any design. A huge reliance on badly written libraries that call other badly written libraries that call other badly written libraries, etc. Use of web services in places they were never meant to be used, or having far too fine-grained services, and then wondering why a webpage is slow when it makes half a dozen web service calls, each of which in turn calls other web services that in turn call other services for more information, when most often the information is already available in one or two places and could be fetched in one or two web service calls, or accessed directly without a web service.

    • @JosifovGjorgi
      @JosifovGjorgi 4 years ago

      Yes, micro-services with network calls, and the explanation is that it's good because of separation of concerns, abstraction and reusability.
      One other thing: web apps are supposed to be stateless. This leads to every request carrying state => bigger routers and switches (instead of 1Gb, you need 100Gb).
      Stateless = RouterFull
      The blame goes to consultants and thought leaders, because they explain the solution without the details of the problem description. Even with a background in IT/CS you can fall for these scams.

    • @wadecodez
      @wadecodez 2 years ago

      Exactly. It's not like we want to write slow code. There is just so much bad code that our options are to write everything from scratch or to write code in a realistic amount of time. Nobody has time for perfection unless you are in a creative industry like game dev.

  • @photonymous
    @photonymous 4 years ago +41

    Amen! And that doesn't even take into account SIMD instructions. This is what has motivated me to try to code with a low-level hardware-oriented mindset over the years. It's frigg'n amazing what a desktop computer can do. They're an embarrassment of riches that we barely take advantage of. As fast as they are, somehow we're still able to get software to run slowly on them. A smart guy once said "Software gets slower faster than hardware gets faster", but I try to make sure that only applies to other people's code ;-)
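
    (To make the SIMD point concrete, a sketch using AVX intrinsics: eight single-precision adds per instruction. A real version needs a runtime CPU-feature check and a -mavx build flag.)

      #include <immintrin.h>
      #include <cstddef>

      float sum(const float* a, std::size_t n) {
          __m256 acc = _mm256_setzero_ps();
          std::size_t i = 0;
          for (; i + 8 <= n; i += 8)                  // 8 floats per iteration
              acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
          float lane[8];
          _mm256_storeu_ps(lane, acc);
          float s = lane[0] + lane[1] + lane[2] + lane[3]
                  + lane[4] + lane[5] + lane[6] + lane[7];
          for (; i < n; ++i) s += a[i];               // scalar tail
          return s;
      }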

    • @azngoku666
      @azngoku666 3 years ago +5

      and what have you made with your hardware-oriented programming?

    • @justadude8716
      @justadude8716 1 year ago +1

      @@azngoku666 a calculator app

  • @bonbon_ann2701
    @bonbon_ann2701 4 years ago +51

    Because programmers today don't care about hardware.

    • @wiktorwektor123
      @wiktorwektor123 4 years ago +14

      Because they learn programming in languages that isolate them from the hardware they run on. Mostly garbage-collected, dynamically typed scripting languages that need to be interpreted down to machine code to be executed. It's a loop of idiocracy, as I call it: we invent and build more powerful hardware just for programmers to apply another layer of isolation so they can "conveniently" write a computation-heavy algorithm in some scripting language, e.g. Python and TensorFlow.

    • @bonbon_ann2701
      @bonbon_ann2701 4 years ago +5

      @@wiktorwektor123 I know, I'm 22 and I had to rewire my brain after what I learned out of textbooks. I've just bought an old Commodore 64 on eBay last month, and I'm having so much fun playing with it in assembly.

    • @wiktorwektor123
      @wiktorwektor123 4 years ago

      @@bonbon_ann2701 I'm learning Rust (a new systems programming language) right now, but I think every developer should at least know C, because almost all hardware running today runs some code written in that language. Drivers and kernels: Windows, macOS, Linux, Android (which uses the Linux kernel), most of it is written in C.
      Rust is very promising as a next step and potential successor to C/C++ in the field of low-level programming, with some very interesting solutions that let it read like a high-level language but without the high-level crap.

    • @josemaria_landa
      @josemaria_landa 4 years ago +2

      @@bonbon_ann2701 im 24 and really interested in computational topology. Honestly i thought Java was too high level lol. having to worry about indexing loops and nodes in a linked list is so 1960s. I still worry about time and space complexity of my algorithms tho, but now i use haskell. its amazing. so clean. besides its nice to see stuff i've studied like category theory implemented into the real world. I did take a course on computer architecture and operating systems in school but honestly aint nobody got time for that lol. at least not me. cheers

    • @programaths
      @programaths 4 years ago +6

      Because programmers no longer work from well-established specifications, but from the last SMS they received during their lunch, while reading more in an e-mail message about the task they wrote on the Kanban board because they were handed a post-it during the morning call about a last-minute feature they were made aware of only the day before, when they overheard a conversation at the coffee machine between sales and inquired about it. Of course, that's the best-case scenario; otherwise you just learn it from the user.

  • @danpearce4547
    @danpearce4547 4 years ago +15

    As a hardware control system engineer, I despair at the fluff there is between high-level languages and hardware, and at the ignorance most programmers have of anything like real-time performance and basic sequential and combinatorial logic.

    • @dutchdykefinger
      @dutchdykefinger 4 years ago

      i wasn't even a programmer when i saw the bloatware the early noughties brought
      computers have become so fast, and RAM and storage have increased even more rampantly; people don't even care about that shit anymore lol
      that said, aren't the security layers in CPU designs also a bit of a problem, slowing context switching and bogging things down?
      i mean, if you're really serious about performance, go ring 0, right?

  • @samuellourenco1050
    @samuellourenco1050 4 years ago +13

    There are some people who assume they know it all just by reading one article. That annoys me. Not only does it reveal a lack of curiosity, it reveals utter stupidity. It is always good to assume one knows nothing and investigate much further. Collect many points of view, if needed.
    A programmer who has no idea how a computer works (including how fast it is) cannot be a good programmer. There are many routines, for example, where it is preferable to compose a document fully in RAM before saving it, rather than compiling and saving it as it goes. Sadly, many choose the second option so as not to allocate more RAM. That is stupid. RAM is cheap in terms of performance, and it is better than reads/writes to disk. On that subject, I like to treat disk I/O as if it were a Chernobyl reactor: go in and out as fast as you can, and do the prep work outside.
    Essentially: open a file, load the entire file to RAM, close it. To save, open the file again, dump the info into it (it must be fully processed at that point), and close it.
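
    (That policy as a C++ sketch: slurp the whole file into RAM, and later write the finished result back in one pass.)

      #include <fstream>
      #include <iterator>
      #include <string>

      std::string loadAll(const char* path) {
          std::ifstream in(path, std::ios::binary);
          return std::string(std::istreambuf_iterator<char>(in), {});
      }

      void saveAll(const char* path, const std::string& data) {
          std::ofstream out(path, std::ios::binary | std::ios::trunc);
          out.write(data.data(), std::streamsize(data.size()));
      }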

  • @BrandNewByxor
    @BrandNewByxor 4 years ago +5

    It's got to be fast! How else are all my windows processes gonna launch at startup?

  • @toffeethedev
    @toffeethedev 4 years ago +116

    "My game development speed is bottlenecked by the time it takes for Visual Studio to open." - Jon Blow probably. Make things sucka

    • @idkidk9204
      @idkidk9204 4 years ago +3

      Yeee, Visual Studio takes like 2 minutes to open, and Unity even longer...

    • @stewartzayat7526
      @stewartzayat7526 3 years ago +3

      This is so true and it's so painful. I needed to very quickly debug a piece of code I was writing as an exercise for school, and I foolishly decided to open it in Visual Studio. I stared at the loading screen for 2 minutes, contemplating just closing it and debugging with printf; I think that would have been faster. It's things like these that make me want to quit programming.

    • @marusdod3685
      @marusdod3685 3 years ago +1

      @@stewartzayat7526 nigga just open up the terminal

    • @sisyphus_strives5463
      @sisyphus_strives5463 2 years ago +2

      @@stewartzayat7526 Use linux and vim instead, you don't have to stick to a single IDE or programming environment

    • @T0m1s
      @T0m1s 2 years ago +6

      @@sisyphus_strives5463 - Linux and vim suck when it comes to debugging. Yes, after a lot of effort and plugins you can set up something a bit more usable than GDB but come on, what is this, the dark ages? Out of the box I want a user friendly environment that will give me the most amount of useful information. And if you rely on plugins, vim offers a very inconsistent experience; when something goes wrong, it's not clear why. Linux and vim are not it. It's an embarrassment to Linux and OSS that it's 2022 and the best IDE for C and C++ is still Visual Studio on Windows.

  • @rabbitcreative
    @rabbitcreative 2 years ago +9

    I benchmarked an HTTP version of 'hello-world' with several configurations. On my 2013 MacBook Air, the best was pure Nginx, with ~12,000 req-per-sec, and the worst was Nginx -> PHP -> Magento with ... 2 req-per-sec. When I bring this up, people seem to only get mad at me. Like I shit in their cereal.

  • @VivekYadav-ds8oz
    @VivekYadav-ds8oz 4 years ago +10

    Software gets lazier as hardware gets better. That's why your new computer in 2020 doesn't feel as much faster as it should compared to the crappy one you used in 2008.

    • @nigeladams8321
      @nigeladams8321 7 months ago

      I think you might be remembering 2008 loading times with rose tinted glasses

  • @bogganalseryd2324
    @bogganalseryd2324 4 years ago +5

    I remember programming the 68K in assembler; we optimized the shit out of our code

    • @dutchdykefinger
      @dutchdykefinger 4 years ago +1

      there's still great lessons to be learned from corner-cutting techniques like raymarching and all kinds of trickery to pull off the stunts they did

    • @iivarimokelainen
      @iivarimokelainen 3 years ago

      And it took years to develop something that takes a day now.

    • @bogganalseryd2324
      @bogganalseryd2324 3 years ago +1

      @@iivarimokelainen we worked as close to our hardware as humanly possible, my first assembler book was "programming 68000", those were the days

    • @lordlucan529
      @lordlucan529 2 years ago

      @@iivarimokelainen not really

  • @captainbanglawala2031
    @captainbanglawala2031 4 years ago +3

    Loading things from disk is probably really fast. Pre-processing verts, anims and all that other jazz takes a long time, and probably involves n^2 algorithms that are buried deep inside code no-one knows about, which is probably the reason GTA loads slowly. No-one can be bothered to profile and optimise that code because they're too afraid of breaking other things. It's also difficult and time-consuming for a developer to do, and that time will probably be spent on bug-fixing or adding some other feature.

    • @mettaursp309
      @mettaursp309 3 years ago

      A good chunk of those kinds of things can often be cached and baked into the file format itself. That kind of data is usually processed once, remains static across runs regardless of when or where it is loaded, and is never touched again by the CPU once it is loaded onto the GPU.
      If they are bottlenecking because they are doing that processing each time the data loads it sounds like insufficient tooling, but I highly doubt that is where the base game's shortcomings are. GTA:O is a buggy, unoptimized mess due to nearly a decade of ongoing development, but the base game was remarkably well made for its time.
      I recall the whole "designing traffic to intentionally slow players down to help with load times" bit coming with the context of the 360/PS3 gen attached and the streaming load times don't really feel that bad on modern machines where they added much faster and more maneuverable vehicles.

  • @mikecole2837
    @mikecole2837 4 years ago +29

    I agree that most programmers will squander hardware resources if given the opportunity. But I'll be honest with you, I don't think it's because "programmers don't know how fast their computers are". Seriously. What professional programmer doesn't understand clock speed and instructions per second? Or Amdahl's law. That's elementary, basic stuff.
    I think the crux of the issue really comes down to two things:
    1) Unrealistic product development cycles that lead to corners being cut.
    2) The dogmas of reusability, modularity, abstraction, and maintainability superseding the importance of efficiency in code. OO programmers will write a call stack 40 layers deep to represent something that could be a single function.
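
    (Since Amdahl's law is invoked: with a parallelizable fraction p of the work sped up by a factor s, the overall speedup is

      S = 1 / ((1 - p) + p/s)

    so e.g. p = 0.9 on 8 cores gives S = 1 / (0.1 + 0.9/8) ≈ 4.7, not 8.)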

    • @tropingreenhorn
      @tropingreenhorn 4 years ago +4

      Yeah, I think there are severe optimization limits to OOP, but the challenge with optimization is that it can cause a spaghettipocalypse. I think ideally a highly optimized code base is totally possible; it is just difficult to do with too many different programmers working on something at the same time.

    • @BlacKHellCaT78
      @BlacKHellCaT78 3 years ago +14

      I used to work at a company where they do web apps. Trust me when I say that the majority of people who can write decent code still have no clue about clock speed and instructions per second. I'd say that's mostly because they've only ever used high-level languages, and the optimization requirements they faced in web apps are much, much more lenient compared to games.

    • @outlander234
      @outlander234 2 years ago +2

      If I were to go just by your comment, it seems like you don't either. Clock speed and IPS are irrelevant; you cannot code better or worse to take advantage of them, they are always there, for better or worse. For example, a piece of unoptimized code runs in 100ms; on a 2x faster CPU it's going to run in 50ms. But optimize that code and it could run 10 times faster on the slower CPU than the non-optimized code on the faster CPU. That's the point. It's all about using cache and threads efficiently. People shy away from parallelization and data-oriented programming because it requires actual knowledge of how CPUs work, so they just stay in the OOP world doing the most basic optimizations.

    • @mikecole2837
      @mikecole2837 2 years ago +2

      @@outlander234 I'm an embedded audio engineer by trade. I have to know how computers work to perform my job...
      I was just saying that ignorance of the inner workings of computers is not the only reason code might be unoptimized. Some people work under time constraints.

    • @rabbitcreative
      @rabbitcreative 2 years ago +2

      > Amdahl's law
      Never heard of it. Been programming for 15+ years. Your perspective is not everyone's perspective.

  • @cinthiacampos8390
    @cinthiacampos8390 4 years ago +11

    programming with a single core pentium processor. soy devs out here are trapped in the matrix

  • @derrenmarcusturner408
    @derrenmarcusturner408 4 years ago +3

    2:56 "That's all I'm gunna say"
    Me: listens intently for 3 whole minutes 🤣

  • @steve16384
    @steve16384 11 months ago +1

    Web devs who think the internet is instant are the scourge of the web

    • @vytah
      @vytah 9 months ago +1

      All webdevs should be forced to test their creations over 500ms ping.

  • @darkoneforce2
    @darkoneforce2 4 years ago +1

    So it's another case of living in a bubble.

  • @0xCAFEF00D
    @0xCAFEF00D 10 months ago

    0:30
    That's funny, because it was bottlenecked by JSON parsing. And in particular by strlen in the parser.
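
    (The pattern, as described in the "How I cut GTA Online loading times by 70%" write-up; a sketch, not Rockstar's code. sscanf in common C runtimes measures its input with strlen on every call, so tokenizing a ~10 MB JSON buffer one value at a time is accidentally O(n^2):)

      #include <cstdio>

      // Called once per JSON token, with p pointing into a ~10 MB buffer:
      int readQty(const char* p) {
          int qty = 0;
          std::sscanf(p, "%d", &qty);  // hidden strlen over the whole rest of the buffer
          return qty;
      }
      // The fix in the write-up: measure the buffer once, then keep tokenizing
      // with pointers instead of re-measuring on every call.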

  • @nicholastheninth
    @nicholastheninth 3 months ago

    I made a 2D physics engine and it’s running in about 8 ms @ 60 Hz with 32,000 circles in a pile, on a Ryzen 7 5900HX.

  • @alpoole2057
    @alpoole2057 4 years ago +4

    Web programmers would improve greatly if they were restricted to ISDN- and modem-era data transfer rates.

    • @ImperatorZed
      @ImperatorZed 2 years ago +2

      That would optimize for file size, not performance. You'd get all kinds of recursive slow as shit hacks.

  • @StefanReich
    @StefanReich 4 years ago +8

    5:33 "Do things run at 4 GHz still? I don't even know"
    Hmmm...

    • @dutchdykefinger
      @dutchdykefinger 4 years ago

      @bean it kind of never did, from the point Intel got competition on x86 back in the 80s.
      it kind of didn't even comparing Intel to Intel itself, even when CPUs were still only single-core (the P3 had WAY better IPC than the P4)
      but yeah, ever since software started to parallelize better around 2014 or so, it's even more of a useless metric

  • @nexovec
    @nexovec 4 years ago +1

    If you can do it so that everything is a bottleneck at the same time without doing useless stuff, you've kind of solved programming

  • @danpearce4547
    @danpearce4547 4 years ago +1

    My crappy little Lenovo 110s would barely boot in Win10, let alone update itself with its poor little single core Atom....Then I put Bodhi Linux on it and now it goes like the crapper!

  • @nara4420
    @nara4420 4 years ago +1

    ... but everything today must be client/server and must be moved to a cloud, mustn't it?
    There are many known reasons why it is acceptable to live with 20ms of latency for each call instead of being fast and local. It's nothing personal - just a matter of business.

  • @Optimus6128
    @Optimus6128 1 year ago +1

    People indeed don't understand how much faster modern computers are. I wouldn't have given it much thought either until I tried optimizing for retro computers and consoles. I remember someone on Stack Overflow wondering how the hell it was possible back in the day to do anything on 8-bit computers with 64K RAM at 1MHz, how they fit the graphics for games and so on. Of course part of the answer is that the data was 2 bits per pixel at smaller resolutions, but still. When I do things for, say, an Amstrad CPC with a Z80 at 4MHz, we know how many cycles are available in 1/50th of a second: close to 80 thousand. Then, because an instruction takes at least 4 cycles (even a NOP) and on CPC hardware instructions take multiples of 4 cycles, we count in NOP cycles and know we have 19,968 NOPs per frame. So you do the whole calculation to know your limits: if you do a software-rendered pixel-per-pixel effect in a little 64x64 window, you have around 4.8 NOPs to spare per pixel at 50Hz. It's very tight; maybe your plasma or rotozoomer shouldn't take 10 NOPs, but push below that. It's a battle, and you are conscious of the cycles per pixel and that some instructions take 3 to 5 NOPs instead of 1 or 2. So there is a big consciousness of how things are and what your limits are.
    As a side note, when I watched Casey's video on how many cycles are wasted on a Python add, I lost my mind. Meanwhile, I did similar calculations in my head: 3.4GHz in my machine, assuming I run everything on a single core and forget about SIMD, multicore, or the fact that nowadays CPUs can execute several instructions in one cycle. Divide that across 1920x1080 pixels of software rendering and there are plenty of cycles free (compared to the tight ones on the Z80 for a 64x64 postage-stamp rendering :). Of course, if you naively throw floating point, sines and cosines and atan2 at every pixel, you will still struggle. But porting some oldschool full-integer/fixed-point plasma/rotozoomer effects I did on older PCs with precalced LUTs, I could easily get 200fps at 1080p and 50-70fps at 4K! And mind you, that code is single-core, no SIMD or multicore used (which I need to learn, as it opens the potential for crazy optimizations). But of course that's conscious old code made to run fast on older machines; if I were naively doing floating-point math or calling heavy instructions per pixel it wouldn't be the case. Of course, who needs software rendering when GPUs can do very fast per-pixel floating-point math in shaders? Yeah, but the point is: you can do things people think should be slow, in 4K software rendering, on modern CPUs, even restricted to a single core and ignoring everything else available nowadays. Too many cycles per pixel for the CPU.
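
    (The CPC budget from the first paragraph, written out:

      4,000,000 cycles/s ÷ 50 frames/s = 80,000 cycles per frame
      80,000 ÷ 4 cycles per NOP ≈ 20,000 (19,968 in practice) NOPs per frame
      19,968 NOPs ÷ (64 × 64 pixels) ≈ 4.875 NOPs per pixel)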

    • @thewhitefalcon8539
      @thewhitefalcon8539 1 year ago

      There's an article called "A Spellchecker Used to Be a Major Feat of Software Engineering"

    • @Optimus6128
      @Optimus6128 7 months ago +1

      @@thewhitefalcon8539 Yeah, I've read that.
      I think my previous rant might make it seem like I am obsessed with cycle counting.
      But it's more that, once you know how much processing power you have per element (be it a pixel, a screen character, an element of a list, whatever), it feels like an enormous, impossible amount of cycles per element, so there is no excuse for an app not to start instantly, or for menus and editors to struggle for half a second.
      I just don't understand how things have turned so horribly slow, comparing old vs new hardware (and I actually lived through it; I would have been in heaven if in the 80s you had told me "You have a million cycles per pixel, you can DO EVERYTHING"). And now it struggles for 0.5 seconds over 400 characters?

    • @thewhitefalcon8539
      @thewhitefalcon8539 7 months ago +1

      @@Optimus6128 we used all the processing power making abstraction layers on top of more abstraction layers

  • @BlowFan
    @BlowFan 4 years ago +32

    The thumbnail is pretty dumb.

    • @usestudent
      @usestudent 4 years ago +3

      its kinda cute

    • @ranger.1
      @ranger.1 4 years ago +1

      Keep the uploads coming

    • @nates9778
      @nates9778 4 years ago

      The thumbnail is fast af boiiii

    • @Hajdew
      @Hajdew 4 years ago

      But i clicked it

  • @Muskar2
    @Muskar2 7 months ago

    3GHz × 4 cores × 8-wide SIMD × 2 IPC ≈ 200 billion simple math ops per second. And that's not even high-end today. I wish more programmers knew this indeed.

  • @gzozulin
    @gzozulin 3 years ago +4

    Well, "web-developer" here :) What they mean when saying that latency does not matter much is that when you have a latency of 100s ms running over the wire it is not comparably important to shave a couple of milliseconds from computation comparing to games/simulations. That actually can easily result in premature optimization.
    Also, in most cases, the frontend/DB/etc are on different nodes of the web-app cluster and if you will improve local computation, that might not mean much, since you will still have to make a call to DB/cache/other node/etc. That is why you cannot create Facebook/RUclips/Twitter in your basement anymore: the infrastructure becomes a cornerstone.
    But! I absolutely agree with the general statement that modern software is complete bloat and can be orders of magnitude more reliable and fast. When everything is geared towards squeezing profits and not creating with passion, this is what we have.

    • @T0m1s
      @T0m1s 2 years ago +5

      What you're saying is mostly right, but when the web industry says "latency does not matter much", what it does is indistinguishable from "latency does not matter".
      Online games have to deal with latency as well, and they've been doing that successfully for 25 years (I'm counting since Quake, more or less). Whereas the web dev industry uses expressions as "programmer time is more important than machine time" as an excuse to not treat performance as a first class citizen. Which is why we mostly see the database and backend on different nodes, incurring at least one extra network call, when it doesn't have to be like that.

    • @zocker1600
      @zocker1600 1 year ago +2

      @@T0m1s
      > incurring at least one extra network call, when it doesn't have to be like that.
      I know your comment is older and correct me if I am wrong, but isn't the database usually on a different node for security reasons?
      Having the back end and database on the same node is a big no-go, because an attack on the back end would also compromise the database, whereas in the scenario of separate nodes the attacker has to break through another layer.

    • @T0m1s
      @T0m1s 1 year ago

      ​@@zocker1600 - to be precise, as far as I know, in AWS (most popular cloud provider) the database and the storage for the database are different nodes. So, in fact, it's likely at least 2 extra network calls, not just 1 like my comment said. Client -> backend -> db -> storage.
      You _may_ be right that security was the rationale, I don't have any sources to prove otherwise. Although, to the best of my knowledge, the reason is usually that "RDS does it like this", which was prompted by Bezos telling AWS that everything needs to be software-as-a-service, so databases were naturally enclosed in their own bubble.
      Personally I don't find the security argument too compelling. By following the "more layers is better" rationale, we should have each table (or better yet, any smallest piece of data) on a different node.
      I agree that sometimes this is needed. Sometimes clients demand you have a database just for them, so nobody else can access the data accidentally. Or if you're a bank, you prefer avoiding any risks. That's ok.
      But in the vast majority of cases, prioritizing a hypothetical "what if this machine is compromised" over the concrete drawback of "users are going to have to wait longer" doesn't seem like the correct trade-off.
      In fact, in recent years I've been hearing about cases where developers move from RDS to SQLite, which can give you orders of magnitude better performance - a tangible benefit.
      That being said, I'm not a security specialist, you may be right. But I suspect the correct solution is to find a way for the backend and DB to safely share a node, or at least to be in a very close physical proximity and find a way to reduce the cost of a network call.

    • @AftercastGames
      @AftercastGames 1 year ago +1

      Yes, because just like programmers don’t like to think about performance, they also don’t like to think about security. 😏

    • @zocker1600
      @zocker1600 1 year ago

      ​@@T0m1s Thank you for the answer, that makes a lot of sense to me and I agree that the "more layers is better" approach is probably one major contributor to the performance problems.
      Actually IIRC it was Amazon themselves which admitted that having many small microservices can be really bad for performance due to latency adding up.
      I sadly cannot post links on YouTube, but if you search for "Amazon Dumps Microservices for Video Monitoring" you should be able to find articles about this from May this year.

  • @mikecole2837
    @mikecole2837 4 years ago +3

    Also, the web is asynchronous; latency does not necessarily sum.
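
    (Concretely: when independent requests are issued concurrently, the total wait is the max of the latencies, not their sum. A C++ sketch with hypothetical stand-in RPCs:)

      #include <future>
      #include <string>
      #include <utility>

      std::string fetchUser()   { return "user"; }    // stand-in for a ~100 ms RPC
      std::string fetchOrders() { return "orders"; }  // stand-in for a ~100 ms RPC

      std::pair<std::string, std::string> loadPage() {
          auto user   = std::async(std::launch::async, fetchUser);
          auto orders = std::async(std::launch::async, fetchOrders);
          return { user.get(), orders.get() };        // ~100 ms total, not ~200 ms
      }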

  • @MrAbrazildo
    @MrAbrazildo 1 year ago

    5:10, I don't know if the technology has changed on this, but last time I checked, it used to take several clocks per instruction, not the other way around. Am I out of date on this?

    • @AftercastGames
      @AftercastGames 1 year ago +1

      Yes. Thanks to multiple layers of caching, pipelines and out-of-order instruction execution, modern CPUs are able to “effectively” execute multiple instructions in a single clock cycle. But you do have to read the fine-print. There are a LOT of factors involved.

    • @AftercastGames
      @AftercastGames 1 year ago

      If you were to disable these features, or simply use an older processor without them, then yes, single instructions could sometimes take dozens of clock cycles. Typically somewhere between 4 and 12. Just fetching the instruction itself from memory would cost 2 clock cycles.

    • @MrAbrazildo
      @MrAbrazildo 1 year ago

      ​@@AftercastGames 2 cycles on L1 cache only, of course.

  • @mockingbird3809
    @mockingbird3809 4 years ago +2

    Blow knows what he's talkin about

  • @igs4112
    @igs4112 4 years ago +7

    We got it! Computers are really fast! And a lot of software is slow. Dope man :) Also I am glad you came back BlowFan, missed your vids dude.

  • @eloniusz
    @eloniusz 7 months ago

    It should be required by law that programmers may run their own programs only on crappy 10-year-old laptops.

  • @stunthumb
    @stunthumb 3 years ago +1

    They ain't fast enough! I want to simulate water volume, for example, with multiple calculations over a 2D array... even on my decent PC, on a core all to itself, it's tricky to get it fast enough. If only we could combine all these cores we have, split the work between 2 or more cores somehow.
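
    (Presumably sarcasm, but for the record, a band-split across cores is short to write. A sketch; a real solver needs double buffering so bands don't read neighbours mid-update:)

      #include <algorithm>
      #include <cstddef>
      #include <functional>
      #include <thread>
      #include <vector>

      // Stand-in for one water-simulation step over rows [r0, r1).
      void updateBand(std::vector<float>& cells, int width, int r0, int r1) {
          for (int r = r0; r < r1; ++r)
              for (int c = 0; c < width; ++c)
                  cells[std::size_t(r) * width + c] *= 0.99f;
      }

      void parallelUpdate(std::vector<float>& cells, int width, int height) {
          unsigned n = std::max(1u, std::thread::hardware_concurrency());
          int rowsPer = (height + int(n) - 1) / int(n);
          std::vector<std::thread> workers;
          for (unsigned t = 0; t < n; ++t) {
              int r0 = int(t) * rowsPer;
              int r1 = std::min(height, r0 + rowsPer);
              if (r0 < r1)
                  workers.emplace_back(updateBand, std::ref(cells), width, r0, r1);
          }
          for (std::thread& w : workers) w.join();
      }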

  • @miserablepile
    @miserablepile 4 months ago

    To get Jon Blow to talk about a topic, say something about it that will make him want to insult you (this isn't difficult)

  • @null7936
    @null7936 3 days ago

    Hopper (:

  • @foljs5858
    @foljs5858 3 years ago

    Ah, this "preached by your uncle at thanksgiving" vibe...

  • @SnakeEngine
    @SnakeEngine 2 years ago +1

    Yeah, but to be fair, old programmers underestimate how complex software is today. So it's about maximizing productivity today, not the hardware.

    • @marcossidoruk8033
      @marcossidoruk8033 1 year ago

      But that is his whole point. Do you think software being so complicated is a good thing? If the answer is yes, you just proved Jon right.

    • @SnakeEngine
      @SnakeEngine 1 year ago +2

      @@marcossidoruk8033 Do you want to get stuck at the Atari/C64 level of programs, or do you want to use a browser to type your youtube comments? The browser is a complex program by necessity.

    • @nigeladams8321
      @nigeladams8321 7 months ago

      @@SnakeEngine On top of that, not having to target specific hardware opens up the number of devices that can use your software. The reason Java took off the way it did was because that shit ran on everything; not always well, but if it ran Java, it ran Java.
      Like, I'm sure managing memory and really squeezing the shit out of the hardware in C can make a program more efficient, but which hardware are you going to be targeting?

  • @Mike.Garcia
    @Mike.Garcia 4 years ago

    The question sounds PS5 related

    • @XenonG
      @XenonG 4 years ago +1

      From the hardware components to the "close to metal" software layers, it is all arranged to get as low latency and as high throughput as possible. They could go further, but the pricing would be uncompetitive (HBM2 on substrate, storage controller on the SoC). The Xbox Series X/S is not too different either, though it doesn't seem to come with its own in-house custom ASIC.

  • @trafficface
    @trafficface 4 years ago

    I can't refute your claim; there are developers who don't get it yet. But with single-threaded synchronous JavaScript I don't get all those cores; I do get the cpp behind that particular engine, however. And if I create some poorly written algorithm implementation... that is not fast. For example reactive defusion: I wrote this algorithm badly and it's slow as hell. 10 sec time to first meaningful paint. Hold my hands up, it's my fault.

    • @Mark-kt5mh
      @Mark-kt5mh 4 years ago +1

      Only the event loop is single-threaded. Microtasks, for example, execute in parallel on a separate thread.

    • @Borgilian
      @Borgilian 4 years ago

      @@Mark-kt5mh On mozilla's documentation page for microtasks, it says this: Microtasks are another solution to this problem, providing a finer degree of access by making it possible to schedule code to run before the next iteration of the event loop begins, instead of having to wait until the next one.
      Soo... if I understand correctly, only web workers run on separate threads. Microtasks are still single-threaded.

    • @emofdo
      @emofdo 4 years ago

      What is "reactive defusion"? Did you mean reaction-diffusion?

    • @trafficface
      @trafficface 4 years ago +1

      @@emofdo dyslexia

    • @tropingreenhorn
      @tropingreenhorn 4 years ago

      This is why developing games with JavaScript isn't ideal though... Multithreaded loading of code seems like a must if we are trying to get speed.

  • @Kyle1444
    @Kyle1444 4 years ago +4

    The DOOM 2016 developers would like to know your location xD. That game is millisecond on millisecond, praised everywhere on the net as this wonder software that runs so many frames, but in reality it has the responsiveness of a 10fps game. But nooo, John Carmack was "wrong" with the fixed-tic approach since the beginning (no, lol)

    • @Optimus6128
      @Optimus6128 7 months ago

      No more than 20 enemies on the same screen/area; e.g. there is a limitation when you use the editor to make maps and place enemies. A limit of 20, compared to the 100,000 of Doom's Nuts WAD.

  • @cadetsparklez3300
    @cadetsparklez3300 4 years ago

    modern devs are just old devs and people who don't do work

  • @mikaelsyska
    @mikaelsyska 4 years ago +2

    I'm not sure who you talk to or listen to ... But you need new people to talk to.
    Maybe you attract people that don't care about performance. 🤣

  • @ar_xiv
    @ar_xiv 3 years ago

    doom and quake are still the gold standard. hell NES games are the gold standard. We can get back to that level of responsiveness

  • @alwinvillero4404
    @alwinvillero4404 4 years ago

    Me: 3GB RAM PC made by Dell from like idk 2012?
    Some guy playing minesweeper: custom rgb gaming rig

  • @iivarimokelainen
    @iivarimokelainen 3 years ago +1

    So weird to hear him talk about how people are poor at coding, optimizing, and using computers' power, when The Witness ran like dogshit.

  • @azngoku666
    @azngoku666 3 years ago

    can somebody explain why he uses windows and rants about stuff like this?
    is it because your game has to work on windows to make money? (even though that means it has to run like dogshit and play ball with all the bad practices that got us here)

    • @Kniffel101
      @Kniffel101 3 years ago +2

      Yes and also because debuggers on Windows are far from being as terrible as on Linux.

    • @azngoku666
      @azngoku666 3 years ago

      @@Kniffel101 maybe casey muratori should write a debugger or something what do you think dude

    • @Kniffel101
      @Kniffel101 3 years ago

      @@azngoku666 I don't think he has the time for that. Maybe RemedyBG will have a Linux version at some point.

    • @T0m1s
      @T0m1s 2 years ago

      It's because Visual C++ is (sadly) still the best IDE on the market. And that's saying something, considering that the current iteration is far worse than Visual C++ 98 (launched last century).
      The reality that still hasn't sunk in for OSS fans is that paid user-facing proprietary software developed by teams who earn a decent salary will often beat the efforts of individual OSS developers. I don't know for sure why that happens; I think one factor is that UX/UI/graphic designers have lower salaries and aren't as motivated to dedicate their spare time to OSS, which is probably why software like OpenOffice looks so lame compared to its more professional looking not free counterparts.

    • @azngoku666
      @azngoku666 2 years ago

      @@T0m1s another way to look at it: openoffice is for people who are using linux but are still 'in the windows mindset' (so they're not even exploring what makes it different other than not costing money)
      it's a similar thing to assume that an IDE is needed, that the most superficial part of the tools is the one that matters, etc
      real unix heads use tools that aren't sexy and marketable, but they get stuff done faster and without most of what he's ranting about in this video.. they understand that an IDE is just a way of packaging together the compiler/debugger/build system and so on
      you don't need to use things that have "beaten" other things, you don't need to use whatever is the most popular - once you get out of that mindset you can get more serious about doing things the way you want to
      in other words... if jblow is so hardcore and understands all the flaws with this garbage windows stuff, why not go use the stuff that doesn't have those flaws?
      so then we're back to my original question - if it's because he wants to make games for windows so he can make money, then:
      - need to invest in better tools that don't have these problems
      OR
      - realize you can't because you're living in a rent seeking proprietary ghetto where nothing good can really be put together and sustained
      then you go make games for other platforms, and still sell them for money - it has to start somewhere, right? continuing to use windows 11/12/whatever while it gets worse and worse isn't a winning strategy

  • @EDC.EveryDayCode
    @EDC.EveryDayCode 4 years ago

    Blow fan lol 😂. Interesting to hear. So if software is a shit show, then how can we simplify it?

  • @REDACT3D
    @REDACT3D 4 years ago

    how long before programming is obsolete?
    I mean, it feels like we are close to AI that can write the code you want with only minimal inputs. E.g. "Computer, build me a game engine with the following properties"

    • @JavaJack59
      @JavaJack59 4 years ago

      gamesbyangelina.org

    • @pelic9608
      @pelic9608 3 years ago +1

      It won't be. Sure, the GPT 5 examples the other month were impressive; only on the surface, though. What the "AI" (machine learning, no intelligence!) is capable of doing is quite literally regurgitating what it has ingested from sites like Stack Overflow. If that's what you call a programmer, then yes, programmers will be obsolete very soon. Luckily, that's not what a programmer is.
      I actually wouldn't be opposed to having some voice-controlled system that does the implementation for me. Making the plan and solving the problems, i.e. the intelligence-requiring parts, will stay in the human domain. At least for the next 50 years that I'll still be here.

    • @T0m1s
      @T0m1s 2 years ago +1

      @@pelic9608 - "taking what it has learned from reading sites"; sadly, it doesn't even do that. "Learned" implies some sort of thought process, critical thinking, reasoning. It's not the case here. What current "AI" does is produce an output that is statistically similar to some training data. Nothing magical here, just (mostly) boring linear algebra and a bunch of hardware.

    • @BoardGameMaker4108
      @BoardGameMaker4108 2 years ago

      Never going to happen; the problem is, how do you describe the implementation? You could say "build me a game engine" and it builds a completely different game engine than what you wanted. If you provide all of the details, well, now you are just programming. It's easier to just build it yourself when you have a really detailed design. Not to mention, it's also not as simple as "build me X solution"; there are trade-offs with every decision. The AI might build a really fast game engine, but one so complicated you can never hope to make anything in it.

    • @turolretar
      @turolretar 10 months ago

      We need a programming language for AI then; regular language doesn't cut it

  • @gordonmaxwell3998
    @gordonmaxwell3998 4 years ago +1

    Do you ever use linux?

    • @alwinvillero4404
      @alwinvillero4404 4 years ago +6

      *btw i use (insert relatively obscure distro)*
      time to start an argument in the reply section booiiis

    • @TimBell87
      @TimBell87 4 years ago +3

      @@alwinvillero4404 >Not using (insert slightly more pretentious, equally obscure distro)

    • @alwinvillero4404
      @alwinvillero4404 4 years ago +2

      @@TimBell87 i guess (insert random distro) has better performance than (insert another distro), so yeah, your argument is invalid

    • @mosesturner4018
      @mosesturner4018 4 years ago

      Yeah

    • @gordonmaxwell3998
      @gordonmaxwell3998 4 years ago

      @@alwinvillero4404 gentoo is best linux

  • @foljs5858
    @foljs5858 3 years ago

    "Anybody who hasn't done this hardcore [gaming programming] for their whole life they have no idea" As if Jonathan has? He wrote a couple of gameplay-driven games, hardly AAA stuff. John Carmack has done 100 times the work, and is far more humble and less down-putting...

  • @John-dl5pw
    @John-dl5pw 9 months ago

    Lol, IO problems. If anyone wants to know exactly why GTA5 is slow to load, look up the article "How I cut GTA Online loading times by 70%". Jon strikes again