Why Won’t My Program use More than 25% of the CPU?

  • Published: 22 Oct 2024

Comments • 71

  • @askleonotenboom
    @askleonotenboom  2 months ago

    ✅ Watch next ▶ Does CPU Speed Matter Any More? ▶ ruclips.net/video/-K7p-6NGyaU/видео.html

    • @Matlockization
      @Matlockization 2 months ago

      Well, for example, I complained to Malwarebytes about why they didn't use all that headroom above their typical CPU usage of 20%, and it turned out someone over there must have listened, because now the app goes up to 100% when the machine is idle. It has an algorithm that scales how much CPU it uses depending on how busy your computer is in real time. So it can happen, if you reach a fantastic CEO. Unfortunately, there are very few today who take customer feedback on board.

  • @CaseJams
    @CaseJams 2 months ago +9

    Just discovered your channel, watched four or five videos. Great stuff. Very clear and well presented.

  • @markrosenthal9108
    @markrosenthal9108 2 months ago +1

    There's an old saying:
    "All CPUs, no matter how fast, wait at the same speed."

  • @KarlUppianoKarlU
    @KarlUppianoKarlU 2 months ago +16

    Excel will typically not attempt to split up a macro or Visual Basic task over multiple cores. It isn't trivial to divide math-intensive calculations across multiple cores, especially not arbitrarily by the Excel runtime. The calculation may be sequential, requiring the result of one step before the next can proceed; that can't be done in parallel. If you could divide up your worksheets so that some operations could be done in parallel (you'd have to code it that way yourself), then perhaps Excel could schedule them to run simultaneously, using more cores.
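
    A minimal Python sketch of that distinction (a toy model; Excel's calculation engine works differently): a dependency chain stays serial, while independent calculations can fan out across all cores.

    ```python
    # Toy model, not Excel's engine: a dependent chain stays serial,
    # while independent "cells" can spread across cores.
    from concurrent.futures import ProcessPoolExecutor

    def expensive(x):
        # stand-in for a math-heavy formula
        return sum(i * x for i in range(200_000)) % 1_000_003

    def chain(start, steps):
        value = start
        for _ in range(steps):
            value = expensive(value)   # each step needs the previous result
        return value

    if __name__ == "__main__":
        print(chain(3, 8))             # inherently one core
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(expensive, range(8))))  # up to 8 cores
    ```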

    • @factChecker01
      @factChecker01 2 months ago +3

      Yes. And it would require a large increase in complexity for a general utility like Excel to automatically recognize calculations that allow multi-threaded parallel execution. I am not surprised that it does not try.

    • @gruntaxeman3740
      @gruntaxeman3740 2 months ago

      Actually, math-intensive calculations are often easy to divide across multiple cores because they can be written as functions without side effects. In spreadsheets it is not easy.
      There are tools that make dividing the load across cores easy, like Haskell or Go.

    • @factChecker01
      @factChecker01 2 months ago +1

      @@gruntaxeman3740 The side effects of functions are not the issue. If the results of functions must be calculated sequentially, it may not be possible to calculate them in parallel.

    • @gruntaxeman3740
      @gruntaxeman3740 2 months ago

      @@factChecker01
      Sure, but that is usually not the case when doing heavy number crunching. It is more like a flow of data: a pipeline of stages where operations are applied until you get the output.
      The pipeline stages themselves are sequential; the speedup comes when a stage is parallelized, or when each stage runs on a different core with a constant flow of data between them.
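
      A rough Python sketch of that pipeline idea (the stage functions and data are made up): two stages run as separate processes connected by a queue, so both cores stay busy while data flows through.

      ```python
      # Two pipeline stages as separate processes, linked by a queue.
      from multiprocessing import Process, Queue

      def stage1(q_out):
          for x in range(10):
              q_out.put(x * x)   # first transformation
          q_out.put(None)        # sentinel: no more data

      def stage2(q_in):
          while (item := q_in.get()) is not None:
              print(item + 1)    # second transformation

      if __name__ == "__main__":
          q = Queue()
          p1 = Process(target=stage1, args=(q,))
          p2 = Process(target=stage2, args=(q,))
          p1.start(); p2.start()
          p1.join(); p2.join()
      ```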

    • @cigmorfil4101
      @cigmorfil4101 2 months ago

      But that doesn't explain why Excel maxes out at around 12% when I have multiple workbooks open that have no links between them and could easily be processed in parallel (nor why all of them should have to be recalculated when I change one workbook that has no links to the others, so the other workbooks are recalculating over unchanged data).

  • @acreguy3156
    @acreguy3156 2 months ago +7

    If only I had teachers in school like Leo!! Well done, sir 👍 as usual. Thanks.

    • @Ludak021
      @Ludak021 2 months ago

      They'd teach you things the wrong way, just like he is. He is wrong about several things, including basic terminology.

    • @acreguy3156
      @acreguy3156 2 months ago +1

      @@Ludak021 Examples please. I think Leo is mostly spot on. What's he doing wrong?

  • @paulmoffat9306
    @paulmoffat9306 2 months ago +6

    The other bottleneck is I/O to RAM. Things go a lot faster if the active segments of your program fit entirely in the CPU cache, because then the CPU does not need to refer to external memory continuously.

    • @mal2ksc
      @mal2ksc 2 months ago

      And mostly that's due to latency, not bandwidth. DDR4 is quite adequate at providing massive amounts of data once it gets flowing. The bottleneck is how long it takes to access any given piece of data and start the flow. This means the more you stop and start and load small chunks, the more RAM access will slow your program down.
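
      A toy Python illustration of that access-pattern cost (interpreter overhead blurs the numbers, but sequential versus random order usually still shows a clear gap):

      ```python
      # Same reads, different order: sequential is prefetch-friendly,
      # random pays the memory-latency cost on each access.
      import random
      import time

      N = 10_000_000
      data = list(range(N))
      indices = list(range(N))
      random.shuffle(indices)

      t0 = time.perf_counter()
      s = 0
      for i in range(N):
          s += data[i]           # sequential access
      t1 = time.perf_counter()
      for i in indices:
          s += data[i]           # random access
      t2 = time.perf_counter()
      print(f"sequential: {t1 - t0:.2f}s  random: {t2 - t1:.2f}s")
      ```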

    • @Peter2k84
      @Peter2k84 2 months ago

      Wouldn't worry about that much on a Q6600 though.

  • @joesterling4299
    @joesterling4299 2 months ago +4

    That last part is what I didn't know: a single-core task can bounce among the available cores due to Windows scheduling. That's the last puzzle piece I was missing. (Everything else I already understood.) Now I know why exactly 25% usage seems to be split among my 4 cores in some uneven, spiky fashion (in Process Explorer).

    • @bertblankenstein3738
      @bertblankenstein3738 2 months ago

      The task gets allocated a slice of CPU time, and after that time the task is put back on the task list. There might be other factors that determine which CPU the task is assigned to next; perhaps it's assigned to the coolest CPU.
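
      A small sketch of overriding that placement yourself, so the load stays on one core instead of bouncing (os.sched_setaffinity is Linux-only; on Windows the third-party psutil package offers the same idea via Process().cpu_affinity()):

      ```python
      # Pin this process to core 0, then burn CPU; a core monitor will
      # now show the spike parked on core 0. Linux-only API.
      import os

      os.sched_setaffinity(0, {0})   # pid 0 means "this process"

      total = 0
      for i in range(10**8):         # busy work
          total += i
      print(total)
      ```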

  • @NoEgg4u
    @NoEgg4u 2 months ago +8

    Back in the day of single core CPUs, it was common to enable hyper-threading.
    That was good and bad.
    -- It was good, because if you had something hogging your single core CPU cycles, it could hog only half of your CPU cycles. So you still had a responsive computer.
    -- It was bad, because if you actually wanted a program to use all available CPU resources, it could not. Well, if the software was written to use multiple CPUs, then I believe that it would have used both halves of the hyper-threaded CPU. But few software tools, back then, were written to use multiple cores, because few folks had more than one CPU (or more than one core).
    And the operating system needed to support multi-core CPUs. So in 1993, when I had my Gateway 2000 computer, with a 66 MHz DX2 CPU, and DOS 5.0, it did not support multi-core operations (as far as I remember). At least that computer's motherboard certainly did not. I think that Windows XP might have been the first Microsoft operating system to support multiple cores and hyper-threading.
    Today, with nearly every CPU having 4+ cores (even $100 mini computers have two cores), I do not see any reason to enable hyper-threading. Is there still any reason to enable hyper-threading?

    • @benhetland576
      @benhetland576 2 months ago

      There was Microsoft Xenix long before 1993, but I don't know if it actually supported more than one CPU. Multi-CPU (cores) was very rare back then, except maybe for supercomputers. But when the first version of NT was released around the same time it did have multi-CPU support right from the start. The capability was restricted by licensing, however, and not cheap. IIRC there was also a hard limit that maxed out at something like 4 or 8 cores. The standard license was like 2 core/CPU support (hyperthreading wasn't invented yet). As you probably know XP was just a later version of NT.
      Hyperthreading appears and behaves to the OS as just another CPU. Two virtual cores share the internal CPU resources (like the ALU or math unit) of the same physical core, but only one of them can use each part at a time. You get better utilisation of the circuit parts, since a single thread rarely makes use of all of a core's parts at the same time. This fine-grained resource allocation is managed by the CPU internally and is not a concern of the OS. However, you don't get double performance, because if one thread is using a resource, the other has to wait for it. In practice I have measured around a 50% performance increase, so a single 2-threaded hyperthreaded core is about as fast as 1.5 "real" cores. If you have a CPU with 8 physical cores and enable hyperthreading, the OS sees 16 cores and might report 1600% usage, but actual performance is only like 12 cores. Still "better" than 8 cores, though!
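
      For what it's worth, the logical-versus-physical split is easy to see from code; a small sketch using Python's standard library plus the third-party psutil package:

      ```python
      # os.cpu_count() counts logical CPUs; psutil can count physical cores.
      import os
      import psutil  # pip install psutil

      logical = os.cpu_count()                    # e.g. 16 with hyperthreading
      physical = psutil.cpu_count(logical=False)  # e.g. 8 physical cores
      print(f"{physical} physical cores, {logical} logical CPUs")
      ```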

    • @mal2ksc
      @mal2ksc 2 months ago

      Windows NT supported multiple processors, back when you physically had to install two processors.

    • @MrDomingo55
      @MrDomingo55 2 months ago

      @@mal2ksc Even before Windows NT, Microsoft and IBM created OS/2, and before that Microsoft had Xenix, a version of Unix. A notable feature of all three is that they were designed around a preemptive scheduler. In effect, the scheduler, which is driven by a hardware clock interrupt, takes control via the interrupt signal, suspends the currently running process, and picks another process or task to execute. With hyper-threading and/or multiple cores, the scheduler has greater flexibility in allocating tasks between cores. One or more threads of execution (threads are what the scheduler actually deals with) may exist within a process. A thread simply contains a store for a set of CPU registers and a block of memory where the thread's execution stack is situated. When the scheduler decides to switch to a different thread, it saves the current thread's state (the CPU registers) and, upon selecting another thread, sets up its registers and passes control to it. Note that a thread will also temporarily lose the CPU if it invokes an I/O operation that takes time, or if it attempts to acquire a semaphore or mutex, the purpose of these being to prevent multiple threads from accessing a critical resource at once.
      Compare this with the "cooperative scheduling" that was part of the original Windows 1.0 through 3.1, Windows 95 and 98, and the original Apple Macintosh, where executing code handled events and/or messages in a sequential manner. If the handling of any message got stuck in a loop, or a task was unusually time consuming, you lost control and might have to restart the computer. Applications had to be written carefully to keep the user interface responsive, and this was not always achieved.
      NOTE: the WIN32 subsystem that handles the GUI was originally designed around an application-specific queue of messages, so the key part of the code was a call to get the next message from the application's message queue. If a message was available it was acted upon; otherwise execution passed to some other application that might by now have a message in its queue. As can be seen, any single app could make life difficult for the user if things went wrong; there was no way to bypass an app stuck in an endless loop.

  • @randyriegel8553
    @randyriegel8553 2 months ago +15

    I'm a software engineer, and even in the little applications I write I sometimes force C# to use multiple threads. If my main application is doing something but an SQL query needs to be run, I send the SQL to a different thread (which Windows can schedule on another core) so it can get back to me faster and doesn't hold my app up. Threads are awesome. But a lot of commercial software doesn't use that capability, which is sad.
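
    The same pattern sketched in Python rather than C# (run_query and its body are hypothetical stand-ins): hand the blocking call to a worker thread so the main application stays responsive.

    ```python
    # Sketch only: run_query stands in for a real blocking database call.
    from concurrent.futures import ThreadPoolExecutor

    def run_query(sql):
        # imagine a real database round-trip here
        return f"results for {sql!r}"

    pool = ThreadPoolExecutor(max_workers=2)
    future = pool.submit(run_query, "SELECT * FROM orders")
    # ...the main application keeps working while the query runs...
    print(future.result())  # collect the answer once it's ready
    ```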

    • @stevetodd7383
      @stevetodd7383 2 months ago

      You should go look up Amdahl’s Law. Making full use of multiple cores within a single app isn’t easy (running multiple apps at the same time can do it easily enough). Adding more cores to a system results in diminishing returns with even well coded multi-threaded applications.
      .NET has added some features to help make multi threading easier (like Async calls, parallel loops etc), but there are still limitations to how efficiently any given task can be broken down into parallel sections, and there’s overhead in getting data between these sections and preventing parallel tasks from clashing with each other. If you see code that isn’t using threads then it’s normally because the task wasn’t easy to break down into parallel parts or the devs didn’t have the skills to even attempt it (as I said, it’s not easy).
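
      A quick sketch of that law, to see the diminishing returns in numbers:

      ```python
      # Amdahl's law: with parallel fraction p on n cores,
      # speedup = 1 / ((1 - p) + p / n)
      def amdahl_speedup(p, n):
          return 1 / ((1 - p) + p / n)

      for cores in (2, 4, 8, 64):
          print(cores, round(amdahl_speedup(0.95, cores), 2))
      # even with 95% parallel work: 1.9, 3.48, 5.93, 15.42
      ```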

    • @user-rk9kb2sd9b
      @user-rk9kb2sd9b 2 months ago

      ...and it's quite understandable why they don't do that.

    • @gruntaxeman3740
      @gruntaxeman3740 2 months ago

      That is not always obviously a win. Take online banking: I pay a bill and press the button, and the query runs in the background. During that time the account shows stale information.

    • @stevetodd7383
      @stevetodd7383 2 months ago

      @@gruntaxeman3740 running a query asynchronously will, at worst, result in a time stamp that differs by a couple of seconds from when you clicked the button. This is neither here nor there for that type of app. The important thing from your point of view is that the application is immediately responsive and doesn’t freeze for those couple of seconds. In practice, for banking apps and the like, the app will be talking to an API service which may be talking to others before the database is finally reached. This is done for performance and security reasons (performance because data can be cached locally to the service, improving response time and reducing database load. Security because the application is restricted to only the actions that the service permits and is subject to extra security rules).

    • @mopar3502001
      @mopar3502001 2 months ago

      Same here. I always spin up a thread or three (as needed lol, not arbitrarily, and delegate as necessary) to handle SQL calls, or other functions such as search routines or sorting. For example, I may build a LINQ list and sort it with a secondary thread allowing the GUI thread to remain unfettered in case something else is needed. I also agree that it is very sad that more engineers don't write to support multiple core execution. There really (unless prohibited by design limitations) is no excuse not to do it. This goes to you game programmers too! You know who you are! If you can't write your own libraries, you can find literally tons on Git, etc. I would never taint a codebase with 3rd party code, but for those of you who want to learn multi-threading it's a decent place to start. Just remember that copying and pasting doesn't make you a programmer. There are literally tons of nuances to multithreaded applications and many considerations to think about.

  • @thomaslechner1622
    @thomaslechner1622 2 months ago +1

    That is not only the case on Windows; it is the same on other OSes too.

  • @bmiller949
    @bmiller949 2 months ago +14

    Drink a pot of coffee and use an abacus, it will seem faster. 🤣

  • @MrPir84free
    @MrPir84free 2 months ago +2

    I think you missed some valid points that should also be highlighted.
    A computer can have multiple physical processors; in most consumer PCs, however, it's a single physical processor. That physical processor can have multiple cores inside it, and the cores can be of different types, speeds, and capabilities.
    There are different core architectures, sometimes mixed on the same physical processor:
    - Performance cores, which run at a specific GHz rating.
    - Efficiency cores, which usually run at a slower rate and have somewhat different capabilities.
    - Performance cores with hyper-threading enabled (performance-wise, the added value of hyper-threading is not 2x, but more like +35%).
    Performance and efficiency cores tend to have one rated speed when a single core is in use and a different speed when more than one core is busy; a core also tends to run at its peak speed only for a limited time, then shifts its clock down for longer runs (see the frequency sketch below).
    Thermal throttling: almost every processor I know of slows down when it overheats; this can be caused by dust, old thermal paste, a clogged cooler, environmental smoke, and other factors. Programs like Hardware Monitor can show you those temps, and a good cleaning every year or two can help maintain peak performance.
    So, if you have a program that seems not to be multi-threaded, it is often a benefit to have a processor with a FASTER single-core speed.
    Architecture: I have two mini-PCs. One is an AMD 8C/16T (8 cores / 16 threads) @ 4.7 GHz; the other is an Intel 1260P with 4 performance cores (with hyper-threading) @ 4.7 GHz and 8 efficiency cores, for a total of 16 threads. Overall, the AMD system feels significantly faster than the Intel system, although on paper the Intel should be "better".
    As Leo mentioned, with a non-multi-threaded application you may see performance limited, but your machine also has lots of other tasks running in the background at the same time, so it's not all wasted effort. Knowing this ahead of time, buying a faster processor can alleviate SOME of the misery of slower clock speeds. Buying low-end systems is almost a guarantee of slower overall performance, FWIW.
    Other factors:
    Memory: generally speaking, Windows works better with more memory; I'd consider 16 GB of RAM the minimum nowadays, often more depending on the task at hand.
    Disk speed impacts performance: if you have a spinning hard disk, replace it with an SSD, preferably an NVMe drive with a built-in DRAM cache. It can help significantly for certain types of operations; a poorly designed drive can negatively impact performance as well.
    This might be a lot to take in, and it is; but if you want to maximize performance, you should understand the capabilities, the limitations, and the other factors as well.
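
    The frequency sketch mentioned above, using the third-party psutil package (per-core reporting varies by OS, and some platforms return only a single entry):

    ```python
    # Show each core's current and max clock speed via psutil.
    # Note: cpu_freq(percpu=True) support varies by platform.
    import psutil  # pip install psutil

    for n, freq in enumerate(psutil.cpu_freq(percpu=True)):
        print(f"core {n}: {freq.current:.0f} MHz (max {freq.max:.0f} MHz)")
    ```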

    • @mal2ksc
      @mal2ksc 2 months ago

      This is true, but there's not a huge difference between two single-core CPUs and one dual-core in practical usage. That's why the multi-CPU code from Windows NT and Windows 2000 handles two cores in one socket just fine. There are practical differences such as the increased heat density, but the OS doesn't really need to care. Similarly, the OS today doesn't really care that much whether you have two sockets with 8 cores each, or a single socket with 16 cores. It might (should) affect processor affinity so that crossing from one socket to the other is kept to a minimum, but that's about all.
      As for SSDs vs. spinning rust: have a decent sized SSD you use 95% of the time, and much larger spinning rust you can use to keep backup images of the SSD as well as hold large files you rarely access but still want to be "hot" when you need them. Make the SSD twice as big as what you actually intend to put on it, for the sake of keeping write cycles from burning out blocks. Obviously use NVMe if you've got a slot for it, and if you don't, PCIe x4 cards with one NVMe socket are really cheap (under $10). This is a perfect use case if you have two slots that can take video cards but only have one video card, as the second one tends to only be an x4 slot anyhow (despite accepting x16 connectors). In my case, that's 1 TB of NVMe and 3 TB of spinning rust, but I think I really underprovisioned my SSD and plan to either upgrade to a 2 TB or add a second 1 TB since I have an NVMe slot on board _and_ an unoccupied PCIe x4 slot. Adding a second or even a third 3 TB drive to the SATA bus is obviously pretty trivial, and I could load up with four if I wanted to ditch the DVD±R drive. (But I don't, I still burn movie DVDs and CDs full of MP3s for use with boom boxes and personal DVD players, because we still have them around.)

  • @ronin2963
    @ronin2963 2 months ago

    11 years! Wow, that is some longevity.

  • @jasonfreeman8022
    @jasonfreeman8022 2 months ago

    In the old days, a CPU meant the integer and floating-point processors, the instruction cache, the memory bus, and some other things. Modern CPUs are similar, but the cores share things like L2 or L3 caches and a common memory bus. Because they share things, they are no longer each a truly complete CPU. So we use the term "core" for the non-shared parts and "CPU" for the package of cores and support controllers.

  • @glasslinger
    @glasslinger 2 months ago +4

    I notice this too when doing video post-production, when the mess of cuts is being rendered into the final video. I am using a fast 3 GHz i5 computer with plenty of memory. I never see it running even close to full CPU usage (in Process Explorer).

    • @mal2ksc
      @mal2ksc 2 months ago

      Are you using your GPU to do the number crunching? 100% GPU but only 50% CPU is quite typical of my transcoding sessions. Running them on an RTX 3060 is almost an order of magnitude faster than running them on an i5-8500, so all the CPU is doing is dispatching tasks and collecting results. (And I can still do 2D stuff like watch videos without a major impact on the GPU, it _might_ give 10% of the GPU to decoding the incoming stream.)
      ᴇᴅɪᴛ: This has completely flipped since I installed FLUX.1! Now it pins 100% CPU constantly and only "flicks" the GPU to 100% for a couple seconds at a time, because the prompt parser runs on the CPU. I can't do anything else during generation without regularly grinding to a halt. I need more system RAM.

  • @kdw75
    @kdw75 2 months ago

    I spend my days in Acrobat working on files and adding variable data to them. I have to leave it running overnight sometimes to finish large tasks and it rarely uses more than 15% of my CPU.

  • @thingsmymacdoes
    @thingsmymacdoes 2 months ago +4

    There appears to be a multi-threading option in Excel's advanced settings. Maybe that could speed things up a bit?

  • @ruben_balea
    @ruben_balea 2 months ago +2

    Most people would freak out if they ran a program, or several programs, capable of using 100% of every single core of their computer.
    Once I typed two keys simultaneously by mistake and launched 78 instances of GCC on my poor 4-core/8-thread CPU. Between the CPU at 100%, 100% of the physical RAM in use, and a few more GB of paging file, the computer felt like it was going to explode at any moment 😁 I had to press Ctrl+C hundreds of times over a few minutes before the computer got a tiny bit of free CPU time to process the keyboard input 😅

    • @mal2ksc
      @mal2ksc 2 months ago

      That's what it feels like running FLUX.1 on a machine with an RTX 3060 and only 16 GB of system RAM. I could hear the system screaming at all the page file writes, even though drive access no longer makes any noise. You want to see all of your CPU cores slammed at once? Run FLUX.1.

  • @kthwkr
    @kthwkr 2 months ago

    Great explanation. I sorta already understood but you brought it into full focus.

  • @qdllc
    @qdllc 2 months ago

    It was interesting to learn that a multi-CPU PC was faster and better than a single-CPU (multi-core) system, because there is only one address bus per CPU package. A multi-CPU machine has one address bus for each CPU, but a multi-core CPU has only one address bus regardless of how many cores it has.

    • @whophd
      @whophd 2 months ago

      Yeah I remember my first dual-CPU machine in 2002, and it was just “smoother” for all tasks, being able to put background things into another core. Going back to a single CPU was slower in a weird way. More amateur, less pro. I just fell in love with multi-core computing and never went back - resisting the trend to laptops. These days I’m actually running multithreaded stuff 24/7 like compression and video processing, so it still makes sense to get the biggest baddest desktop machines.

  • @52vepr
    @52vepr 2 months ago +1

    Is there a shortlist of mainstream applications that do support multiple cores? That might help people decide which software to get for heavy number crunching work.

    • @gruntaxeman3740
      @gruntaxeman3740 2 months ago

      Many programming languages support threads. Heavy number crunching is better done in a programming language that makes parallelism easy, because hand-rolled parallel algorithms make it easy to mess up and cause deadlocks or livelocks.
      Haskell is a functional language that tries to minimize side effects, so it is a good fit here; if an imperative paradigm is required, Go is a simple language.
      Amdahl's law is the hard limit on how much parallel code can help.

  • @INVAZOR33
    @INVAZOR33 2 months ago

    Using Excel for this is like using a ✂ to cut industrial steel.

  • @lmk001
    @lmk001 2 months ago

    I didn't know they were separate CPUs. Very interesting

    • @joesterling4299
      @joesterling4299 2 months ago +1

      Technically, it's a single central processing unit (CPU) composed of several identical processors (4, in this example), which are called "cores." I found his explanation in this regard a bit confusing. Think of a processing department with 4 mathematicians working on separate problems, who then hand the results of their work to the facility's supervisor.

  • @PercyNPC
    @PercyNPC 2 months ago

    When I do something in Excel with more than 100K values in cells, only one core is used at a time and I sometimes have to wait a few minutes.
    Sometimes it crashes. 😂

  • @nifftbatuff676
    @nifftbatuff676 2 months ago

    When a program doesn't use 100% of the CPU, I am happy! 😂😂😂

  • @steveharper2857
    @steveharper2857 2 months ago

    Why does the right side of your head disappear when you move your head? I concur that programs must be written to use the extra cores, so why don't we just stick to lower-core-count, higher-frequency CPUs?

    • @mal2ksc
      @mal2ksc 2 months ago

      Because we've pretty much hit a wall with clock speeds. I was running my 6-core Phenom II at 3.9 GHz in 2011. The i5-8500 I have now only boosts to 4.2 GHz. The newest, hottest i9-14900 only runs in the 5 to 6 GHz range.
      Some workloads parallelize well. Some don't. But adding more cores is pretty much all we've got left for speed boosts, other than using the GPU to do vector multiplication. This is part of why AMD releasing powerful APUs is a big deal -- once it becomes commonplace for CPUs to come with more-than-decent GPUs integrated, then all this heavy lifting can be offloaded, like 3D extensions taken to the logical extreme. It's also necessary for certain AI tasks like Stable Diffusion.
      Once upon a time, almost nobody had an FPU. The 8087, 80287, and 80387 were all extra chips you had to add on at significant cost, or else deal with software-emulated floating point. The first Intel chip to include the FPU was the 486DX, and then they went and stomped all over their own roadmap by releasing 486SX chips that lacked the FPU. It wasn't until the Pentium generation that every CPU included an FPU.
      I think AMD including a potent GPU die in their APU package may end up doing the same thing and quickly go from a "nice to have" to a "how did we ever live without this".

  • @Chris-op7yt
    @Chris-op7yt 2 months ago

    A 4-core CPU and 4 GB of RAM? That's ten years ago. Anyhow, for any serious number crunching don't use something like Excel, but a programming language like Python/Perl/etc.
    Excel is OK for small to medium spreadsheets, not for serious number crunching.

  • @roberthuntley1090
    @roberthuntley1090 2 months ago +2

    OFF TOPIC - Interesting video, many thanks.
    Is there a way to make a mostly-idle machine more responsive to keyboard shortcuts? I use these a lot, and there is a distinct pause (0.5 to 1 second) between using the keyboard and the action starting. As an example, I just used Ctrl-Alt-7 to start Word, and it took a second for the initial screen logo to appear, about twice as long as opening a Word document. The problem affects all my shortcuts; it's not unique to Word.

    • @mal2ksc
      @mal2ksc 2 months ago

      In the case of Word, you could have your shortcut open a blank document you have saved specifically for the purpose since that seems to be faster. You may occasionally end up overwriting your blank file with a real document if you forget to rename it on the first save, but that's hardly a deal breaker.

    • @roberthuntley1090
      @roberthuntley1090 2 months ago

      @@mal2ksc Thanks for the reply. It's not unique to Word though.

  • @Ludak021
    @Ludak021 2 months ago

    A Q6600 can be found at your local dumpster for free. If not, I'm sure someone will sell you one for a dollar or two. PS: a modern CPU scheduler can spread the load over all cores/threads if needed. So, there's that. Of course, the Q6600 is too old for that, but something a few years newer already can.

  • @stanleyreynolds7800
    @stanleyreynolds7800 2 months ago +1

    Yeah, when I am transcoding to h.265 with Handbrake, all 32 threads are very busy.

    • @stanleyreynolds7800
      @stanleyreynolds7800 2 months ago

      @@mal2ksc Interesting claim. Is there a way to make Handbrake only use one thread?

    • @stanleyreynolds7800
      @stanleyreynolds7800 2 months ago

      @@mal2ksc Is there a way to prevent it from using all 32 threads?

    • @mal2ksc
      @mal2ksc 2 months ago

      @@stanleyreynolds7800 Yes. Search "How to Manually Allocate CPU Cores to a Program on Windows 10". I'd post a link if it would let me, but it won't, so this is the workaround.
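
      The same idea is scriptable with the third-party psutil package; a sketch (the process name and core count are illustrative):

      ```python
      # Restrict an already-running process to cores 0-7.
      import psutil  # pip install psutil

      for proc in psutil.process_iter(["name"]):
          if proc.info["name"] == "HandBrake.exe":   # illustrative name
              proc.cpu_affinity(list(range(8)))      # cores 0-7 only
              print(f"pinned pid {proc.pid} to 8 threads")
      ```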

  • @paulfrancis8836
    @paulfrancis8836 2 months ago

    I've got 23 Cores, 23 Gig Video, and 256 Gig of RAM. Runs Notepad just fine.

  • @LSUfan
    @LSUfan 2 months ago

    Isn't there a PC setting where you can set the computer to use all available cores?

    • @askleonotenboom
      @askleonotenboom  2 months ago +1

      That's always on.

    • @LSUfan
      @LSUfan 2 months ago

      @@askleonotenboom Sorry, I was referring to selecting the maximum "Number of processors" option under Advanced Options on the Boot tab of the MSCONFIG program.

  • @LSUfan
    @LSUfan 2 months ago

    ruclips.net/video/QSHrK8aWz44/видео.htmlfeature=shared Setting to use all cores.

  • @BigMacIIx
    @BigMacIIx 2 months ago

    It's a Windows issue 😂

  • @pnachtwey
    @pnachtwey 2 months ago

    Handbrake will max out all the cores. However, Python uses only one core. I am trying to spawn processes in Python to take advantage of my 8 cores.
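
    A minimal sketch of that approach: CPython's GIL keeps pure-Python threads on one core, but the standard multiprocessing module uses separate processes, one per core.

    ```python
    # multiprocessing sidesteps the GIL by using separate processes.
    from multiprocessing import Pool

    def crunch(n):
        return sum(i * i for i in range(n))  # CPU-bound, pure Python

    if __name__ == "__main__":
        with Pool(processes=8) as pool:      # one worker per core
            results = pool.map(crunch, [2_000_000] * 8)
        print(sum(results))
    ```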