It just goes to show the complexity of what goes into PC components... There is a reason it takes massive companies with numerous engineers to make these things
Well are they really compeditors? Its not like you cant watch more videos a day than Ltt uploads, and the only competition that really exists is the subscriber amount. I just think it's a nice gesture under "industry colleague's"
Not really. Some of those names thoroughly test things, while others on there just say how much they like to use a phone or computer and don't run any tests at all. They all do tech content, yeah, but differently, Linus is in the middle.
The analogy I've always used, is that if CPUs were cars, comparing clock speed would be like comparing RPM. 10,000 RPM is quicker than 5,000 RPM, but gearing makes all the difference.
That's actually a really bad analogy, it would be like comparing a 4 cylinder from the 80 to today the hp jump is pretty insane, but they should pretty much pull the same shit.
Also, they didn't even put themselves as a reviewer. The obvious thinking is "well we know they review stuff, and we are on their channel" but other youtubers include themselves in lists they make a lot of the time.
Anti-Marketing 101 "The larger the font of a statistic the less it's likely to matter: the smaller the font, the more likely that stat is going to be a significant factor for smart shoppers"
In 2008 a professor in my engeneering degree proved with math using the light speed that the cpu coreclock won't be going nothing crazy in the following years, 14 year later, our home/pro cpu's really stayed around the same clocks...
As a computer engineer, this video was spot on. A lot of the topics mentioned in this video prefaced all the cpu architecture classes I took. Nice work LMG!
Very glad that branch prediction was mentioned. I personally would have considered touching upon the topic of Instruction Set Architectures though (eg. RISC)
@@Cryptix001 I think he is referring to the fact that Linus probably wrote the script himself, or, at least, was somehow involved in the writing of it.
It's funny how they're basically so complex that the easiest, most efficient way to calculate their performance is to just run them. And we can't even agree on what we're trying to measure 😅
i figure if i buy a cpu in a specific price bracket, and come back in 3 or 4 years and buy another cpu in the similar price bracket, its usually an upgrade. this is not accounting for the semiconductor shortage of course, adjust accordingly.
@@LordSevla The thing is that the actual performance improvement in a specific application cannot be predicted easily even if you knew exactly what was changed. You also need to know exactly what instructions some program uses, in which order, etc. Sure amd and intel can predict improvements, but until they actually run it (maybe in a simulation), they don't know exactly. A single core has many optimizations already built in. There is branch prediction (predicting the result of a conditional jump to another instruction) and with it speculative execution. Then there is pipelining, each instruction is split into for example 5 steps and a single core can then run each step at the same time. So up to 5 unrelated instructions can be executed at the same time. With that comes out-of-order execution, where a cpu can run an unrelated instruction a couple instructions later actually before to fill gaps when related instructions follow each other. Different applications have a different reliability on cache, memory latency, etc. Thats just what I remember out of my head. There are so much more things to consider. For example with Zen 3, they improced tje amount of loads and writes they can do per cycle etc. etc.
3:30 There’s also this pesky thing, the speed of light. At 10GHz, the distance light (and therefore the maximum anything can go) can travel in a single cycle is about an inch. That would really complicate design and require much smaller, denser, board designs.
Computers work with electrons which is not the same as light. Photons are the light particles and much faster than electricity (electrons). That’s why there are talks of making a paradigm shift towards light based computers
@@JoseDiaz-tf2ql The electrons themselves travel slower than the speed of light (slower than a person walking, even, for that matter), but the speed of the electrons isn't the speed of the electrical signal. The speed of the electrical signal is the speed of the elecromagnetic field fluctuations, which are close to the speed of light.
The world needed this. I’ve had arguments with people telling me an overclocked pentium would have better single core performance than a Zen 3 CPU running at a lower clock speed. A lot of people don’t know about stuff like IPC.
From benchmarks, my Ryzen 5 3550H (4c/8t, 3.7GHz on boost) actually performs worse than a i5-7300HQ (4c/4t, 3.5GHz on boost) on single core tasks. This 4-5 years old i5 can beat my 2 years old Ryzen 5 on single core tasks (on some games, my Ryzen 5 can only reach 60% of the i5 performance). But the Ryzen 5 can easily beat the i5 when it comes to multithreading.
I'll just leave a shout out here to the guys at Gamers Nexus and their unending dedication to push the envelope of proper testing methodology and journalistic conduct.
@@JKSSubstandard If only they would present their stuff in a more engaging way instead of reading numbers from graphs minute after minute in this monotone voice. That being said, in the reviewer list there are people like Steve and Roman (der8auer) and then iJustine who unboxes sponsored Apple hardware and goes "woooow it is so beautiful" - lmao. And for some reason the list cuts off at P, I guess bad luck for everyone past that :D
@@JKSSubstandard I think I can see the reasons why. The great thing here is that while one might not find him sympathetic, his numbers are always spot on. And in the end, that is the most important thing, the sweet sweet data.
Who doesn't like Steve? Gamers Nexus videos aren't always the most entertaining, but when you are actually looking to make a purchase their channel is the best for comparing similar products - especially cases
I've always used the toll booth analogy. I'm not sure how accurate I am, but I've always said clock speed is like how fast each car can get through a booth, your cores are how many booths you have, and your threads are how many lanes you have. Depending on the process that goes through, you could potentially only be able to utilize a couple lanes regardless of how many you have (like if only 2 lanes are open that day) so speed matters a lot more than if traffic was able to split up and utilize all the booths.
The miner analogy is great for overall subject matter. I have found that higher gigahertz sometimes can beat out IPC in situations where latency is a bigger factor. Like competitive shooters. If you think of where the miner has to take the ores to. Say the trucks outside. Having a faster worker who can take faster trips down and back up might be more beneficial than having more per pile. Because a truck can only carry so much in one load. This is also why that cache analogy matters. Bottlenecks happen on many levels in a computer. This is the difficulty of computer performance. Sometimes edge cases will forever be edge cases. But sometimes overall lapping in areas of an industry where a certain methodology of improvement is preferred. Might be better, because some areas develop slower than others.
in fact it often does in some codes. Real world is more complicated, including memory bandwidth. So I belive that the future will be about optimizing designs for specific tasks and having heterogenous cores. Chiplets can help in that sense. An interesting example is IBM Power10 it can run 8 treads per core. It's simple a throughput beast. Apple A15 is another case: let the integer resources the same, but invest a bunch of transistors on AI an increase graphics by 50%.
No, Gigahertz will never beat IPC. If you don't believe me, try a 5 GHz AMD Bulldozer CPU and see how that turns out. :-) Higher frequency can sometimes be an advantage if and only if the IPC is very close between the two CPUs, but the truth of the matter is that this is hard to tell. There are too many variables in a PC that can tip the scale one way or the other.
@@ruxandy in some codes it will. For example a fully pipelined loop operation with a small dataset will run faster in simpler CISC machine than in a superscalar one, since there is no horizontal parallelism to exploit. Clearly one can always go to extremes but within reason it will. I selected CISC because it has memory memory instructions. Without that we need to remember that the horizontal parallelism of memory access and operation was one of the most important reasons for superscalar designs.
@@ruxandy this is the conundrum I explain in my second paragraph. Many factors can interrupt true performance gains from practical to principle. Changing the architecture, changing the cache, changing the frequency. The term never is a strong statement to make. Especially when there are millions of programs that all perform differently. Sure, maybe higher frequency will never beat IPC in theory, but in practice there’s way to many variables to truly nail down 1 king of performance increases. High competitive shooters were the ones I tested for and found my results.
This is very simply not true. A cpu operates at multiple million clock cycles a second. Even a fluctuation of a million or two cycles per second won’t, on it’s own, ever result in perceivable latency. What matters is how many frames the cpu can process in per second. If a processor running at 4ghz can do 200FPS, and it’s competitor at 5ghz can do only 100FPS, then the one running at 4ghz has a 5ms lower latency than the one running at 5ghz. What matters is the end result, as I think this video quite accurately explains.
@@fringeanomaly9284 2:03 They started off the vid with underclocking two different CPUs side by side to show the effect of clock speed. Overclocking is the same thing in reverse but generates more heat.
To be honest, I have been watching Linus for like 5-6 years now from the time I built my first PC to today being a major in computer science, and the amount of knowledge I have gathered over the years from this channel is nowhere near what someone can learn in a decade or two. I am very grateful we have people like Linus and his crew nowadays teaching this information for FREE on the internet for people to have a broader mind and understand some things more than what is just advertised for the general public to know. With all due respect to Linus, the Canadian guy over here I understand just why he is so successful and how much work, information, and investments he put into this channel to make it comparable to real-world education.
Unrelated but My hypothesis is true that what you watch is how u shape your mind. Example if you watch videos like serial killers you'll likely to be one. But if you watch videos like this computers you'll likely end up with it. As for your comment just watching linus made/inspired you become a comsci major. I'll be watching relevant stuff now 😁
@@makisekurisu4674 An Apple Fangirl + Reviewer. Nothing says she can't be both lol. At least she's useful for honest opinions that other Apple fans will relate to
I remember back in the 1990's, MHz (and later GHz) was the big bragging point (HA! my i486-50MHz is way better than your i486-33MHz). Then somewhere around the early 2000's it just seemed to stop mattering, ads stopped promoting it and people stopped talking about it. I always wondered the reason it kind of became a non issue.
@@FLYSKY1 so you are walked on computer and lived on keyboards from the first moment of your life :))) I thought you are at least 60 years old one of the pops :)))
I know it would've taken a while to come up with a miner analogy, which came really close to the actual CPU. Little things matter a lot: L1, L2, L3 caches, inclusive and exclusive caching, branch prediction, interconnect bandwidth, IPC, clocks per cycle, and the architecture itself. These things are so complicated that it would take an average person 1 to 2 years of studying just to know how they actually work.
@@sonicboy678 I'm glad you mentioned this. I buy equipment for my company and I get so tired of dealing with this issue. I'm really good at my job and have 35 years experience in electronics. I do calculations and specify exactly what I want for each order. I check inventories and make substitutions if needed, so I'm aware of market conditions. Oftentimes the vendor substitutes equipment saying it's the same, due "market conditions" which always means they got a deal on bad equipment that smart IT guys have rejected. In one instance, they substituted Intel 4770 CPUs for Intel 4790s, stating that the differences were too small to worry about. However, when we deployed them we got a lot of complaints. We tested 5 machines in our environment using actual user workflows, some disk intensive, some cpu intensive, some multithreaded, some not, and found that the 4770s took 3 times longer to complete disk intensive workflows, and they always took at least double the time to compete any workflow. The tests ran for 7 days, giving consistent results. Even with that measurable evidence the vendor claimed they were basically the same system. The motherboards were altered slightly by Lenovo so they could fit them into cheaper cases, and substituted generic RAM for Crucial RAM, so I don't know the specs. The coolers were the same, The chipsets, video, nic were the same. I don't know if the performance difference was solely due to cpu or if multiple issues such as the board design, RAM, or some firmware changes may have also affected it.
software vs hardware, those initialism exist in different namespaces. In human programming language titling the video something hardware related is functionally equivalent to "using namespace hardware".
Russel Kasem the XP and x64 Athlons, bad chipsets killed them, too many bugs unable to keep up with the mighty Pentium 4HT giants, HOT! Core 2 just wiped AMD away... gone forever now.... TSMC now....
Yes, while the very first Athlons were numbered after their Mhz, the Athlon XP used a numbering scheme that reflects what similar performance for non-XP Athlons. For example, the Athlon XP 1500+ would be 1333 Mhz but it's suggested to run as fast as a regular Athlon if it was OC'd to 1500 Mhz.
@@lucasrem Those Nvidia NForce chipsets were pretty solid if I recall. But really my whole comment was regarding the campaign Intel had at the time when they used the clock frequency to differentiate themselves from AMD. On a box a Pentium 3 at 1.8Ghz was "better" than an AMD Athlon XP clocked at 1.4GHz (1800+) because in the eyes of consumers, clock frequency was everything.
For us car guys and girls, GHz = RPM IPC = torque Core count = amount of cylinders Logical cores = SOHC Vs DOHC (hyper threading enabled vs disabled CPU Cache = turbocharger Amount of CPU Cache = turbocharger boost pressure
Having a GPU from the last 18 months, owning a classic benz autocarriage. It will be worth more the less you have used it, and will only have resale value as a collectible after another 12 months lol. It will be like owning a stock Fiesta that you paid three times the stock Fiesta price for :D
Hey I think this was a great video! I think it could’ve been improved by directly calling out 2 computer architecture laws that were touched on by the mining analogy: 1. The Iron Law: performance = (cycles/instruction)*(time/cycle)*(instructions/program) 2. Amdahl’s Law: total_speedup% = 1/((1-portion_enhanced)+(portion_enhanced/speedup_enhanced))
What I know is a CPU architecture is going to have an effect on this. What I also know is there are many programs that deal with data processing that can break data up into chunks, and the faster you can clock the cores, usually the faster you can process the data, up to a point. If this functionality can be maintained in L1 - L3 cache so you don't take the IPC hits from going to RAM, then faster core speeds make a difference. Rendering video is a good example. In which case, the instructions/program has no bearing because the thing that takes up a long time is the actual render, which is running a small number of functions when it's doing that merge of video with whatever it is that's being added to the video, such as CGI. What I also know is the title of this video says I can clock a CPU at 1GHz or 5GHz and it doesn't matter. Except it does, so I didn't bother. I'm trying to stick to a low "sensationalist" diet right now, and simple logic says that every CPU architecture is going to hit a limit, to where some part of that architecture won't give better performance once the cores are clocked to a certain point, but for some architectures in the past that point couldn't be hit in gaming, because power consumption would become too great before that limit was hit.
@John Doh that's why so much work goes into lowering power consumption. they had a Pentium 1, or 2, and I forget what it was, overclocked to 5ghz! I think the power draw was 700 watts, just on processor. They are trying to scale alot better today like a balancing act.
The average watcher isn't going to have a good frame of reference for these laws. You'd have to spend a lot longer explaining what speedup actually is, big(O) etc. This is basically an accessible TLDR video.
Does CPU Ghz matters? Yes only when compared within their own family(for example comparing cpu's of intel family) You should never compare cpu's or gpu's Ghz which belongs to different family(like comparing a cpu of AMD with a cpu of intel).
@@anxiousearth680 IPC improves each generation and corecount increases as well so you can't really compare different generations And an i3 consumes less energy, and has less cores and is designed for other areas of application compared to an i7 for example so they're also not really comparable - even though they should perform the most similar when looking at a single core bench with the same clock speed
@@jodamin Yeah the number of cores is more important than clock speed for sure. You can get a dual core pentium or AMD processor pushing 3ghz and they're slower than an Intel I5 quad core running 2.7ghz like the one I've had for a long time.
@@Scnottaken and who is that? do you think it's possible to both be in the RUclips comments and also only watch fox as a source of news? CNN literally had employees say on hidden camera that they lie and yet here you are attacking fox which last I checked nobody from fox News has been caught on camera saying they lie and hold an incredible political bias
What Linus is suggesting is exactly what intel and AMD do now and it makes identifying how one cpu performs vs another. Back when CPU’s was called something like a 386/16 or a 486/25, everyone knew 386 or 486 was the processor’s class and 16 or 25 was the clock speed in MHz. CPU’s of this era also had the letters DX or SX added after the MHz to indicate whether the CPU had an integrated Math CoProcessor on the die or installed as a separate chip. Later, once intel introduced the Pentium class of processor, AMD and other competitors CPU architects became less of clone of intel and began to diverge performance wise. Rather than label their CPU with a lower MHz than intel, AMD and others began to use a number that was supposed to indicate what Intel CPU’s performance they matched, so a CPU may be named C6-266, it was only 166mhz, but the manufacturer would prefer you to ignore that fact and instead focus on the 266, so you can pretend like your CPU is actually 266MHz. Eventually, after intel ditched the Pentium moniker for their processors, they also began using an alternate set of numbers to indicate relative performance instead of MHZ. That is why we don’t have an intel CPU named Core2Quad2.4GHz, we have the Q6600. Same reason we have the AMD Phenom 8320 instead of the PhenomOctaCore3.5GHz. intel introducing the Core i series processors and AMD introducing the Ryzen line of CPUs has only made it more confusing.
Actually 386 never had integrated math processor. SX had 16 bit data bus and 24 bit address bus while DX had 32 bit o both. In both 386 and 486 the SX versions were introduced to kill competition that made faster versions of the older chips. Intel actually had 487SX math coprocessor which actually was a full grown CPU that disabled the original CPU.
This video is very timely. I was just having this discussion with a very knowledgeable person in the comments on another channel quite recently. Even though i learned a bit better about how subjective the term "IPC" is, i realised even more that all these leaks of "such and such product has X about more IPC" are meaningless because none of those leakers know what is being measured. It's almost a worthless metric except in direct, open comparisons - like in reviews.
Well in the end all the numbers are just there to please all the spec sheet warriors, as per usual. In the end, real life tests are what will prove true performance. It's just like when people buy cars off of reading about their sales paper describing how powerful they are, and then not giving a second thought to anything beyond that single-paged paper. As if the performance of a pc component could be summarized in a single piece of paper in the first place.
Yes exactly as put, a cpu manufacturer can say 200% increase in ipc, but that could just be for floating point performance, Fortunately often times ipc increase manufacturers claim are somewhat overall,
@@ignortotal360 I think spec sheet listing can be compared to puzzles that market themselves with the amount of pieces they have. Sure you get the big number, but you don't have an understanding of how said numbers interact, their individual complexities, nor the quality of the final picture once assembled. All you have is a singular metric on which you can compare with other products. No more, no less. Like with CPUs and GHz (or spec sheet numbers in general), unless you have a desicription of every single component and the capability to understand how they work together, then listing a singular metric for one of them doesn't do anything except allowing you to compare with products that are factually otherwise identical (like when overclocking). Same with core/thread count etc.
I loved the video, but I'd also really enjoy a companion video with more deep-dive-elbow-greasy-nitty-gritty details about how the current two current-gen competing high end architectures, well, specifically _how_ they process things differently. And then, in a few years, when architectures change significantly, another review to compare how processing is being done then, compared to the last review. Would be cool!
Deep dives is GamerNexu's territory. Steve can recite all 17 different memory timings by memory, he'd trounce the subject in a video about chip architecture differences between AMD and Intel.
@@lfla0179 Right you are, I didn't forget about them! Their content simply takes a somewhat different form, with their narrative more focused on technical details, whereas LTT focuses a fair amount on its own pah-zazz. And, hey, if you take a look around some of LTT's/LMG's videos, well... Some of them really are deep-divers as well, although how this is conveyed depends on that video's topic, not to mention that video's presenter. After all, Linus has a pretty competent bunch of people that he happens to be lucky enough to call his employees! Personally, I would enjoy it if either of them made such a video that I described above... Or BOTH! :D
@@inrevenant JayZtwocents is more of a IFIXIT kinda guy. Hands on. GamerNexus dissects the corpses. Butcher the vendors sometimes. (Gigabyte psu corpse rotting on the table) LTT is more: "if you want this functionality, buy this, if you want these other 3 things, get that another" kinda guy. And all 3 intersect.
Yes, IPC is a very big factor Take amd of 2012 for example, Even though the fx series had more cores and threads, it didnt outperform an i5 of that age because of poor IPC performance Same is when amd came back in 2017, at first ryzen was behind, but zen 2 and zen 3 made ryzen topple intel with its better ipc I wonder how intel compete with its 14nm superfish finz +++++
I know ltt is the largest tech channel out there but them listing a bunch of other creators out there to spread the love really shows why this channel deserves every sub.
@@SimonBauer7 Ironically, He was never the VIrgin...He literally was married from the start. It's actually Luke who was the real virgin at start despite having a beard...lol
@@hubertnnn Not exactly, no. If all you do is single threaded workloads like browsing the web and gaming, it doesn't matter if you are on a 6 core 5600x or a 16 core 5950x. The difference would be negligible, despite one having more than double the cores & threads. Number of cores has no relevance to real world performance in day to day applications, unless you are a power user into 3D modelling, compiling, rendering, video editing, etc. which is less than 1% of all computer users around the world. For most people, core count does not matter these days, especially as the lower-end CPUs start having 6 cores/12 threads and such.
@@jonny6702 It depends on the workload. But that is the property of that workload, not the CPU. The era or single threaded applications is already gone, so you can assume that other than some low budget indie games and old games everything will be multithreaded. Plus web browsing is multithreaded for last few years too.
@@hubertnnn Yeah, but web browsing still doesn't need more than a couple cores even for things like video playback. It doesn't matter if you have a 5600x or a 5950x for web browsing, and it wont for as long as that hardware is relevant. Also, it's very hard to make games multi-threaded which is why most use 6 or less threads. It's very niche for a game to actually be able to efficiently utilize a lot of threads. It's not feasible for most games due to the nature of how video games operate. There's a limit on how much you can reasonably multi-thread before it makes things less efficient. In other words, the benefit goes down as you increase thread count because you make it increasingly hard to synchronize all the work loads. Only stuff that doesn't need to be syncrhonized can easily be multi-threaded, and that isn't a lot of the games workload overall. That's why most games will have 1-4 threads that are really dominant, and maybe a few other ones that do light work. Gaming will be single-threaded dominant for as long as current hardware is relevant. My point was that the things that 99% of people use their PC for will run just as good on a modern 6 core as a modern 16 core. That includes gaming. Sure, there are a handful of niche games out there that have good enough multi threaded support to use more than 6 or so threads, but even then, there is probably less than 5 games on the market right now that will use more than 12 threads - all of them being simulation/RTS style games that have a gameplay that allows for multi-threading. A 6 core processor is fine for that. 99% of people don't need to look at core count on modern processors as it doesn't make a difference for what 99% of people are doing. Literally only power users use multi-threaded heavy workloads. That's it. Single threaded performance is all that matters for 99% of peoples real world use cases for Windows.
I like your miner analogy. When I used to sell tech to people I would describe it like this "imagine a three year old walking next to their mom. Each step a 'clock cycle'. The three year old takes a bunch of little fast steps to keep up with mom taking one or two large steps. If you're just counting steps then the three year old is faster, even though the mom is traveling a further distance with each step."
Not to mention SW optimizations and specialized accelerators on CPU. For example AVX instructions don't run at the advertised core frequency but can process specialized workloads much faster.
Video decoders and matrix accelerators are two that are entering the market at full speed. And yes SW optimization can lead to orders of magnitude performance gains in extreme cases.
@@DaroriDerEinzige now you miss the other part which is the transmission , in a traktor the transmission has low rotation but very strong so it can carry heavy stuff , go learn before arguing.
There are ways to calculate stuff - I learned it in uni and forgot again 🤣🤣🤣 But it also depends on what you are running: How much of the work can you do at the same time? = parallelism Imagine having workers and every worker has a speed (cpu frequency): You have some tasks they can all do at the same time and some tasks where they need to wait until another task has finished. So for example: You have alot of letters that needs stamps Every one of your workers (cpu core) can grab a letter and some stamps and get going. So now imagine you need to write the letter first. That takes a while. And only one of your workers can write the letter. So you can stamp Number of workers letters at once and are really fast in doing so, but you will still be super slow because you need to wait until the letter is actually written beforehand. So the speed in this case depends more on the speed of a single worker (cpu clock speed) In the case before (letter stamping) it depends more on how many workers you have than how fast they actually are. In real life CPUs will always have to do a mix of both. They will have independent processes that they can just give to a cpu core and forget about it and then they have stuff that depends on each other. For video games for example some video games are also not optimized to run in parallel Meaning: there are things that are independent of each other and could be paralelized but aren't. Meaning: this game will be heavy on clock speed Some games are highly build for parallelism, so they run smoother on CPUs that have alot of cores. So in short: it's not even possible to give a correct answer. Because it's not known hoch much parallelism vs single clock performance you actually need 🤷🏼♀️ You can give a generel guesstimate (more cores better, more frequency better) but you can't know for sure. And this is not even considering small differences between the CPUs where the manufacturers optimize certain things. Like Linus mentioned like branch prediction ect. But that's not even all: some CPUs are completely differently build. Think about your phones: they use arm processors because they use alot less power! They internally work extemely different than x86 Cpus! Long story short: it's complex
nope its simple. ghz more matter when we talk about intel and amd cpus as cisc. but for more than decade all cpus are risc that emulate cisc with microcode, so when you make faster microcode you get more performance. intel pentium was 1st that could run 2 simple instructions in one clock in parallel aka UV pipelines, pentium 3 had 3 pipelines or even 4. btw risc in 1997 already could run 7 instructions if they dont collide with each other. and now we come to exploits that allow to fool microcode, and when they started to patch those holes suddenly cpu lost 20-30% of power. modern risc cpus probably have more pipelines next today cpus have multiple cores, so if you split work to every cpu core, you can outperform faster cpu that has less cores. btw we have now 64bit cpus, but powerpc last pure risc used in xbox 360 and ps3 was 128bit. today risc i think already is with 256bit registers
I learned about this back in the day, when my old A8-7650K OCed to 4.3Ghz performs worse than my friends Core i5 4590 even it only got 3.7Ghz. But the overclocking of the A8-7650K from 3.8Ghz to 4.3Ghz is really beneficial, I got 15fps improvement in games back then. So its clear that Ghz is not the real measure here, but pushing it to the limits will give you more performance.
One of my friends once said that his Core2Quad CPU from 2006 should run like the Ryzen 5 1600 just cause he OC'ed his CPU to match the new chip Ryzen.. Probably one of the most hardest facepalms i've had. Just cause your CPU runs at 3.2Ghz or etc like the newer chip doesn't mean you'll get same fps as the new chip..
@@andreewert6576 I remember a few years back looking back and thinking AMD were better than Intel because more GHz and then realising how wrong I was later down the line.
Thanks for covering this. For decades this consumer industry has been so ignorant and arguing with each other, based on clock speeds to support their argument why their favorite device/hardware is better. Cpu and phone are by far the worst and “discussions” are insufferable.
I'm now 20 years old... I can't imagine how fast a CPU or a GPU will become in maybe 20-30 years from now! I think it must be crazy, but hopefully I will know it one day
Strangely, it's not guaranteed to improve that much compared to the previous 20 years. We've been in a plateau of slow increases for years now, because for most consumer tasks the vast majority of available hardware is now 'good enough'. The focus then shifts instead to efficiency. So in 20 years time they may not be terrifically faster, but be much smaller, run on *way* less power and produce very little heat. This is especially likely because there are very few mass market applications for more power but tonnes of mass market applications for reduced size and increased efficiency.
@@Void_Dweller7 Absolutely, yes. We have already seen this in the last few years; a modern budget smartphone, tablet or laptop is more powerful than the highest-end versions of each of those devices when they were first developed, and as scale of production increases, it gets cheaper and cheaper to make 'good enough' devices. You can now get a smart watch that does biometrics, fitness tracking, GPS, calls, etc. etc. for less than £100; that's a James Bond level of wizardry, for the cost of a week's food. It presents its own problems, however, and we're already seeing them. As tech gets cheaper, the likelihood of repair goes down, and the likelihood of replacement goes up; consequently, the amount of devices ending up as e-waste increases dramatically. This is already happening and will only get worse. Our generation's legacy may be a layer in the fossil record made entirely of smartphones and small gadgets.
I remember when we (at work) were doing a hardware replacement of our IBM 3090 mainframe with 3 CPUs which was being replaced by a IBM 9672 with a whopping 10 CPUs but barely half the speed. Our lead tech guy thought it was crazy and would never work. To say the least, there were some performance issues as we figured out how to tune the system to run on the new hardware. After we adjusted the number of read /write threads on the database (on the old hardware I think the number was 1), it performed wonderfully. So well in fact that our storage subsystem was now the bottle neck in getting maximum through put. A few years later, I remember trying to test out running UNIX on the mainframe and the Unix engineer I was working with was convinced that shared memory wouldn’t be enough-8 years later they were running virtual Unix machines just not on the mainframe. I find it fascinating that so many of the innovations that I saw 30 years ago with mainframes are now happening with home computers.
Thank you soo much guys, after the oversimplified video on "specs you should ignore", this was a very needed clarification, and the explanations are apropiatelly deep for anyone to understand why.
So informative! I had some questions about this subject recently and this answered just about all them. I totally get it now and know how to be skeptical of this spec going forward. Which is all you really need. Of all the things of learned from LTT and similar channels, the ability to finally understand spec lists is one of my favorite.
I think they recently solved what was really going on. It turns out that some of the previous recommended paste wasn’t that great as smaller form factor of cpus would turbo inconsistently due to the molecules of the paste not making enough contact in the very small channels that are on the surface on current CPUS. I believe Linus has a newer vid covering it.
This feels like a long format tech quicky which I would rather see on that channel. I also understand how this concept is also an important one to people newer to the space.
AISURU.TOKYO/machiko?[Making-love]💞 (◍•ᴗ•◍)✧.*18 years and over✿◉●•◦•°• *YOUTIBE: THIS IS FINE* *SOMEONE: SAYS "HECK"* *RUclips: BE GONE* #однако #я #люблю #таких #рыбаков #Интересно #забавно #девушка #смешная #垃圾
It doesn’t make any sense. Clock speeds are one of the only advertised metrics we have to use to compare. Especially for GPU’s. This was a useless video and I that comment was even worse.
3:27 Believing in Netburst was such an oof moment for everyone in the industry when it was new. Hindsight is always 20:20. It was a heck of a fun time watching the P6 architecture get a second wind with the original Pentium M platform, which morphed into the Core architecture, and onwards.
Yeah but those Netbursts temps and clocks were of the charts. IPC was very crappy though, the first batch of pentium 4's were slower than Pentium 3's even at higher clock speeds. And those 130W cedar mills/Prescott power hungry chips were freaking awful. Fun times though...
I like how people still think that Pentium 4 was hot like star and needed nuclear power plant, but it was still very cold and low consumption compared to today CPUs. 🙂
@@DimitriMoreira Prescott was not bad, but it was better to buy Celeron D, Pentium 4 was not worthy for that price. I am just testing Pentium 3 Tualatin and it's not that good as people say, later P4 CPUs are much much more powerfull, but it's probably done even by much faster rams, comparing SDRAM 133 MHz to 400 MHz DDR in dual channel is really difference. Tualatin had advantage in efficiency and low consumption, watt/performance ratio was much better, but wen you compare raw performance, it's not comparable with later P4s. But even P4 was still low consumption CPU compared to today CPUs where you need 2kg cooler on that.
I don’t really judge by Ghz as such I typically judge how the strength of the individual cores in benchmarks, as well as reviewers using the said processor to run the programs/games I’d wanna use. It’s not easy to determine the power of a cpu being low except for when it can’t keep up with your usage. Not really sure how else to go a out it tbh. But I just get the CPU I feel I’ll need personally. I could’ve bought an i7 8700K back in 2018 but I chose an i3 8100 because it was just pointless having the extra power I wouldn’t take advantage of
@@gamamew well that's what Austin Evans thought, but Digital foundry X Gamers Nexus did a proper analysis and found out it was perfectly fine and an improvement in some cases. Austin really just half asses his analysis, which is a shame.
he said what the guide is, just look at reviewers, comparisons.. More specifically in what you need, if you need a gaming CPU or a server cpu or for whatever reason, just watch which cpu performs better in THAT aspect and get it.
I know people who don't even base CPU performance in GHz. They do it based on whether it says i3, i5 or i7, no regards for the generation, and absolutely don't even know what AMD is.
Well. There are other bottlenecks. Some applications do well with multiple cores and others use only 1 or few. Then there is the instruction set. SIMD and MIMD instructions give great performance boosts for tasks that require them. Cache levels and their sizes are hugely relevant. Pipelining and prefetching. How many PCI lanes are on the board. Everything plays into performance.
Gigahertz is only part of a formula that determines the "speed" of your CPU. Imagine the frequency of a cpu as the speed a wheel is spinning, while the IPC is the size of the wheel. A bigger wheel spinning at the same speed will travel further. Edit: just finished watching the video, the mine analogy is way better. There's a reason Linus is the one making videos
I've been trying to use mechanical and automotive analogies to explain this situation to customers for years and I was also blown away by the miner analogy.
They think that way because that is the way it was 20 years ago... it's leftover from that. I remember having the first 1Ghz Athlon T bird chip... good times.
Whoever came up with the mining ⛏️ talking point definitely deserves a bonus the mining analogy really made everything makes sense especially to people that don't understand everything about PC
I can 100% agreed with this from a personal use case, I have 2700x(8core)/3070 combos in my Machine and I get 20% less performance in gaming than my friend who has a 5600x(6 core)/3070.
AISURU.TOKYO/machiko?[Making-love]💞 (◍•ᴗ•◍)✧.*18 years and over 🍎🍑 *YOUTIBE: THIS IS FINE* *SOMEONE: SAYS "HECK"* *RUclips: BE GONE* #однако #я #люблю #таких #рыбаков #Интересно #забавно #девушка #смешная #垃圾
Sentences like "the faster the gigahertz" and "gigahertz, also known as clock speed" should hurt your Comp Engineering brain. GigaHertz is literally a unit, like Celsius. No one would say "the higher the Celsius" or " Celsius, also known as temperature". And I'm 20 seconds into the video. Nitpick, but it's really lazy writing for such a highly-esteemed channel
@@eriklowney One thing is teach it to computer students. Other is to hobby and curiosity. I don't think some words would ruin the ideia, this video was great for someone with almost none idea of the matter.
@@DanielGT_93 You have the illusion that this is great, because you already know the underlining ideas. Essentially for the common everyday people you should just say that IPC is like multiplier and the clock frequency is the number that you multiply. The number you get is the performance. Then you can go on and explain that you pretty much cannot increase the clocks, as this video stated, because it would require too much power which would cause the chip to decrease or melt. Then explain what parts can increase IPC and why. For that purpose I think this video didn't explain it well. You would have to explain it more if you want people to understand at all why branch predictor is so important, how it allows out of order execution etc. and while caches are quite self explanatory I think more info would be in order in the same sequence where you explain branch predictor. I'm computer scientist and know quite well how CPU work and how to optimize code based on the architecture, and at least to me I didn't get this analogy. Mining overall is not the best analogy, for example chefs in massive restaurant would be better. As there's all of the processes, branch predictor, you can start to get certain ingredients close in the cache and/or start to cook things that haven't been ordered yet but which you expect will be. And you can save partially cooked things in the cache, instead of taking them into some hot/cold storage that is much farther away. That example would also allow parallelism by having more chefs in the kitchen (could be analogy for SMT), or more kitchens in the restaurant being analogy of more cores.
@@juzujuzu4555 Really liked the chefs in a big kitchen analogy, mutch better than the video! I agree with you, that for really understand this video have some bad mistakes. But in the other hand, most people are not computer people. They are here in Linus just to try and find the best PC to play games or edit photos. When i started with cameras and dlsr, i dindt knew a lot, so i watched some videos, read some blogs and foruns to buy my first used camera. Someone with better understanding could probably see mistakes in the videos of Tony & Chelsea channel about cameras, but i didnt, and the videos where preaty useful to buy my first camera and be an amateur. I'm going to dig deep into image sensors? maybe, maybe not. If the person really want to understand computers, study on some university, the correct way. RUclips is always superficial
@@DanielGT_93 In the case of superficial information, you just need the frequency X IPC information. If you try to elaborate it any further, you need to tell it better. The only info anyone really gets from this video is that "GHZ isn't the same as performance" as anything else really isn't explained in any way that improves the fundamental understanding of the CPU. That chef example I created for this comment, never heard it anywhere, so there obviously are lots of good examples and probably better ones that anyone could understand with common sense. But I get that this video wasn't really trying to educate people on what happened in the CPU so can't be too harsh. Though I have seen much better videos on the subject on RUclips, so anyone who actually gets interested certainly can find the information here.
When I first heard about this, my professor called it the "Megahertz Myth." Processors in the multiple GHz range had only come out a few years earlier.
Goodhart's law - "When a measure becomes a target, it ceases to be a good measure." Practically any formal test we care to design would result in products being tuned for optimal performance in that test, rather than over all.
Doesn't make it a bad measure though. Automakers for example certainly optimize their cars for specific crash test scenarios. That does not mean those crash tests cease to have meaning, because if you were in a crash that is similar (e.g., small overlap crash), then those optimizations will save your life. The key is to ensure your test scenarios represent real-life scenarios as much as possible.
@@johnhoo6707 good point. There is also an exception to every rule. Although taking crash test as an example; motorcycle helmets have a European and American standard and helmets struggle to pass both due to conflicting test methodology. Fort Nine has an excellent video on it.
I never knew much about how CPU´s work in deep detail, but with all those factors and parts that come together it somehow feels like a car to me. Basically a car is not complicated, but if you want a performance car you´ll see that almost every single component can be a universe on it´s own.
Exactly. A model T consists of a handful of components and can be assembled using only a wrench and a screwdriver, but... everything in it is just basic. Nowadays, you want power, comfort, fuel economy and many other helpful features. That's why today's cars are insanely complex, it's the only way to achieve all of that
Even worse, people that think "i3, i5, i7, etc" instantly means it's a good and modern CPU, but just about every single one of those has spanned over a decade with largely varying quality. It's like when my dad bought us a GeForce FX 5700 in the 2010s. It didn't even run Oblivion and was worse than our computer's on-board video.
Just talking to someone about this the other day. They thought an Intel chip was better than an AMD chip, simply because the Intel had a slightly higher clock.
Photons are quicker than electrons by about 100 times. Photons are the particles of light with no matter, electrons are the matter which reflect thelight. And light goes way faster than matter.... Photonic CPU's will be the future and we won't use electrons at all, fibre cable CPUs effectifely.
Electrons themselves usually move at speeds of a few millimeters per second. Luckily for us, the speed of the electrons matters a lot less than the speed of the electromagnetic field propagation. That moves at light speed. So we likely have a hard limit of ~6.4 Ghz for a 3cm x 3cm die. Assuming signals need to be able to travel between opposite corners in a single clock cycle, and a lot of other stuff
Another analogy, if a journey takes an hour by road or rail including loading and unloading, you can increase the number of passengers per hour by having a larger bus or train but the journey will still take an hour.
Does CPU GHz matter? No, but yes. Maybe.
Ay hi harry love you
Maybe
When is the new meet the Cores
да нет наверное
That could've been "long story short", but you forgot Nord Pass.
I love how any question related to PC components can be answered with "it depends"
It just goes to show the complexity of what goes into PC components... There is a reason it takes massive companies with numerous engineers to make these things
Well that depends
No, there could be the answer "wear depends" to someone saying the FX8380 is the fastest CPU for clock speed.
@@sniperlif3 well that depends
@@sniperlif3 you mean 9590
My wife says that size doesn't matter but I say you can do more work at a given frequency.
Lol.
Lmao
🤔
LMAO
69 likes 😏
Mad respect for sharing the names of literally every competitor at the end there.
Well are they really compeditors? Its not like you cant watch more videos a day than Ltt uploads, and the only competition that really exists is the subscriber amount.
I just think it's a nice gesture under "industry colleague's"
Why in the hell was ijustine there though 🤔
Why isn't OC3D TV not in the list?
Not really. Some of those names thoroughly test things, while others on there just say how much they like to use a phone or computer and don't run any tests at all. They all do tech content, yeah, but differently, Linus is in the middle.
Pauls Hardware got shafted tho
The analogy I've always used, is that if CPUs were cars, comparing clock speed would be like comparing RPM. 10,000 RPM is quicker than 5,000 RPM, but gearing makes all the difference.
...and 10k RPM on a single piston .5 liter engine is way less power than 5k RPM on a 7 liter V8
Great analogy!
I feel like that'd be really helpful if I was any good at cars, haha!
That's actually a really bad analogy, it would be like comparing a 4 cylinder from the 80 to today the hp jump is pretty insane, but they should pretty much pull the same shit.
Cores are like cylinders. Clockspeed is like RPM. IPC is the power produced by each detonation.
Putting that list of tech reviewers at the end just shows again why I love Linus!
Cos your gay?
Hahaha jk
You replied really fast... Guess I really triggered you huh?
@@jesus2621 don't you want a thick rtx 3090
Also, they didn't even put themselves as a reviewer. The obvious thinking is "well we know they review stuff, and we are on their channel" but other youtubers include themselves in lists they make a lot of the time.
Anti-Marketing 101
"The larger the font of a statistic the less it's likely to matter: the smaller the font, the more likely that stat is going to be a significant factor for smart shoppers"
That's why I always ignore the front page of an item site and go directly into detailed specs.
words of the wise
In 2008 a professor in my engeneering degree proved with math using the light speed that the cpu coreclock won't be going nothing crazy in the following years, 14 year later, our home/pro cpu's really stayed around the same clocks...
thank you that really helps me understand physics bottleneck in all computer components now
Thank you for including me in the list of reviewers. I didn't expect to see one of my channels listed. That made my day.
As a computer engineer, this video was spot on. A lot of the topics mentioned in this video prefaced all the cpu architecture classes I took. Nice work LMG!
Agreed. I like the fact that Linus obviously actually understands the topic and is *explaining it* not just reading off a script.
Very glad that branch prediction was mentioned. I personally would have considered touching upon the topic of Instruction Set Architectures though (eg. RISC)
@@alanevans403 They are obviously reading off a script. Why do you think that's bad? Something can be both explanatory and scripted.
@@Cryptix001 I think he is referring to the fact that Linus probably wrote the script himself, or, at least, was somehow involved in the writing of it.
Yea this is like 8th grade computer science I learned back in the early 90s, basic information any enthusiast should have learned on day one.
It's funny how they're basically so complex that the easiest, most efficient way to calculate their performance is to just run them. And we can't even agree on what we're trying to measure 😅
Yes. Summary of the video.
i figure if i buy a cpu in a specific price bracket, and come back in 3 or 4 years and buy another cpu in the similar price bracket, its usually an upgrade. this is not accounting for the semiconductor shortage of course, adjust accordingly.
Easier to us. The manufacturer sure knows how to measure them properly but this isn't good for the business so... here we are swimming blindly.
@@LordSevla The thing is that the actual performance improvement in a specific application cannot be predicted easily even if you knew exactly what was changed. You also need to know exactly what instructions some program uses, in which order, etc.
Sure amd and intel can predict improvements, but until they actually run it (maybe in a simulation), they don't know exactly.
A single core has many optimizations already built in.
There is branch prediction (predicting the result of a conditional jump to another instruction) and with it speculative execution. Then there is pipelining, each instruction is split into for example 5 steps and a single core can then run each step at the same time. So up to 5 unrelated instructions can be executed at the same time. With that comes out-of-order execution, where a cpu can run an unrelated instruction a couple instructions later actually before to fill gaps when related instructions follow each other.
Different applications have a different reliability on cache, memory latency, etc.
Thats just what I remember out of my head. There are so much more things to consider. For example with Zen 3, they improced tje amount of loads and writes they can do per cycle etc. etc.
@@LordSevla Of course proper measurement is good for buisness, that is absurd.
3:30 There’s also this pesky thing, the speed of light. At 10GHz, the distance light (and therefore the maximum anything can go) can travel in a single cycle is about an inch. That would really complicate design and require much smaller, denser, board designs.
Computers work with electrons which is not the same as light. Photons are the light particles and much faster than electricity (electrons). That’s why there are talks of making a paradigm shift towards light based computers
Oh.... who needs LEDS when you can do a submerged build and have a cherenkov blue! ;-)
@@JoseDiaz-tf2ql The electrons themselves travel slower than the speed of light (slower than a person walking, even, for that matter), but the speed of the electrons isn't the speed of the electrical signal. The speed of the electrical signal is the speed of the elecromagnetic field fluctuations, which are close to the speed of light.
@Muse Axe Backing Tracks amd supposedly has 4nm cpu's coming out end of the year with 3nm on track for 2022 and 2nm in development
@@GregHib Intel also has a chart where they call 2nm “the Angstrom era of CPUs”, as the improvements would be too small to be measured in nm
Just once I want to hear Linus call them "Jiggly-hertz"
That MEGA HURTS my feelings.
@@TurboGoth oh god no
Ayeeee
Jigglypuff?
Sounds like a pretty weird fetish.
The world needed this. I’ve had arguments with people telling me an overclocked pentium would have better single core performance than a Zen 3 CPU running at a lower clock speed. A lot of people don’t know about stuff like IPC.
You've had arguments with idiots.
I could see a very heavily OC'd Pentium G3258 beat a Zen 2 chip in single treaded performance no problem but not Zen 3 chip, that is asinine.
The funny thing is that these days I'd bet a Pentium would be bottlenecked by IO speeds before its IPC becomes an issue.
Well, typical clueless intel hardcore fans. What can we expect 😂
From benchmarks, my Ryzen 5 3550H (4c/8t, 3.7GHz on boost) actually performs worse than a i5-7300HQ (4c/4t, 3.5GHz on boost) on single core tasks.
This 4-5 years old i5 can beat my 2 years old Ryzen 5 on single core tasks (on some games, my Ryzen 5 can only reach 60% of the i5 performance).
But the Ryzen 5 can easily beat the i5 when it comes to multithreading.
I'll just leave a shout out here to the guys at Gamers Nexus and their unending dedication to push the envelope of proper testing methodology and journalistic conduct.
I really don't like Steve, but my god do I respect him and his team for what they do and how they do it.
@@JKSSubstandard If only they would present their stuff in a more engaging way instead of reading numbers from graphs minute after minute in this monotone voice. That being said, in the reviewer list there are people like Steve and Roman (der8auer) and then iJustine who unboxes sponsored Apple hardware and goes "woooow it is so beautiful" - lmao. And for some reason the list cuts off at P, I guess bad luck for everyone past that :D
@@JKSSubstandard I think I can see the reasons why. The great thing here is that while one might not find him sympathetic, his numbers are always spot on. And in the end, that is the most important thing, the sweet sweet data.
Who doesn't like Steve? Gamers Nexus videos aren't always the most entertaining, but when you are actually looking to make a purchase their channel is the best for comparing similar products - especially cases
@@ethanml33 yeah, but that content would probably be better delivered in an article and not a video, like e.g., Igors Lab does.
I've always used the toll booth analogy.
I'm not sure how accurate I am, but I've always said clock speed is like how fast each car can get through a booth, your cores are how many booths you have, and your threads are how many lanes you have.
Depending on the process that goes through, you could potentially only be able to utilize a couple lanes regardless of how many you have (like if only 2 lanes are open that day) so speed matters a lot more than if traffic was able to split up and utilize all the booths.
Can't you have cores that process more per clock than other cores?
@@jacobnunya808 that's threads, I believe. "Virtualized" cores.
I first thought the “retro” part of the retro tshirt was that no one can get a GPU, instead of the retro artwork style
you might've been overthinking about it too much
Me too brother.
Remember when you could go to the store and buy a GPU :(
Same. lol
They should have just included it as out of stock on their website :P
The miner analogy is great for overall subject matter. I have found that higher gigahertz sometimes can beat out IPC in situations where latency is a bigger factor. Like competitive shooters. If you think of where the miner has to take the ores to. Say the trucks outside. Having a faster worker who can take faster trips down and back up might be more beneficial than having more per pile. Because a truck can only carry so much in one load. This is also why that cache analogy matters. Bottlenecks happen on many levels in a computer.
This is the difficulty of computer performance. Sometimes edge cases will forever be edge cases. But sometimes overall lapping in areas of an industry where a certain methodology of improvement is preferred. Might be better, because some areas develop slower than others.
in fact it often does in some codes. Real world is more complicated, including memory bandwidth. So I belive that the future will be about optimizing designs for specific tasks and having heterogenous cores. Chiplets can help in that sense. An interesting example is IBM Power10 it can run 8 treads per core. It's simple a throughput beast. Apple A15 is another case: let the integer resources the same, but invest a bunch of transistors on AI an increase graphics by 50%.
No, Gigahertz will never beat IPC. If you don't believe me, try a 5 GHz AMD Bulldozer CPU and see how that turns out. :-)
Higher frequency can sometimes be an advantage if and only if the IPC is very close between the two CPUs, but the truth of the matter is that this is hard to tell. There are too many variables in a PC that can tip the scale one way or the other.
@@ruxandy in some codes it will. For example a fully pipelined loop operation with a small dataset will run faster in simpler CISC machine than in a superscalar one, since there is no horizontal parallelism to exploit. Clearly one can always go to extremes but within reason it will. I selected CISC because it has memory memory instructions. Without that we need to remember that the horizontal parallelism of memory access and operation was one of the most important reasons for superscalar designs.
@@ruxandy this is the conundrum I explain in my second paragraph. Many factors can interrupt true performance gains from practical to principle. Changing the architecture, changing the cache, changing the frequency. The term never is a strong statement to make. Especially when there are millions of programs that all perform differently. Sure, maybe higher frequency will never beat IPC in theory, but in practice there’s way to many variables to truly nail down 1 king of performance increases. High competitive shooters were the ones I tested for and found my results.
This is very simply not true. A cpu operates at multiple million clock cycles a second. Even a fluctuation of a million or two cycles per second won’t, on it’s own, ever result in perceivable latency. What matters is how many frames the cpu can process in per second. If a processor running at 4ghz can do 200FPS, and it’s competitor at 5ghz can do only 100FPS, then the one running at 4ghz has a 5ms lower latency than the one running at 5ghz. What matters is the end result, as I think this video quite accurately explains.
Overclockers be like:
"And I took that personally"
It's weird they didn't address tht in the vid
Does anyone know wtf is going on with these bots? either its one of these or those who spam that islam video.
@@Masa. my guess is posting replies to comments is an efficient way to mine traffic, since replies aren't modded as heavily as actual comments
Buildzoid: More caps, more better.
@@fringeanomaly9284 2:03 They started off the vid with underclocking two different CPUs side by side to show the effect of clock speed. Overclocking is the same thing in reverse but generates more heat.
To be honest, I have been watching Linus for like 5-6 years now from the time I built my first PC to today being a major in computer science, and the amount of knowledge I have gathered over the years from this channel is nowhere near what someone can learn in a decade or two. I am very grateful we have people like Linus and his crew nowadays teaching this information for FREE on the internet for people to have a broader mind and understand some things more than what is just advertised for the general public to know. With all due respect to Linus, the Canadian guy over here I understand just why he is so successful and how much work, information, and investments he put into this channel to make it comparable to real-world education.
Well said, but remember its not free. You've been brainwashed with ads in exchange for knowledge.
If Linus or others like him weren't in the space, Companies would have milked people their money for sub par performance rigs to sell
Unrelated but
My hypothesis is true that what you watch is how u shape your mind. Example if you watch videos like serial killers you'll likely to be one. But if you watch videos like this computers you'll likely end up with it. As for your comment just watching linus made/inspired you become a comsci major. I'll be watching relevant stuff now 😁
Thank you linus for telling me Clock speed doesn't matter, now i can finally be proud of my 1.6GHz Intel Atom N270 :D
I don’t think he said clock speed “doesn’t matter.”
My 0.79GHz Intel Pentium makes me cry
@@TheMrMeeks Video is literally titled: "Why CPU GHz Doesn’t Matter!"
@@Nauzhror1216 and yet he clarifies in the video that it still matters 🤔
@@TheMrMeeks Irrelevant, he still said it didn't matter. Saying two opposing things doesn't mean you didn't say one of them.
This was a techquickle but instead repurposed as a full video that goes in depth. Yes! more!
Such a big man that he's advertising other tech reviewers
Feel bad for my man Dave2D
And as experts! :)
Wait Ijustine is a reviewer?
Isn't she an Apple fangirl!!
@@makisekurisu4674 An Apple Fangirl + Reviewer. Nothing says she can't be both lol. At least she's useful for honest opinions that other Apple fans will relate to
@@SofiNabeel xd
I remember back in the 1990's, MHz (and later GHz) was the big bragging point (HA! my i486-50MHz is way better than your i486-33MHz). Then somewhere around the early 2000's it just seemed to stop mattering, ads stopped promoting it and people stopped talking about it. I always wondered the reason it kind of became a non issue.
My old 386 had 8Mhz but it came with a Turbo button to get 25Mhz and what can i say, games did fly with higher speed.
And how a 486 DX3/100Mhz was faster than a 486 DX4/100Mhz !
@@FLYSKY1 what generation you are
@@stateofdecay2210 1984
@@FLYSKY1 so you are walked on computer and lived on keyboards from the first moment of your life :))) I thought you are at least 60 years old one of the pops :)))
I know it would've taken a while to come up with a miner analogy, which came really close to the actual CPU. Little things matter a lot: L1, L2, L3 caches, inclusive and exclusive caching, branch prediction, interconnect bandwidth, IPC, clocks per cycle, and the architecture itself. These things are so complicated that it would take an average person 1 to 2 years of studying just to know how they actually work.
bet tony stark can do it over night XD
And that's _only for the CPU itself._ All of the other components can also affect CPU performance in varying ways, and that includes the enclosure.
@@sonicboy678 I'm glad you mentioned this. I buy equipment for my company and I get so tired of dealing with this issue. I'm really good at my job and have 35 years experience in electronics. I do calculations and specify exactly what I want for each order. I check inventories and make substitutions if needed, so I'm aware of market conditions. Oftentimes the vendor substitutes equipment saying it's the same, due "market conditions" which always means they got a deal on bad equipment that smart IT guys have rejected. In one instance, they substituted Intel 4770 CPUs for Intel 4790s, stating that the differences were too small to worry about. However, when we deployed them we got a lot of complaints. We tested 5 machines in our environment using actual user workflows, some disk intensive, some cpu intensive, some multithreaded, some not, and found that the 4770s took 3 times longer to complete disk intensive workflows, and they always took at least double the time to compete any workflow. The tests ran for 7 days, giving consistent results. Even with that measurable evidence the vendor claimed they were basically the same system. The motherboards were altered slightly by Lenovo so they could fit them into cheaper cases, and substituted generic RAM for Crucial RAM, so I don't know the specs. The coolers were the same, The chipsets, video, nic were the same. I don't know if the performance difference was solely due to cpu or if multiple issues such as the board design, RAM, or some firmware changes may have also affected it.
I only have 1-2 hours. Explain it like I'm 5.
@@JeffLeonard0 the only differences between those processors I know of are the paste under their heat spreader and the clock speeds.
"And that brings us to IPC."
"Oh, I know that, Inter-Process Communication."
"Instructions Per Clock."
"Screw you tech acronyms!"
Ha, I thought the exact same thing. I was wondering how it was going to be relevant
Fat Cat!
software vs hardware, those initialism exist in different namespaces. In human programming language titling the video something hardware related is functionally equivalent to "using namespace hardware".
@@BlastinRope This comment is so geeky and odd, it almost feels as if an A.I trained with programming books wrote it.
Same 😂
My core i3-4030U looking down at an i9-9900k : finally a worthy opponent.
Imagine buying Intel lmao couldn't be me
@@horacegentleman3296 congrats..?
@@horacegentleman3296 yeah nah, nor me. i couldnt possibly have bought intel
@Vinícius Felipe Posselt is amirite new mineral?
@@horacegentleman3296 I just bought Hp Omen with ryzen 5 4600h, 1650Ti and my father was telling me to buy the one with an Intel i7 9th gen🤣
Goodhart's Law is expressed simply as: “When a measure becomes a target, it ceases to be a good measure.”
The non-simple version: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."
@@rehhouari x2
Is this why more relaxed work environments are able to get better results?
@@rehhouari Ditto
@@oscarh.405 yeah pretty much
Talking about CPU's while having a GPU shirt on. Love it xD
I don't love it xD xD xD 😑
They are all processing units, what does it matter
@@sharul1709 wow can't believe you ain't gay and love Linus
what?
I don't get what is notable about that.
Haha so funny 👍👍
I’m so glad that Linus has made this video I’ve been telling all my friends for years that size doesn’t matter
"It Ain't Size That Matters, It's How You Use It"
- Serious Sam
In electronics, unlike some other areas of life, it's actually the smaller, the better. 😉
@@chrisdpratt not really
My wife says my 3.5ghz works just fine. She's a liar.
@@ThatShitGood wise man
This is a great conversation to have when comparing CPU's. This is exactly why AMD branded their Athlon XP chips the way they did in the early 2000's.
Russel Kasem
the XP and x64 Athlons, bad chipsets killed them, too many bugs
unable to keep up with the mighty Pentium 4HT giants, HOT!
Core 2 just wiped AMD away... gone forever now....
TSMC now....
Yes, while the very first Athlons were numbered after their Mhz, the Athlon XP used a numbering scheme that reflects what similar performance for non-XP Athlons. For example, the Athlon XP 1500+ would be 1333 Mhz but it's suggested to run as fast as a regular Athlon if it was OC'd to 1500 Mhz.
2003. ;)
@@lucasrem Those Nvidia NForce chipsets were pretty solid if I recall. But really my whole comment was regarding the campaign Intel had at the time when they used the clock frequency to differentiate themselves from AMD. On a box a Pentium 3 at 1.8Ghz was "better" than an AMD Athlon XP clocked at 1.4GHz (1800+) because in the eyes of consumers, clock frequency was everything.
For us car guys and girls,
GHz = RPM
IPC = torque
Core count = amount of cylinders
Logical cores = SOHC Vs DOHC (hyper threading enabled vs disabled
CPU Cache = turbocharger
Amount of CPU Cache = turbocharger boost pressure
Having a GPU from the last 18 months, owning a classic benz autocarriage. It will be worth more the less you have used it, and will only have resale value as a collectible after another 12 months lol.
It will be like owning a stock Fiesta that you paid three times the stock Fiesta price for :D
whoes a car guy?
So I have 14 cylinders running at 2000 rpm then! Seems legit.
RGB = RGB
so i have 4 cylinder running at 3400 rpm, looks normal
Hey I think this was a great video! I think it could’ve been improved by directly calling out 2 computer architecture laws that were touched on by the mining analogy:
1. The Iron Law: performance = (cycles/instruction)*(time/cycle)*(instructions/program)
2. Amdahl’s Law: total_speedup% = 1/((1-portion_enhanced)+(portion_enhanced/speedup_enhanced))
Thank you so much for this
What I know is a CPU architecture is going to have an effect on this. What I also know is there are many programs that deal with data processing that can break data up into chunks, and the faster you can clock the cores, usually the faster you can process the data, up to a point. If this functionality can be maintained in L1 - L3 cache so you don't take the IPC hits from going to RAM, then faster core speeds make a difference. Rendering video is a good example. In which case, the instructions/program has no bearing because the thing that takes up a long time is the actual render, which is running a small number of functions when it's doing that merge of video with whatever it is that's being added to the video, such as CGI.
What I also know is the title of this video says I can clock a CPU at 1GHz or 5GHz and it doesn't matter. Except it does, so I didn't bother. I'm trying to stick to a low "sensationalist" diet right now, and simple logic says that every CPU architecture is going to hit a limit, to where some part of that architecture won't give better performance once the cores are clocked to a certain point, but for some architectures in the past that point couldn't be hit in gaming, because power consumption would become too great before that limit was hit.
@John Doh that's why so much work goes into lowering power consumption. they had a Pentium 1, or 2, and I forget what it was, overclocked to 5ghz! I think the power draw was 700 watts, just on processor. They are trying to scale alot better today like a balancing act.
The average watcher isn't going to have a good frame of reference for these laws. You'd have to spend a lot longer explaining what speedup actually is, big(O) etc. This is basically an accessible TLDR video.
This video :
"Which is faster a spider ontank tracks or a tank on spiderlegs?" The answer is it depends on the race course...
Does CPU Ghz matters?
Yes only when compared within their own family(for example comparing cpu's of intel family)
You should never compare cpu's or gpu's Ghz which belongs to different family(like comparing a cpu of AMD with a cpu of intel).
So what about comparing i5s to i3s? Or generation to generation?
@@anxiousearth680 IPC improves each generation and corecount increases as well so you can't really compare different generations
And an i3 consumes less energy, and has less cores and is designed for other areas of application compared to an i7 for example so they're also not really comparable - even though they should perform the most similar when looking at a single core bench with the same clock speed
Furthermore efficiency and the layout of the chip improve from generation to generation
@@jodamin Yeah the number of cores is more important than clock speed for sure. You can get a dual core pentium or AMD processor pushing 3ghz and they're slower than an Intel I5 quad core running 2.7ghz like the one I've had for a long time.
@@gravemind6536 Unless you have games/applications that only support a single core.
Linus - "Why CPU GHz Doesn't Matter"
Also Linus - "GHz absolutely matter" 3:14
dude you cut like 1/4 of a sentence there - you are like a cheap tv station :D
@@IntruziX CNN
@@IntruziX he must work at cnn
@@justinkaufman495 -someone who watches exclusively fox
@@Scnottaken and who is that? do you think it's possible to both be in the RUclips comments and also only watch fox as a source of news? CNN literally had employees say on hidden camera that they lie and yet here you are attacking fox which last I checked nobody from fox News has been caught on camera saying they lie and hold an incredible political bias
What Linus is suggesting is exactly what intel and AMD do now and it makes identifying how one cpu performs vs another. Back when CPU’s was called something like a 386/16 or a 486/25, everyone knew 386 or 486 was the processor’s class and 16 or 25 was the clock speed in MHz. CPU’s of this era also had the letters DX or SX added after the MHz to indicate whether the CPU had an integrated Math CoProcessor on the die or installed as a separate chip.
Later, once intel introduced the Pentium class of processor, AMD and other competitors CPU architects became less of clone of intel and began to diverge performance wise. Rather than label their CPU with a lower MHz than intel, AMD and others began to use a number that was supposed to indicate what Intel CPU’s performance they matched, so a CPU may be named C6-266, it was only 166mhz, but the manufacturer would prefer you to ignore that fact and instead focus on the 266, so you can pretend like your CPU is actually 266MHz. Eventually, after intel ditched the Pentium moniker for their processors, they also began using an alternate set of numbers to indicate relative performance instead of MHZ. That is why we don’t have an intel CPU named Core2Quad2.4GHz, we have the Q6600. Same reason we have the AMD Phenom 8320 instead of the PhenomOctaCore3.5GHz. intel introducing the Core i series processors and AMD introducing the Ryzen line of CPUs has only made it more confusing.
They should just keep making the numbers high so we know which ones are better lol
Actually 386 never had integrated math processor. SX had 16 bit data bus and 24 bit address bus while DX had 32 bit o both. In both 386 and 486 the SX versions were introduced to kill competition that made faster versions of the older chips. Intel actually had 487SX math coprocessor which actually was a full grown CPU that disabled the original CPU.
This video is very timely. I was just having this discussion with a very knowledgeable person in the comments on another channel quite recently.
Even though i learned a bit better about how subjective the term "IPC" is, i realised even more that all these leaks of "such and such product has X about more IPC" are meaningless because none of those leakers know what is being measured.
It's almost a worthless metric except in direct, open comparisons - like in reviews.
Well in the end all the numbers are just there to please all the spec sheet warriors, as per usual. In the end, real life tests are what will prove true performance. It's just like when people buy cars off of reading about their sales paper describing how powerful they are, and then not giving a second thought to anything beyond that single-paged paper.
As if the performance of a pc component could be summarized in a single piece of paper in the first place.
Yes exactly as put, a cpu manufacturer can say 200% increase in ipc, but that could just be for floating point performance,
Fortunately often times ipc increase manufacturers claim are somewhat overall,
@@ignortotal360 I think spec sheet listing can be compared to puzzles that market themselves with the amount of pieces they have. Sure you get the big number, but you don't have an understanding of how said numbers interact, their individual complexities, nor the quality of the final picture once assembled. All you have is a singular metric on which you can compare with other products. No more, no less. Like with CPUs and GHz (or spec sheet numbers in general), unless you have a desicription of every single component and the capability to understand how they work together, then listing a singular metric for one of them doesn't do anything except allowing you to compare with products that are factually otherwise identical (like when overclocking). Same with core/thread count etc.
I loved the video, but I'd also really enjoy a companion video with more deep-dive-elbow-greasy-nitty-gritty details about how the current two current-gen competing high end architectures, well, specifically _how_ they process things differently.
And then, in a few years, when architectures change significantly, another review to compare how processing is being done then, compared to the last review. Would be cool!
Deep dives is GamerNexu's territory. Steve can recite all 17 different memory timings by memory, he'd trounce the subject in a video about chip architecture differences between AMD and Intel.
@@lfla0179 Right you are, I didn't forget about them! Their content simply takes a somewhat different form, with their narrative more focused on technical details, whereas LTT focuses a fair amount on its own pah-zazz. And, hey, if you take a look around some of LTT's/LMG's videos, well... Some of them really are deep-divers as well, although how this is conveyed depends on that video's topic, not to mention that video's presenter. After all, Linus has a pretty competent bunch of people that he happens to be lucky enough to call his employees!
Personally, I would enjoy it if either of them made such a video that I described above... Or BOTH! :D
@@inrevenant JayZtwocents is more of a IFIXIT kinda guy. Hands on.
GamerNexus dissects the corpses. Butcher the vendors sometimes. (Gigabyte psu corpse rotting on the table)
LTT is more: "if you want this functionality, buy this, if you want these other 3 things, get that another" kinda guy.
And all 3 intersect.
IPC is the main reason why AMD caught up and overtook Intel, clock speed ain't everything if the number of instructions per clock is very low.
using amd lmfao.
@@r3mxd woop woop woop lmfao
Yes, IPC is a very big factor
Take amd of 2012 for example,
Even though the fx series had more cores and threads, it didnt outperform an i5 of that age because of poor IPC performance
Same is when amd came back in 2017, at first ryzen was behind, but zen 2 and zen 3 made ryzen topple intel with its better ipc
I wonder how intel compete with its 14nm superfish finz +++++
I would say the main reason is that AMD was able to go to more power efficient and dense process nodes than Intel due to Intel's fab problems.
@@mcg6762 You can have a CPU that uses 1 watt of power an as dense as you want 1nm+++++++, if it has bad IPC then it will be trash, End of.
I know ltt is the largest tech channel out there but them listing a bunch of other creators out there to spread the love really shows why this channel deserves every sub.
This reminds me of Steve explaining the “megahertz myth” about 20 years ago. What’s old has become new again!
Conclusion: Steve's hair is timeless
How old r u?
Steve started the Star Citizen interviews 20years ago?
@@johnconnor4486 who is Steve?
@@daddyelon4577 if you asking this question, better give up computing now
It’s incredible how this guy had changed over the years, watching since NCIX.
Same, and out of all of it I'm just glad he doesn't look like someone Chester the Molester would be chasing anymore.
@@SimonBauer7 Ironically, He was never the VIrgin...He literally was married from the start.
It's actually Luke who was the real virgin at start despite having a beard...lol
Yup, he helped me overclock my i5-2500K.
Its been 84 years...
TLDR: IPC (instructions per clock) matters, and so does GHz. But not one without the other.
Overall CPU performance = clock speed x IPC x number of cores.
@@hubertnnn Not exactly, no. If all you do is single threaded workloads like browsing the web and gaming, it doesn't matter if you are on a 6 core 5600x or a 16 core 5950x. The difference would be negligible, despite one having more than double the cores & threads. Number of cores has no relevance to real world performance in day to day applications, unless you are a power user into 3D modelling, compiling, rendering, video editing, etc. which is less than 1% of all computer users around the world. For most people, core count does not matter these days, especially as the lower-end CPUs start having 6 cores/12 threads and such.
@@jonny6702 It depends on the workload.
But that is the property of that workload, not the CPU.
The era or single threaded applications is already gone, so you can assume that other than some low budget indie games and old games everything will be multithreaded.
Plus web browsing is multithreaded for last few years too.
@@hubertnnn Yeah, but web browsing still doesn't need more than a couple cores even for things like video playback.
It doesn't matter if you have a 5600x or a 5950x for web browsing, and it wont for as long as that hardware is relevant.
Also, it's very hard to make games multi-threaded which is why most use 6 or less threads. It's very niche for a game to actually be able to efficiently utilize a lot of threads. It's not feasible for most games due to the nature of how video games operate. There's a limit on how much you can reasonably multi-thread before it makes things less efficient. In other words, the benefit goes down as you increase thread count because you make it increasingly hard to synchronize all the work loads. Only stuff that doesn't need to be syncrhonized can easily be multi-threaded, and that isn't a lot of the games workload overall. That's why most games will have 1-4 threads that are really dominant, and maybe a few other ones that do light work. Gaming will be single-threaded dominant for as long as current hardware is relevant.
My point was that the things that 99% of people use their PC for will run just as good on a modern 6 core as a modern 16 core. That includes gaming. Sure, there are a handful of niche games out there that have good enough multi threaded support to use more than 6 or so threads, but even then, there is probably less than 5 games on the market right now that will use more than 12 threads - all of them being simulation/RTS style games that have a gameplay that allows for multi-threading. A 6 core processor is fine for that.
99% of people don't need to look at core count on modern processors as it doesn't make a difference for what 99% of people are doing. Literally only power users use multi-threaded heavy workloads. That's it. Single threaded performance is all that matters for 99% of peoples real world use cases for Windows.
I like your miner analogy. When I used to sell tech to people I would describe it like this "imagine a three year old walking next to their mom. Each step a 'clock cycle'. The three year old takes a bunch of little fast steps to keep up with mom taking one or two large steps. If you're just counting steps then the three year old is faster, even though the mom is traveling a further distance with each step."
Not to mention SW optimizations and specialized accelerators on CPU. For example AVX instructions don't run at the advertised core frequency but can process specialized workloads much faster.
Video decoders and matrix accelerators are two that are entering the market at full speed. And yes SW optimization can lead to orders of magnitude performance gains in extreme cases.
I'm happy to see Dawid (Dawid Does Tech Stuff) in the list of reviewers. He's a really great guy.
I always welcome Dawid getting more love. He is legit one of the wittiest tech RUclipsrs out there.
I thought he was a foreign Indian guy until I clicked one day and felt silly for missing out.
@@philtkaswahl2124 absolutely! And not only that, I also enjoy his sense of humor.
@@StayMadNobodycares I discovered him coincidentally and since then, I watched almost every single video he uploaded.
"Why CPU GHz doesn't matter!"
3 minutes later: "GHz absolutely matters."
3:14 lol
Yeah, this was a really long way to say, "architecture and boosting made a difference."
Why CPU Ghz doesn't matter (when comparing different CPU skus)
Ghz absolutely matters (for "a" CPU)
its a clickbait title...
@@Gromran Linus has admitted that his titles are click bait, just to increase people watching his videos.
So basically, CPU GHz is kinda like the engine size.
Nice to have, sounds good but not really only determines the performance.
Actually engine size matters, size=torque=acceleration , sound is a bonus.
@@alexalexftw That's the reason a Tractor accelerates faster than a F1 Car, right?
@@DaroriDerEinzige now you miss the other part which is the transmission , in a traktor the transmission has low rotation but very strong so it can carry heavy stuff , go learn before arguing.
@@alexalexftw So, therefore engine size doesn't really only determines the performance?
@@DaroriDerEinzige The worst F1 racecar is still faster than the best and most efficient tractor.
Shoutout to Machines and More, he definitely deserves a spot on the list
Overclockers are punching the air right now
I uploaded my Face Reveal........
I love it when the notification shows up 'someone liked your comment' and "you have a new subscriber"'' '' '' '
Ah yes, training.
@@LightningSquad I love it when you shut up
@@nevergonnagiveyouupnevergo4857 it's like you are reading my mind!
Shouts to Dawid Does Tech Stuff. Perfect example of the up and comer getting a mention - I love the wide range of tech youtubers!
There are ways to calculate stuff - I learned it in uni and forgot again 🤣🤣🤣
But it also depends on what you are running:
How much of the work can you do at the same time? = parallelism
Imagine having workers and every worker has a speed (cpu frequency):
You have some tasks they can all do at the same time and some tasks where they need to wait until another task has finished.
So for example:
You have alot of letters that needs stamps
Every one of your workers (cpu core) can grab a letter and some stamps and get going.
So now imagine you need to write the letter first. That takes a while. And only one of your workers can write the letter.
So you can stamp Number of workers letters at once and are really fast in doing so, but you will still be super slow because you need to wait until the letter is actually written beforehand.
So the speed in this case depends more on the speed of a single worker (cpu clock speed)
In the case before (letter stamping) it depends more on how many workers you have than how fast they actually are.
In real life CPUs will always have to do a mix of both. They will have independent processes that they can just give to a cpu core and forget about it and then they have stuff that depends on each other.
For video games for example some video games are also not optimized to run in parallel
Meaning: there are things that are independent of each other and could be paralelized but aren't.
Meaning: this game will be heavy on clock speed
Some games are highly build for parallelism, so they run smoother on CPUs that have alot of cores.
So in short: it's not even possible to give a correct answer. Because it's not known hoch much parallelism vs single clock performance you actually need 🤷🏼♀️
You can give a generel guesstimate (more cores better, more frequency better) but you can't know for sure.
And this is not even considering small differences between the CPUs where the manufacturers optimize certain things. Like Linus mentioned like branch prediction ect.
But that's not even all: some CPUs are completely differently build.
Think about your phones: they use arm processors because they use alot less power! They internally work extemely different than x86 Cpus!
Long story short: it's complex
from what i remember, throughput is probably the most reliable way of measuring individual component/algorithm effectiveness
nope its simple. ghz more matter when we talk about intel and amd cpus as cisc. but for more than decade all cpus are risc that emulate cisc with microcode, so when you make faster microcode you get more performance. intel pentium was 1st that could run 2 simple instructions in one clock in parallel aka UV pipelines, pentium 3 had 3 pipelines or even 4. btw risc in 1997 already could run 7 instructions if they dont collide with each other. and now we come to exploits that allow to fool microcode, and when they started to patch those holes suddenly cpu lost 20-30% of power. modern risc cpus probably have more pipelines next today cpus have multiple cores, so if you split work to every cpu core, you can outperform faster cpu that has less cores. btw we have now 64bit cpus, but powerpc last pure risc used in xbox 360 and ps3 was 128bit. today risc i think already is with 256bit registers
I didn’t know Pathfinder: Kingmaker character sheets had the mining skill.
That threw me off so hard like- WAIT I KNOW THIS SCREEN!!!
@@nikkiwilliams9808 Ditto... did a double take when I saw that screen. Guess someone's a fan at the office.. lol.
Might be from an Editor playing Wrath of the Righteous since it came out this month lol
0:12 also referred to as or . Dr Potato will chew you over this.
I learned about this back in the day, when my old A8-7650K OCed to 4.3Ghz performs worse than my friends Core i5 4590 even it only got 3.7Ghz. But the overclocking of the A8-7650K from 3.8Ghz to 4.3Ghz is really beneficial, I got 15fps improvement in games back then. So its clear that Ghz is not the real measure here, but pushing it to the limits will give you more performance.
One of my friends once said that his Core2Quad CPU from 2006 should run like the Ryzen 5 1600 just cause he OC'ed his CPU to match the new chip Ryzen.. Probably one of the most hardest facepalms i've had. Just cause your CPU runs at 3.2Ghz or etc like the newer chip doesn't mean you'll get same fps as the new chip..
Lol, you have great friend 👍
Seem it was me. Use core2quad Q9650 @3.6
If the AMD FX series has done anything, it has to be proving that GHz aren't comparable. We had stock 5GHz CPUs in 2014. Were they fast? No.
@@andreewert6576 I remember a few years back looking back and thinking AMD were better than Intel because more GHz and then realising how wrong I was later down the line.
Thanks for covering this. For decades this consumer industry has been so ignorant and arguing with each other, based on clock speeds to support their argument why their favorite device/hardware is better. Cpu and phone are by far the worst and “discussions” are insufferable.
I'm now 20 years old...
I can't imagine how fast a CPU or a GPU will become in maybe 20-30 years from now!
I think it must be crazy, but hopefully I will know it one day
For insight, just go back 20 years. Most people were just being introduced to dual core CPUs. Now high end desktops can have 64 cores.
Expanses though.....unpredictable
Strangely, it's not guaranteed to improve that much compared to the previous 20 years. We've been in a plateau of slow increases for years now, because for most consumer tasks the vast majority of available hardware is now 'good enough'. The focus then shifts instead to efficiency. So in 20 years time they may not be terrifically faster, but be much smaller, run on *way* less power and produce very little heat. This is especially likely because there are very few mass market applications for more power but tonnes of mass market applications for reduced size and increased efficiency.
@@boiledelephant In that case would it at least get cheaper? Or at least more affordable then it is now?
@@Void_Dweller7 Absolutely, yes. We have already seen this in the last few years; a modern budget smartphone, tablet or laptop is more powerful than the highest-end versions of each of those devices when they were first developed, and as scale of production increases, it gets cheaper and cheaper to make 'good enough' devices. You can now get a smart watch that does biometrics, fitness tracking, GPS, calls, etc. etc. for less than £100; that's a James Bond level of wizardry, for the cost of a week's food.
It presents its own problems, however, and we're already seeing them. As tech gets cheaper, the likelihood of repair goes down, and the likelihood of replacement goes up; consequently, the amount of devices ending up as e-waste increases dramatically. This is already happening and will only get worse. Our generation's legacy may be a layer in the fossil record made entirely of smartphones and small gadgets.
I really like the mining analogy. That was amazing.
I remember when we (at work) were doing a hardware replacement of our IBM 3090 mainframe with 3 CPUs which was being replaced by a IBM 9672 with a whopping 10 CPUs but barely half the speed. Our lead tech guy thought it was crazy and would never work. To say the least, there were some performance issues as we figured out how to tune the system to run on the new hardware. After we adjusted the number of read /write threads on the database (on the old hardware I think the number was 1), it performed wonderfully. So well in fact that our storage subsystem was now the bottle neck in getting maximum through put. A few years later, I remember trying to test out running UNIX on the mainframe and the Unix engineer I was working with was convinced that shared memory wouldn’t be enough-8 years later they were running virtual Unix machines just not on the mainframe. I find it fascinating that so many of the innovations that I saw 30 years ago with mainframes are now happening with home computers.
Thank you soo much guys, after the oversimplified video on "specs you should ignore", this was a very needed clarification, and the explanations are apropiatelly deep for anyone to understand why.
So informative! I had some questions about this subject recently and this answered just about all them. I totally get it now and know how to be skeptical of this spec going forward. Which is all you really need.
Of all the things of learned from LTT and similar channels, the ability to finally understand spec lists is one of my favorite.
Thank you for making this. While I knew that CPU speed alone didn't matter, I was always curious what actually increased IPC.
I think they recently solved what was really going on. It turns out that some of the previous recommended paste wasn’t that great as smaller form factor of cpus would turbo inconsistently due to the molecules of the paste not making enough contact in the very small channels that are on the surface on current CPUS. I believe Linus has a newer vid covering it.
3 videos in a row
All of them addressing the exact stuff i need
they know
This feels like a long format tech quicky which I would rather see on that channel. I also understand how this concept is also an important one to people newer to the space.
4:12 "Don't. Even. Bring It up."
That's the face of the man who saw it brought up way too many times.
AISURU.TOKYO/machiko?[Making-love]💞
(◍•ᴗ•◍)✧.*18 years and over✿◉●•◦•°•
*YOUTIBE: THIS IS FINE*
*SOMEONE: SAYS "HECK"*
*RUclips: BE GONE*
#однако #я #люблю #таких #рыбаков #Интересно #забавно #девушка #смешная #垃圾
It doesn’t make any sense. Clock speeds are one of the only advertised metrics we have to use to compare. Especially for GPU’s. This was a useless video and I that comment was even worse.
@@DJ3thenew23 lol so you think 1050ti 3.5ghz will outpeform 2 Ghz underclocked 3090 ?nice job buddy
@@supermegapowerg1188 Lmao wtf? 1050 ti 3.5GHz? Dude if you can do that you’d be unlocking god power so yeah. Also nice strawman
Thank you for the honesty, including the needed ADs, as always.
Happy to see Dawid Does Tech Stuff getting some recognition
3:27 Believing in Netburst was such an oof moment for everyone in the industry when it was new. Hindsight is always 20:20. It was a heck of a fun time watching the P6 architecture get a second wind with the original Pentium M platform, which morphed into the Core architecture, and onwards.
Yeah but those Netbursts temps and clocks were of the charts. IPC was very crappy though, the first batch of pentium 4's were slower than Pentium 3's even at higher clock speeds. And those 130W cedar mills/Prescott power hungry chips were freaking awful. Fun times though...
@@DimitriMoreira Even AMD Duron kicked P4.
I like how people still think that Pentium 4 was hot like star and needed nuclear power plant, but it was still very cold and low consumption compared to today CPUs. 🙂
@@DimitriMoreira Prescott was not bad, but it was better to buy Celeron D, Pentium 4 was not worthy for that price. I am just testing Pentium 3 Tualatin and it's not that good as people say, later P4 CPUs are much much more powerfull, but it's probably done even by much faster rams, comparing SDRAM 133 MHz to 400 MHz DDR in dual channel is really difference. Tualatin had advantage in efficiency and low consumption, watt/performance ratio was much better, but wen you compare raw performance, it's not comparable with later P4s. But even P4 was still low consumption CPU compared to today CPUs where you need 2kg cooler on that.
I started to feel proud of my 4 cores just to realize that Im like 6 years outdated
i said the same thing for years until i buy a Ryzen 😉
I'm still using my FX 2 9950 Black with 4 cores and 4 threads and still haven't had anything yet that gives me any trouble.
@@c0llym0re My i7-4700MQ at 3.4GHz is basically around 1st gen Ryzen or Ryzen 7 1700 in singlethread speed.
I used CPU-Z.
@@c0llym0re I think Zen 1 is roughtly around Haswell in IPC.
@@saricubra2867 4700MQ is a mobile chip at 22nm and that performance won't last long till it starts throttling
I don’t really judge by Ghz as such I typically judge how the strength of the individual cores in benchmarks, as well as reviewers using the said processor to run the programs/games I’d wanna use. It’s not easy to determine the power of a cpu being low except for when it can’t keep up with your usage. Not really sure how else to go a out it tbh. But I just get the CPU I feel I’ll need personally. I could’ve bought an i7 8700K back in 2018 but I chose an i3 8100 because it was just pointless having the extra power I wouldn’t take advantage of
OR WAS IT THE MONEY !!!
Woah this was unexpected! Thank you so much for mentioning reviewers 🙏 this really means a lot! RESPECT MAN, RESPECT!!!
Oh, Austin Evans is totally going to deliver a valid, accurate review.
@Cashanova Persona look up Austin Evans and PS5
Like the new PS5 is worse
@@gamamew well that's what Austin Evans thought, but Digital foundry X Gamers Nexus did a proper analysis and found out it was perfectly fine and an improvement in some cases. Austin really just half asses his analysis, which is a shame.
Lol
You should do a guide on how to choose PC pieces, with this kind of "what to look for in choosing the piece" vibe!
he said what the guide is, just look at reviewers, comparisons..
More specifically in what you need, if you need a gaming CPU or a server cpu or for whatever reason, just watch which cpu performs better in THAT aspect and get it.
For anyone wondering, at 6:20 is a character spreadsheet from Pathfinder: Kingmaker, a great crpg.
Title: GHz Doesn't Matter!
Linus: GHZ Absolutely Matters!
,,The lower the GHz goes, the slower the cpu is." That blew my mind!!!!
I know people who don't even base CPU performance in GHz. They do it based on whether it says i3, i5 or i7, no regards for the generation, and absolutely don't even know what AMD is.
I know a guy who wants to be an engineer but doesn’t even know what cpu or you his gaming laptop has
At all
He knows it’s an omen
And that it’s red
Well. There are other bottlenecks. Some applications do well with multiple cores and others use only 1 or few. Then there is the instruction set. SIMD and MIMD instructions give great performance boosts for tasks that require them. Cache levels and their sizes are hugely relevant. Pipelining and prefetching. How many PCI lanes are on the board. Everything plays into performance.
I thought that the first point he'd bring up would be SIMD.
This has to be one of the most information dense videos I've ever seen. Great job, Linus! Excellent explanation.
Gigahertz is only part of a formula that determines the "speed" of your CPU.
Imagine the frequency of a cpu as the speed a wheel is spinning, while the IPC is the size of the wheel. A bigger wheel spinning at the same speed will travel further.
Edit: just finished watching the video, the mine analogy is way better. There's a reason Linus is the one making videos
I've been trying to use mechanical and automotive analogies to explain this situation to customers for years and I was also blown away by the miner analogy.
They think that way because that is the way it was 20 years ago... it's leftover from that. I remember having the first 1Ghz Athlon T bird chip... good times.
And then intel screwed it all up with the pentium 4 with its longer pipeline
Whoever came up with the mining ⛏️ talking point definitely deserves a bonus the mining analogy really made everything makes sense especially to people that don't understand everything about PC
I can 100% agreed with this from a personal use case, I have 2700x(8core)/3070 combos in my Machine and I get 20% less performance in gaming than my friend who has a 5600x(6 core)/3070.
AISURU.TOKYO/machiko?[Making-love]💞
(◍•ᴗ•◍)✧.*18 years and over 🍎🍑
*YOUTIBE: THIS IS FINE*
*SOMEONE: SAYS "HECK"*
*RUclips: BE GONE*
#однако #я #люблю #таких #рыбаков #Интересно #забавно #девушка #смешная #垃圾
As an Computer Engineer, this was the best video of this channel. Very well explained, Will use some of the analogies.
Sentences like "the faster the gigahertz" and "gigahertz, also known as clock speed" should hurt your Comp Engineering brain.
GigaHertz is literally a unit, like Celsius. No one would say "the higher the Celsius" or " Celsius, also known as temperature". And I'm 20 seconds into the video.
Nitpick, but it's really lazy writing for such a highly-esteemed channel
@@eriklowney One thing is teach it to computer students. Other is to hobby and curiosity. I don't think some words would ruin the ideia, this video was great for someone with almost none idea of the matter.
@@DanielGT_93 You have the illusion that this is great, because you already know the underlining ideas. Essentially for the common everyday people you should just say that IPC is like multiplier and the clock frequency is the number that you multiply. The number you get is the performance.
Then you can go on and explain that you pretty much cannot increase the clocks, as this video stated, because it would require too much power which would cause the chip to decrease or melt. Then explain what parts can increase IPC and why. For that purpose I think this video didn't explain it well. You would have to explain it more if you want people to understand at all why branch predictor is so important, how it allows out of order execution etc. and while caches are quite self explanatory I think more info would be in order in the same sequence where you explain branch predictor.
I'm computer scientist and know quite well how CPU work and how to optimize code based on the architecture, and at least to me I didn't get this analogy. Mining overall is not the best analogy, for example chefs in massive restaurant would be better. As there's all of the processes, branch predictor, you can start to get certain ingredients close in the cache and/or start to cook things that haven't been ordered yet but which you expect will be. And you can save partially cooked things in the cache, instead of taking them into some hot/cold storage that is much farther away.
That example would also allow parallelism by having more chefs in the kitchen (could be analogy for SMT), or more kitchens in the restaurant being analogy of more cores.
@@juzujuzu4555 Really liked the chefs in a big kitchen analogy, mutch better than the video! I agree with you, that for really understand this video have some bad mistakes. But in the other hand, most people are not computer people. They are here in Linus just to try and find the best PC to play games or edit photos.
When i started with cameras and dlsr, i dindt knew a lot, so i watched some videos, read some blogs and foruns to buy my first used camera. Someone with better understanding could probably see mistakes in the videos of Tony & Chelsea channel about cameras, but i didnt, and the videos where preaty useful to buy my first camera and be an amateur. I'm going to dig deep into image sensors? maybe, maybe not.
If the person really want to understand computers, study on some university, the correct way. RUclips is always superficial
@@DanielGT_93 In the case of superficial information, you just need the frequency X IPC information. If you try to elaborate it any further, you need to tell it better.
The only info anyone really gets from this video is that "GHZ isn't the same as performance" as anything else really isn't explained in any way that improves the fundamental understanding of the CPU.
That chef example I created for this comment, never heard it anywhere, so there obviously are lots of good examples and probably better ones that anyone could understand with common sense.
But I get that this video wasn't really trying to educate people on what happened in the CPU so can't be too harsh. Though I have seen much better videos on the subject on RUclips, so anyone who actually gets interested certainly can find the information here.
Guys, just drop your bitcoins into the flirt invest platform and don't worry
When I first heard about this, my professor called it the "Megahertz Myth." Processors in the multiple GHz range had only come out a few years earlier.
Reviewers named after the letter P…. “Where we’re going, we don’t need those.”
Goodhart's law - "When a measure becomes a target, it ceases to be a good measure."
Practically any formal test we care to design would result in products being tuned for optimal performance in that test, rather than over all.
Doesn't make it a bad measure though.
Automakers for example certainly optimize their cars for specific crash test scenarios. That does not mean those crash tests cease to have meaning, because if you were in a crash that is similar (e.g., small overlap crash), then those optimizations will save your life.
The key is to ensure your test scenarios represent real-life scenarios as much as possible.
@@johnhoo6707 good point. There is also an exception to every rule.
Although taking crash test as an example; motorcycle helmets have a European and American standard and helmets struggle to pass both due to conflicting test methodology. Fort Nine has an excellent video on it.
I never knew much about how CPU´s work in deep detail, but with all those factors and parts that come together it somehow feels like a car to me. Basically a car is not complicated, but if you want a performance car you´ll see that almost every single component can be a universe on it´s own.
Exactly. A model T consists of a handful of components and can be assembled using only a wrench and a screwdriver, but... everything in it is just basic.
Nowadays, you want power, comfort, fuel economy and many other helpful features. That's why today's cars are insanely complex, it's the only way to achieve all of that
Even worse, people that think "i3, i5, i7, etc" instantly means it's a good and modern CPU, but just about every single one of those has spanned over a decade with largely varying quality.
It's like when my dad bought us a GeForce FX 5700 in the 2010s.
It didn't even run Oblivion and was worse than our computer's on-board video.
Just talking to someone about this the other day. They thought an Intel chip was better than an AMD chip, simply because the Intel had a slightly higher clock.
Best way to compare is pc user benchmark
jarrod tech did a excelent review of 11400 and 5600h it shows who wins in which arena
@@joeyvigil user benchmark is literally the worst way to compare a cpu...
@@joeyvigil the best way to compare is find a tech reviewer that tests your actual use case.
@@oxfordsparky It’s a joke
I've always had this doubt in my mind, thanks Linus coz of you its cleared now.🔥👍👍👍
I still rock my 1990 Commodore (!) 486 with 2MB of RAM and 20Mhz CPU. Runs everything I want it to, like DOS, Win 3.1 ect!
"why don't we just run them faster?"
I feel like we are going to hit the limit of just how fast electrons can travel inside the chip
Photons are quicker than electrons by about 100 times. Photons are the particles of light with no matter, electrons are the matter which reflect thelight. And light goes way faster than matter....
Photonic CPU's will be the future and we won't use electrons at all, fibre cable CPUs effectifely.
@@DailyCorvid however they used much more power
Electrons themselves usually move at speeds of a few millimeters per second.
Luckily for us, the speed of the electrons matters a lot less than the speed of the electromagnetic field propagation. That moves at light speed.
So we likely have a hard limit of ~6.4 Ghz for a 3cm x 3cm die. Assuming signals need to be able to travel between opposite corners in a single clock cycle, and a lot of other stuff
make a video teaching us how to get a sponsor on every video
Dude, you just summed up like two whole months of my 400 level Systems Architecture class in fifteen minutes. I love it.
That's quite worrying then
Another analogy, if a journey takes an hour by road or rail including loading and unloading, you can increase the number of passengers per hour by having a larger bus or train but the journey will still take an hour.
3:14 Video title: "Why CPU GHz Doesn’t Matter!" Linus: "Yes, GHz absolutely matters."