I get the feeling that people are going to be pairing their 5090s with 9800X3Ds in 2025.
A lot of people will for sure.
I think I will be. 3D cache is good for my use case.
Yep - the 9800X3D is going to sell to the 5090 crowd even if it's only 5-10 percent faster than a 7800X3D/14900K.
If Intel cared at all about professional gamers, they would release a 12- or 16-P-core chip to try to compete with the 9800X3D.
My 14900k will do just fine
9950X3D will be the king for gaming because it will have more 3D cache and will also launch later in Q1 2025
don't care about Intel vs AMD anymore, Jufus vs HUB is the real entertainment...
Jufus?
@@AshtonCoolman Comparing the top speed of two cars, one with a 6-gear and one with a 5-gear transmission, by testing both in fifth gear isn't valid and certainly not 'scientific'. If you want to compare 'easy XMP vs easy XMP' and you do 6000MT/s vs 7200MT/s, that is fine and reasonable. But testing one at its easy XMP and the other way below it is about as valid as 'cinematic 30fps', my dude.
@@AshtonCoolman HUB will do an entire product release using one brand (MSI) and base their review on the performance of that one product only to find out that one product had a drastic performance issue which skewed the entire video's results. People made purchases based on that initial video. I'd say that's about as big a F up as it gets.
10 nm to 3 nm and no performance improvement. That's kinda wild.
It was never about that, they're fooling you.
@brugj03 the node shrink is real but the performance isn't there, just more efficiency.
Performance comes mainly from architecture; 3nm or 10nm doesn't matter much if the architecture isn't good.
Literally all efficiency gains in this instance. Makes sense since desktop is like 3rd fiddle to mobile and server.
N4P AMD sucks against 10nm Raptor Lake. That's kinda wild.
Never forget that engineers and marketing teams are not enthusiast end users.
So they buffed the Cinebench cores, removed HT, and cut 80W of power compared to a 14900K, only to score higher in Cinebench and lose in every other application. "BuT ItS MOre effiCieNnt" - who gives a buck about fewer watts when these chips cost more than 500 dollars? A person throwing more than 500 dollars on just the CPU doesn't give two shits about a few dollars on the electricity bill. So disappointed; they can't even beat their own 10nm with the best from TSMC. Intel got scammed with that shit. It's wild.
Intel and most other tech companies, especially ones based in the USA, are infested with activists and laymen.
I run my 4090/13900K on solar panels and battery banks; less power consumption would mean I can game longer instead of buying more panels and batteries. We're talking thousands of dollars.
@@steezegod2768 Actually curious about your power situation lol. I'm imagining you plugging into, like, a dedicated PC-only Anker F2000 and solar panel setup. Mind sharing what and why?
@@williammoore7978 Hey, thanks Will. So last year electricity costs in our state went up four times, and twice more this year. For a reference point, I'm near 3x the US average.
I have 31 bifacial panels outside, and at peak I see around 430 watts per panel, plus a series of 200Ah batteries, about 12 of them.
So I'm on/off grid with inverters and power boxes to send back to the grid, but I try to just get off whenever I get phone notifications that I'm around 30% at night, which is frequent.
I get that most owners don't care, but my disposable income kinda teeter-totters around keeping my bills low.
I purchased before the 7800X3D was out, and sometimes consider either upgrading the power or decreasing the usage. I plug into what is a normal outlet, but the Anker joke tickled me so I gave a real response. Be safe man!
@steezegod2768 I have 20kW installed, and yes, it can make a difference, but a small one. I think if Intel had not gone with TSMC, they wouldn't even have the efficiency claims now. Also, we can clearly see that the whole 13th and 14th gen fiasco was created by them just to gimp the performance so DOA Lake could look a bit better.
Alder Lake = Sandy Bridge 2.0. The 12900K/12700K was a great buy when you look at the 285K.
12700K is still a great buy in 2024.
@@saricubra2867 The 13700K is the best CPU for gaming price/performance; 12th gen is pretty slow.
Guess I'll be keeping my 13600K + B660 + DDR4-4400 for a while. I was sort of looking forward to upgrading to something a little bit spiffier.
Jim Keller was working on a new architecture that was going to 2x and then 4x single-thread performance on CPUs, but the Intel execs canceled the project.
@@HoldinContempt That 2x and 4x thing is definitely false; it would be at least 19-20%, like Zen 3 and Alder Lake.
I'm chilling on my 9900K + 4400 B-die.
Arrow lake looks like it just won’t cook. Feels bad man..
it's still using 300W so it will cook. it's literally only a bit less than Raptor Lake.
The E cores on the 285K are way better compared to RPL, why the fark would you disable them if 285K has no HT? 🤦
Have a 13700k myself.
Tested P cores + E cores, HT disabled
Tested P Cores + HT, eCores disabled
P Cores + HT was far better in everything other than Cinebench scores. Even video editing was better with HT.
@@griffin1366 Yes, but the 285K has no HT, so the more comparable test would be the 14900KS's 8 P-cores + HT vs the 285K's P-cores + E-cores.
Basically the reason he does that is that games don't use E-cores, so disabling them gives the P-cores more cache and lets them run faster in games. BUT yes, for anything other than lightly threaded applications like games, disabling E-cores hurts performance.
Some games benefit from HT, some benefit from E-cores.
Intel has been chasing Cinebench scores this release. Removing HT to run E-cores at 5.2GHz or whatever is insane.
@@Rabieh-jr5gb you have no idea what you are talking about lmao thats crazy how you are on this channel
The AGESA update was about correcting latency benchmark readings. Read it like that somewhere. No actual change in latency, just corrected reporting for apps.
No they confirmed that there was a real latency problem. Something to do with power management if I remember right.
@@nimbulan2020 Some believe the inter-core latency wasn't actually a problem but rather an issue with how the tools measured it. AMD has likely addressed this in the benchmarks, which now show more accurate data. So instead of a fix, this seems to be more of a parameter tuning. It makes a ~1% difference in most cases.
@@hhuseyinbaykal AMD did specifically say it was an issue and that the fix increases performance in some workloads. If there was no problem there would be no performance improvement for the fix either.
@@nimbulan2020 you are right
What’s surprising is the node shrink didn’t yield more cores. I thought 10nm to 3nm would yield greater transistor density and thus more P and E cores.
it would on TSMC but intel's in house fab process is dogshit. They are producing worse silicon.
Why would you think that? They now use an advanced tile design, and the whole nm production-scale comparison hasn't been a thing in the real world for 5+ years; it's just marketing BS.
This is what fills up the new Intel node for the new Intel CPU. Where AMD in desktop CPUs uses two types of chiplets so far - a compute chiplet with the CPU cores and an I/O chiplet with the rest of the logic - Arrow Lake consists of four active tiles (the inactive filler tile and the base are not counted). None of those active tiles are from Intel's own factories. The CPU-core compute tile is produced on TSMC's latest N3B process, the GPU tile on TSMC N5P, and the SoC and I/O tiles on TSMC N6.
Originally the compute tile was supposed to be baked at Intel's 20A node, but that was officially cancelled last month.
The base tile is produced on Intel's in-house 1227.1 process, and the Foveros 3D packaging also takes place at Intel.
Most tiles are made by, and on, TSMC's most advanced processes 🙄
@@mr.needmoremhz4148 When Apple went from 5nm to 4nm to 3nm, it added more cores! So if Intel has transistor density / perf-per-watt advancements, why not use that extra transistor budget for more cores? When Intel used a more advanced node for Xeon (Intel 7 to Intel 3), it added more cores! Compare 128-core Granite Rapids to Sapphire/Emerald Rapids. So why not the same for desktop Arrow Lake?
Honestly, I don’t really see the problem - the new architecture seems to fix a lot of the issues from before.
Sure, it might not overclock as well, but you can just stick with the 14900K and skip this iteration if that’s a dealbreaker.
Still didn't upgrade my 5800X3D because for me the advantage I need is not there yet; not every generation is a winner.
#waitingforzen6
what problems does this fix? a lot more latency with ram and interconnect? no hyperthreading? what exactly does this bring except efficiency gains?
@@mikeh6423 Yeah my comment didn't age that well, I didn't expect it to be that much of a regression 🫤
If the power draw is way less with similar performance, that to me means less heat and less voltage, which in turn means more OC headroom. I have a 13600K OC'd to 5.6GHz all P-cores, 4.5GHz all E-cores, with a 4.7GHz ring/cache at stock max VID, which is 1.37V. My CPU is still great for now; I just hope this brings down 14900K/14700K prices so I can drop in an immediate upgrade. However, I am also interested in the 265K because of the NPU: if it uses the RAM, like an iGPU, you could have a 265K with 192GB of RAM to run some rather large LLMs locally with decent inference speed for a fraction of the cost of trying to do the same with dedicated GPUs. However, I think the next gen will be the one to upgrade to for anyone on the 12th-14th gen platform.
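(On the local-LLM-on-RAM idea above: decode speed when a model streams from system memory - CPU, iGPU or NPU alike - is mostly memory-bandwidth bound, so a rough back-of-envelope shows what "decent inference speed" with 192GB would look like. This is a sketch with assumed numbers for bandwidth, quantization and efficiency, not measured 265K figures.)

```python
# Rough estimate: tokens/s ~= usable memory bandwidth / bytes read per token,
# and for a dense model the bytes per token are roughly the quantized model size.
# All inputs below are assumptions for illustration, not 265K measurements.

def est_tokens_per_sec(params_billion: float, bits_per_weight: float,
                       mem_bw_gb_s: float, efficiency: float = 0.6) -> float:
    """Estimate decode speed for a dense model streamed from system RAM."""
    model_gb = params_billion * bits_per_weight / 8  # approx GB touched per token
    return mem_bw_gb_s * efficiency / model_gb

# Assumed ~90 GB/s for dual-channel DDR5 in the 6000-6400 range:
print(f"70B @ 4-bit: {est_tokens_per_sec(70, 4, 90):.1f} tok/s")  # ~1.5 tok/s
print(f"8B  @ 4-bit: {est_tokens_per_sec(8, 4, 90):.1f} tok/s")   # ~13.5 tok/s
```

In other words, 192GB lets huge models fit, but memory bandwidth, not capacity, sets the speed ceiling.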
Yea but all that potential headroom will be lost to the tile based system unfortunately.
@@PHANT0M410 Please explain
"less heat, less voltage, which in turn means, more OC headroom"
Heat can be managed with better cooling and delidding. But less voltage won't magically make the OC headroom better if the chip lacks a favourable voltage/frequency curve. We will have to wait for OC reviews and see how Arrow Lake chips scale in general.
Intel was using 6400MHz on the 285K, and I think 5600MHz on the 14900K. Also seems like the 285K won't have to fafo much with mem tuning? I'll just wait for your video release lol
I remember a video where he made fun of JayzTwoCents cuz he said RAM speed doesn't matter.
When I saw DDR5 10k I was sitting there thinking, "that number is insane, but who tf is gonna buy it thinking it matters?" So then I started laughing, knowing people would actually pay a lot of money for something that doesn't really matter anymore.
Finally! I waited for your analysis
What the 285K should have been: no HT, 2 additional E-cores to compensate for no HT, and X3D-equivalent cache for gaming. Could have kept the power down and clocks at 5.7GHz max boost. That would have been a worthy processor for the masses.
The fundamental flaw of Arrow Lake is dropping the monolithic ring-bus design. It hurt P-core IPC so much.
So is the Ultra 295 a typo, or is it happening? Supposed to be the new version of the i9-14900KS.
My first real gaming rig was an Athlon XP 2000 all the way up to Athlon 64 3800. Jumped to Core 2 Duo all the way up to what I have now, an i7 9700K. I'm very likely going with 9800X3D for my next build.
So der8auer was drooling over the full-chip voltage control - something about the voltage controller now being on the chip instead of the mobo, with a full unlock in the BIOS. Even though the slides say "we aimed for the same perf at ~50% less juice," I have a feeling these chips can fly if you know how to tune and have the cooling headroom. Also, as far as I know APO doesn't disable HT, and these chips have no HT, so the fact it was within 3% while using 80W less than a 14900K is impressive once you figure out how to tweak it. Also, realistically, unless you have a 4090 and play at 1440p/1080p, none of this really makes a difference; you are GPU bound. COD bros are really SOL because they are chasing a hardware bottleneck when really it's the game engine keeping them from maxing out their 480Hz monitors.
Why are they even using TSMC chips when it's not even faster than last gen? Literally no reason to go through that struggle, no?
Out of fear. They screwed up the last gen.
It's a completely different architectural design; they still do the packaging. But they aren't able to make the same node themselves yet...
Power consumption. Only thing
And how hard they need to drive the silicon… pushing pseudo 7nm as hard as they did with 13/14th gen, there is no margin for OC. Those CPUs are essentially pre-OC’d to match AMD parts.
Intel 20A not cutting it!
There is no upgrade path for the core 285k
12th gen at minimum
Love your content and how you look through every possible optimization, etc., and not just mainstream reviews like the rest.
I can't wait for you to get a hold of one of these, because I have a feeling it's going to have more headroom than the last few generations.
I'm hoping it'll be like the old days of OC'ing, like how my i7-2600K's stock boost clock was 3.6GHz and I had it overclocked to 4.7GHz its entire life, and it still functions to this day. Those kinds of gains would be AMAZING!
My understanding of APO is that it's basically manually scheduling threads across P and E to optimize performance, not parking the E-cores. That's why it only supports a short list of games, Intel has to do a bunch of manual debugging work on each game to set up its APO profile. Now how that compares to just disabling E-cores? No clue.
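(For anyone wondering what "steering threads onto specific core types" looks like in principle, here is a minimal sketch using process CPU affinity. It is an illustration only, not Intel's actual APO mechanism - APO lives in Intel's driver plus per-game profiles - and the logical-CPU index split below is an assumption; on hybrid Intel parts the P-core threads usually enumerate before the E-cores.)

```python
# Illustration only: restrict a process to a chosen set of logical CPUs via
# affinity (requires Windows or Linux for psutil affinity support).
# The P-core / E-core index split is an assumption about enumeration order,
# not something read from the hardware.
import psutil

logical = psutil.cpu_count(logical=True)
P_CORE_THREADS = list(range(0, min(16, logical)))        # assumed: 8 P-cores x 2 HT threads
E_CORES        = list(range(min(16, logical), logical))  # assumed: remaining E-cores

def pin_to_p_cores(pid: int) -> None:
    """Keep a process on the (assumed) P-core logical CPUs only."""
    psutil.Process(pid).cpu_affinity(P_CORE_THREADS)

def unpin(pid: int) -> None:
    """Hand scheduling back to the OS across every logical CPU."""
    psutil.Process(pid).cpu_affinity(P_CORE_THREADS + E_CORES)

if __name__ == "__main__":
    pin_to_p_cores(psutil.Process().pid)   # demo: pin this script itself
    print(psutil.Process().cpu_affinity())
```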
I bet $10 that the new Intel cpu will run better on Windows 10....
hahah
been waiting 12 days for it.
Do you think we have maxed out the eight-core CPU? If L1/L2/L3 cache isn't significantly boosted, the performance is going to be comparable from now on.
Once again.
X3D = Games, 1% Lows
Intel = Everything Else
Amd = Power
Intel = Electric Bill
If the 12-core, no-E-core chip actually comes out for LGA 1700, it will be king.
How should inter CCD latency improve gaming? It cannot.
Did you miss GN's video 10 days ago coming back full circle on the 12VHPWR connector? Admitting there has never been this ratio and level of failures with any other power connector in the past? That was hilarious.
8000 Hynix A-die won't be faster than 9000 M-die, not to mention 10000. ASRock's QVL states there will be ~9500MT/s A-die support too. DYOR
intel selling you copy and paste garbage since the 9900k
9900K is the goat
About to replace my 9900K. It just can't do 4K 120Hz or 1440p super-ultrawide 240Hz. Sad ✌️🇺🇲
@@juanme555 It is the 12700K that is the goat. The 9900K is 14nm++++
@@bm373 How is the resolution relevant here? You are saying that the 9900K cannot do 120Hz in some games, and not 240Hz in some others. That may be true... what games are you talking about?
lol yeah they have no idea how their products work. It’s going to be way slower.
I mean at this point, get rid of the damn E-cores completely in future generations. What is the purpose at this point? It's just an unoptimized mess that doesn't work properly and hasn't since it first released.
I'm guessing it's one of two things:
1. It would increase their power consumption, thus making them less attractive to datacenters
2. It would increase their heat generation, thus making them less reliable for users
They have a big architectural issue...
The purpose of E-cores is to make fake Cinebench multi-core scores to match AMD and make their product appear not to be complete dogshit.
Don't buy that 16-P-core AMD 9000 CPU that has a uniform design and actually works. Buy this piece of shit Intel CPU with similar multi-thread scores in fake benchmarks that doesn't actually work in real programs.
@@dzello Or they could just have fewer cores that are only P-cores, like it used to be? I mean, I would much rather have 16 cores or fewer that are all P-cores rather than this weird experimental crap they have been trying to do. Why do you think AMD hasn't increased the core count? Because they can't, and it's just pointless at this point. The E-core philosophy is just terrible, and honestly I think it creates more issues for the OS and motherboard manufacturers because it's a headache to deal with and optimize. I think more people need to start pushing the idea that Intel needs to just abandon the E-core shit altogether. The 10900K was the last great CPU they made that just worked.
Also, they fixed the architectural issue, right? The problem was they were on a very old node and now they're at 3nm, so what is the point of E-cores? The purpose of them before was that they couldn't keep up with AMD because they were on a much older node.
@@kai_121-y6k No, I think the entire structure of their processors is outdated to the point where they were just squeezing more power to get performance.
They upgraded the architecture a bit, but since they're so far behind, that only allowed them to squeeze less power to get the same performance.
I think they need major revamps.
@@dzello I was thinking it was due to the old node, but I could be wrong. Didn't they move to TSMC? Either way, if it's just the overall design of the chip, then they are screwed. But the E-cores, I still stand by it: they need to go away.
I want this question in all hardware content now. What's the goal of this information?
I love Subway: Departure
So Jufus, what would be your better option for 96GB of RAM for a proper workstation - still A/B-die?
Shit gets really expensive really quickly vs normal, fast-ish RAM.
Not as fast, but what if it's cheaper and better for holding all-core max temps?
And for a work/gaming station, would you go with the 9950X personally?
I can see a discounted 265K being a no-brainer. Imagine that chip for just 350 USD - that would be a steal!
Except you have to buy everything anew: mobo, memory, cooler.
And when the new generation arrives, Nova Lake, everything anew again. There's no refresh.
There's a rumour from MLID about Panther Lake coming to desktop on LGA 1851.
@@brugj03 For me to upgrade from my 13600K to, say, a 265K, I'd just need a new CPU + motherboard, as the LGA 1700 coolers are said to work on 1851 as well, and I have a Z790 DDR5 board, so I already have 64GB of DDR5-6000. But I'd rather buy a 14900K to drop in its place, as it would be cheaper than a 265K (currently $420 pre-sale at my local Micro Center) plus whatever the motherboard would cost, another $200-400. But yeah, you're right - though how many people don't buy a new motherboard, RAM, disk, case, etc. when doing an upgrade? I don't upgrade often, so by the time I do, it's typically time to upgrade everything.
sure, who's the thief though?
Great Title and Content, ty
The people that know how to tune properly will be enjoying their non-degraded LGA 1700 CPUs for a long time. No point in downgrading to a newer part that is weaker and functionally inferior.
It also doesn't help matters that all of the idiot motherboard manufacturers are still going to produce mostly 4-DIMM motherboards that suck at memory overclocking. Can't fix stupid.
I'm waiting for some reviews/numbers on CAMM2 and CUDIMM because of that. ASRock has their Taichi OCF 2-DIMM overclocking board that I'm interested in. But they also debuted an identical one at Computex that had CAMM2 instead.
I think it's still too early to make any judgement calls. Reviews for the initial launch are only 10 days away, and CAMM2 boards were slated to release by or before the end of 2024.
I plan on using the RTX 5090 in my upcoming build, which won't be out until after CES anyway, so I'm in no rush. I've got other things I can get for my build in the meantime, like the 4K TV, speakers/AVR, PSU, etc.
In the past, most motherboards (cheap ones) had only two slots due to cheaping out.
The percentage of the market that overclocks memory to unofficial levels (beyond XMP profiles) is tiny.
Even the share of consumers that have it set to an XMP profile is only slightly less tiny.
The vast majority of the people I know in my life have no clue what XMP even is.
If I had to guess, what happened was motherboard manufacturers learned through trial and error that 4-slot motherboards sell way better. So they adjusted their product SKUs to match demand, with only a few truly overclocking-focused motherboards (e.g. some unorthodox minimalist EVGA mobo designed by Kingpin, etc.) being made.
For most people in the world, the preferred solution would be to upgrade the memory controller and RAM in order to run faster RAM lol.
@@tylerdurden3722 the lowest common denominators always ruin things for us. We end up shackled by their ignorance far too often.
@@tylerdurden3722 That's one reason I'm kinda interested/excited for CAMM2. It's a single module: no worries about whether it's going to play nice with your other DIMMs or whether one of your sticks is DOA. It either works or it doesn't, and that simplifies troubleshooting errors on a new build. And with it being horizontally mounted, it gets in the way of air coolers less, and you can have a heftier heatsink over the component.
The engineering and theory behind it is interesting at the least. I'll be curious to see if this eventually becomes the new standard in the future.
But yeah, these days the vast majority certainly just want or use a plug-and-play solution. Back 10 to 15 years ago on gaming forums, manual overclocking was more commonplace. Though back then console gaming was far more dominant and PC gamers were still dismissed as nerds hehe. Times have certainly changed, with more games/titles requiring high-performance hardware to achieve optimal or even minimum settings. Modding PC titles has also become incredibly popular over the years too.
Now the only issue is Microsoft and all the BS features they're constantly trying to cram into Windows, like the whole Recall drama going on atm with Win 11. Linux is an option, but even it has its issues with compatibility. A lot of software and/or games I play may not be optimized for it, and modding games is more difficult if not impossible for some. It's just another common derailing feature of PC; it's not as simple as a console where you just plug it in and you're off to the races.
P Diddy RTX 5090 after parties
I have a 13700K; I think I'm waiting on Intel Bartlett Lake-S, 12 P-cores, socket 1700.
Same, if it gets released it will be awesome.
As far as I know that was a fake rumor.
If Intel releases a 12-P-core CPU for LGA 1700, nobody will buy Arrow Lake. No chance. Intel can only do it on a new socket.
@@IOIOI10101 Not necessarily; the majority of their customers are enterprise, which is who Arrow Lake is marketed for.
@@IOIOI10101 Intel doesn't give a shit what socket you're buying your CPU for. They just want to sell a CPU. There's a reason AMD keeps randomly adding chips to the AM4 platform.
Just here to say I love your music choice, and the best fun is when you vibe in the games and aren't stressing about that stupid 5 more avg FPS. Let's focus on stuff that makes us really happy instead of stressing about hardware. Hope you will point this out to everyone here... as people seem to still not be understanding.
keep fighting the good fight man!
It would be so much better to go quad-channel memory.
Great video ❤️ I'm a little confused on why A-die is so much better on latency? I have 2x24 7800 CL34 (my IMC sucks), everything else slammed. My tRRD_sg is 10 instead of 8 on A-die, and my tRFC is 620/410 instead of the 500/380-ish that I see on A-die. Everything else is the same or even tighter than anything I've seen on A-die. Do those two timings really make that much of a difference?
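(Rough sense of scale, as a sketch: DDR timings are counted in memory-clock cycles, and the memory clock is the data rate divided by two, so at 7800MT/s each cycle is about 0.256 ns. Assuming the 620/410 vs 500/380 pair is tRFC-style values:)

```python
# Convert a DDR timing in memory-clock cycles to nanoseconds at a given data rate.
def cycles_to_ns(cycles: float, mts: float) -> float:
    mem_clock_mhz = mts / 2           # DDR: data rate is 2x the memory clock
    return cycles / mem_clock_mhz * 1000

mts = 7800
for name, mine, adie in [("tRFC", 620, 500), ("tRFC2 (assumed)", 410, 380), ("tRRD_sg", 10, 8)]:
    print(f"{name}: {cycles_to_ns(mine, mts):.0f} ns vs {cycles_to_ns(adie, mts):.0f} ns")
# tRFC: ~159 ns vs ~128 ns. That only bites during refreshes, which are rare
# (especially with a high tREFI), and tRRD_sg 10 vs 8 is ~0.5 ns, so on their
# own these two timings shouldn't fully explain a large average-latency gap.
```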
No, AMD APUs do benefit from 9000MT/s RAM, because the onboard graphics depend on it. If you game with a discrete GPU + APU, then it's pointless.
Buy it and tune the sh** out of it!
pls update on latest microcode gang?
I think we need to wait to test because Intel performance slides seem to be based off of the Performance power profile (PL1=PL2=250W). The question is, how does this thing perform with the Extreme (unlimited) power profile? And we know Lion Cove is faster than Raptor Cove (in terms of IPC). And Skymont is way ahead of Gracemont... I expect the Arrow to be ahead of the Raptor. But we shall see.
Clock for clock it's better. But if the clock is 500MHz lower, the performance loss is higher than the IPC gain.
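(A quick worked example of that trade-off, with assumed inputs purely for illustration - roughly a 9% IPC uplift and 5.7GHz vs 6.0GHz max boost; real scaling is messier because of memory latency and fabric clocks:)

```python
# Net single-thread throughput ~= IPC ratio x clock ratio (first-order estimate).
ipc_gain = 1.09                  # assumed ~9% IPC uplift, for illustration only
clock_new, clock_old = 5.7, 6.0  # assumed max boost clocks in GHz

net = ipc_gain * (clock_new / clock_old)
print(f"{net:.3f}x vs the old chip ({(net - 1) * 100:+.1f}%)")
# ~1.035x here, and with a full 500 MHz deficit (5.5 vs 6.0 GHz) it drops to
# 1.09 * 5.5 / 6.0 ~= 1.00x, i.e. the clock loss eats the whole IPC gain.
```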
🚀lake dejavu
ram speed matters IF the engine utilizes it heavily. but that's hit and miss which is why you see some titles do better with faster ram and others not. In general ram speed isn't that important if you're meeting spec.
Desktop CPU kinda boring space atm, catch me tuning low power handhelds
My chip got significant FPS from the Windows update and a significant latency improvement from the AGESA update. I haven't gone wild tuning my rig though, just a few little adjustments. So that's strange.
No hyper-threading, no buy! Need muh extra threads. I wonder if Intel can manage to bring back HT and AVX512 with Panther Lake?
I believe the 9700K had no hyperthreading, and I took it back one week after I got it and bought a 9900K.
I noticed HT helps a lot to alleviate micro stutters and improve .1% and 1% lows in a lot of games. I sometimes get higher average FPS with HT off, but almost always smoother gameplay with HT on. It can behave differently from game to game but that's been my general experience so far.
I'm on a 7950X and I haven't really noticed much difference. I did try out-of-box settings and noticed the performance baseline did go up a good margin, but with my undervolt and OC I'm only seeing margin-of-error increases overall. Better 1% lows, but the pre-update gave me stutters and crashes in some games and applications. Even just browsing on Opera watching YouTube, I'd randomly hard-crash (not even a blue screen) with error 41 and slews of RAM issues in Event Viewer.
Thankfully the last update Windows released seems to have staved that off. But I'm not super impressed. Much like one commenter I can read while typing this, I care infinitely less about the brand of CPU and more about stability and performance. AMD always has some drama, but Intel has been fucking up, and now AMD is following Intel, and both of them now have subpar CPUs with the same issues at opposite extremes.
The only plus I can give either is that they reduced power draw for the same level of performance we've had the past 2 years. Neither has any real improvement in performance other than AI... and they could just put AI cores on AMD 7000 CPUs or Intel's 12th/13th/14th gen CPUs and they wouldn't be any more or less than the new line of processors they launched (or are launching) this year.
Just get yourselves something like a 4070 Ti or 7800 XT and pair it with a 5800X3D/7800X3D or 12700K/13700K and you won't have any issues. You'll play everything and run any application just fine for years to come, and you can make great viable builds for WAY under 2 grand. My PC is a 3-grand machine and is very much overkill; I'm speaking from buyer's remorse mountain here. I could have had better groceries and the better health that goes along with that instead.
New AMD CPUs aren't worth it unless you want the higher RAM speeds, and the new Intel chips are backtracks with less power draw and slightly better SMT/multitasking. No reason to get the new Intel chip unless you're specifically doing AI stuff.
Let's hope 2 years from now they actually have something new and improved.
For now, let's just enjoy the hardware we already have on the new games that thankfully aren't getting more demanding yet.
Is anyone else wondering what impact the 40MB of L2 cache will have on gaming? That's more L2 cache than Threadripper (24MB), but will it cook? 🤞
*Frustrating situation, as I want an AMD PC and an Intel PC... I wanted an Intel one now, but ffs, no improvements means no buying for me* 😒
Yea but an 8/8 CPU gives me 9700K throwbacks. Stuttering near downtown from 70-80% usage.
Also, a new platform makes this a worse look than Rocket Lake. This needed to be a move on the 14th generation, so they could lower the power draw and have the 15th gen be the faster chips. Time will tell how this plays out and how much market cap Intel loses.
"ram speed doesnt matter". Literally, completely destroying the fps of people I know running 8000 xmp with my day 1 m die at 6600 😅.
Intel and AMD are old school. Cool kids use VIA.
How can the performance be so stagnant for so many years? It's just pathetic. No point in buying a new CPU if it's the same as the old CPU.
Thanks to this channel and GOG, been sitting pretty on a 13700K with B-die.
Eager to see your analysis on the new chips. They seem like a huge departure with some growing pains. Still interesting to see.
14900K 10 year chip....
Yea, this is far worse than Rocket Lake if true. They are on 3nm and slower than Raptor Lake; that's pretty embarrassing tbh. I hope for their sake they can at least match 4nm AMD chips in most things, otherwise Intel may have dropped the worst generation in a long list of disappointing generations from them.
Yep same as the 10900K!!!! 10 year chip!
I need advice. I'm on AM4 now, but I just recently upgraded my graphics card to a 4080 Super and my CPU is bottlenecking the GPU, so I'm not seeing its full potential. I'm not an AMD fanboy and I want to try an Intel CPU. What CPU should I get from Intel to pair with the 4080 Super, and what kind of motherboard? I have a micro-ATX build. Hopefully someone in the comments can help me out. Thanks guys.
I can get 52 ns and 127 GB/s read in AIDA64 with 8000 36-47-47-47, tREFI 262144, 250,000% stable in Karhu on my 48GB Hynix M-die kit on my Apex Encore. It's really not that much slower than 32GB A-die when tuned to the max.
A-die can do 8600
Saw many ppl doing 8400/8533/8600
The 14900K needs to have a binned IMC tho
I bet Intel is the opposite of AMD: they claim lower performance, while I guess they will 100% be better than what they just showed on the graphs. It's way better to say they are like 5% slower and in reality be like 5-10% faster than they promised. New memory controller + new E-cores, more L2 cache, better P-cores - this CPU gen has got to be the fastest in games we've seen yet!
Except they have never done that in the past. Always been masters at cherry picking. Like, it is more efficient when the game is GPU bound (and you get less FPS), it is faster at CB (but then the power draw is the same or higher). But you only see more efficient and faster, cause that's exactly what they want you to see
285k will have no HT, don't forget
Also, A-die will most likely do better on CUDIMM.
It looks like i will keep my 13900kf max oc + 4090 for 2 more years for 1440p gaming... (160hz)
Bruh, I still use the i7-2700K lol, along with my 1080 Ti lol. Great for 1080p gaming, and 1440p if it's a light game.
0:45... you think... I know that they don't know how their product works... No irony. They really don't fucking know it.
Nope! You're wrong, man.
A 14900K can only run at 6GHz when you have a custom loop!
And I'm sure most people don't have one!
So forget all the 14900K benchmarks!
It's simply unreal.
Temps alone are important here!
And the new Intels we can run OPEN with an AIO, I think, so they will be faster!
5.8GHz + undervolting is the max that's coolable; more is >>💥
arrow FLOP
Bartlett 12-P-core chips are the thing we are waiting for in 2025.
Those will be the best gaming chips, not only because of the 4 extra P-cores,
but also because they will be on the same socket as the 14900K, so if you have
a good mobo and memory you just slot the new chip in. And because there are
no shitty E-cores, the scheduling problems in Windows 11 are finally
fixed at the hardware level and you don't have to stay on Windows 10 forever.
So you're saying my LGA 1700 Apex board will work with a Bartlett CPU. The bad thing about that is, if you buy a new 5090, the PCIe lanes will be slow compared to a Gen 5 board.
Frame Chasers, can you make videos more often, or longer videos? Because an every-2-week cycle is too long for 15 minutes.
Quite a negative and depressing prediction. I understand memory gives barely 5 more FPS at 1080p in the best cases... but I am still hoping this 285K might be faster, or at least not slower, than the 14900K if we overclock it just a little - maybe even with Intel's auto AI overclocking - then add some CUDIMM 8000 96GB kit and max out the power settings in the BIOS... idk, hope it's better. Either way, I don't trust the 14900K anymore since mine burned out because of the microcode.
I was planning on upgrading anyway but was quite disappointed at no performance increase...
And before anyone tells me to go AMD.
I decided with the Z890 platform to give Intel one more chance let’s see how it goes with the 285K and maybe 385K 🤷♂️.
How long are we going to glorify 1080p gaming? Monster GPUs are on the horizon. I would not buy a flagship CPU and pair it with an impressive GPU just to play at potato-quality 1080p low settings. These predictions are just for clicks. People game at 1440p and 4K now.
@@mikelay5360 yeah, I have been on 4K since GTX 1080 days. Was just saying that, like he said, memory doesn't really make a big difference, and certainly not if you are using a higher resolution.
@@mikelay5360 You must have missed the point of this channel... it's for people who play games competitively and want to compete. 1080p monitors are STILL king for esports FPS titles lmao.
@@thedeleted1337 No, I am dumping on the point of 1080p gaming in 2024. Very pointless. There's a need to up the ante.
Both AMD Zen5 and Intel 15th gen suck :) Looks like we are all waiting for Zen6 and intel 16th gen for some real world performance upgrade over the current Zen4 and Intel 14th Gen
Is intel dead 💀 🤔?
Arrow Lake is dog shit. Still uses 300W, it's not efficient at all, just uses less than Raptor Lake. Intel NEVER directly compares ARL power draw to Ryzen.
False.
Not buying intel
intel is king & those processors will pwn
I heard they will be releasing a hyper threading version in a year!
Not sure if true, but that'd be awesome! 👁️👄👁️
The Ultra 9 285 has 112KB of L1 per core, 3MB of L2 per P-core, and 4MB of L2 per E-core cluster. Why the fuck would it be even or slower? I think it's more likely false marketing from Intel to trick us, or better, to surprise us...
And now compare it to the 14900K... see?
I plan on upgrading to 4K gaming. None of this is for me. Good content though.
This new gen cpu is just sad.
On both intel and amd.
Blah, blah, blah. Americans seem to really love the sound of their own voices. Meh.
He's Canadian 😆
Flat earther or something of the kind.
Salam broda.
1st
10 year chip....... rubbish. They're already degrading, man. You didn't do real benches, so you're just saying something.
Nothing is degrading if you tune your system properly
@@FrameChasers Any silicon does degrade over time 😂 no matter if OC'd or not
*2600k sips beer*
@@FrameChasers There was nothing to tune, it required a microcode update.
You're just a fanboy.
You don't know anything.
@@brugj03 Make a YouTube channel then 🤷♂️
Intel is so fucked and I'm eating popcorn. This is great.
Plz don't procreate...
@@heyguyslolGAMING ok rabbi
This is not Reddit.
@@mikelay5360 You have to go back.