GPUs with Expandable VRAM!

  • Published: Oct 16, 2024

Comments • 2K

  • @Vathrex
    @Vathrex 1 year ago +2730

    This existed before. I read it was common in the early '90s. There would be slots for extra VRAM chips; the S3 Trio had it.

    • @Kaillin
      @Kaillin 1 year ago +59

      I have one of these GPUs

    • @justinpatterson5291
      @justinpatterson5291 1 year ago +32

      Yeah. I was just thinking that really old AGP cards had that at one point.

    • @Vathrex
      @Vathrex 1 year ago +26

      @@Kaillin What a chad 😎

    • @delleron1106
      @delleron1106 1 year ago

      Well th

    • @Nick-123
      @Nick-123 1 year ago +21

      They removed that because it was too slow, so it would not work on today's GPUs.

  • @cube5380
    @cube5380 1 year ago +4944

    HEY WAIT THAT'S ME LMFAO

  • @exclar
    @exclar 1 year ago +1496

    The main reason they haven't done this is that it would make the RAM slower. As you can see, the RAM chips on the GPU are placed very close to the GPU chip itself to reduce latency, which is why expandable VRAM is not recommended.

    • @exclar
      @exclar 1 year ago +135

      If expandable memory were introduced, it could potentially lead to increased latency and hinder the GPU's performance.

    • @cardboard_boi
      @cardboard_boi 1 year ago +5

      Yes

    • @cardboard_boi
      @cardboard_boi 1 year ago +49

      This is why desktop RAM can't reach the same speed as VRAM

    • @Kaillin
      @Kaillin 1 year ago +49

      @@cardboard_boi It is more because CPUs do not need high-frequency RAM, but yes, soldered RAM is faster

    • @AMPS1
      @AMPS1 1 year ago +17

      Yes, exactly. Even if they added "extra" RAM slots alongside the soldered VRAM, not only would it be a nightmare for the engineers and BIOS devs, the extra VRAM would most likely not be very useful, as it's slower and would lower GPU performance.

  • @GhostyKingdom
    @GhostyKingdom 1 year ago +82

    Engineer here. Unfortunately, this can't happen (yet) because GDDR6 VRAM is highly specialized and not commercially available, and it tends to be built differently between GPUs.

    • @ozgurpeynirci
      @ozgurpeynirci 1 year ago +3

      Just like there is a JEDEC standard for RAM, we could have one for GPU memory; especially with Intel entering the competition, there could be an agreement.

    • @Velkomme
      @Velkomme 2 days ago

      I was actually thinking the same thing. Eventually some random reddit post will tell u how to do it 😂

  • @DigBipper188
    @DigBipper188 1 year ago +59

    There are 3 core reasons why socketed / expandable VRAM and GPU dies aren't a thing.
    1: Cost
    It's more expensive to add a socket that the chips seat into than to create a BGA footprint and solder the parts directly onto the board. It also takes more manufacturing time and adds complexity. Both drive up the end cost.
    2: Latency
    The farther an electrical signal must travel, the longer it takes to get from point A to point B. In RAM this shows up as latency. The increased latency reduces the memory's performance and can therefore slow the GPU when it is performing latency-sensitive tasks (which it always is).
    3: Reliability
    Increased trace length, plus the socketed interface itself, both introduce the potential for noise on a signal. This noise can cause memory errors and would therefore increase the chances of corruption or general hardware instability. GDDR memory, especially newer standards such as GDDR5 and GDDR6, is an extremely high-speed, high-bandwidth component, which makes it extremely susceptible to noise. It's better to keep the parts right next to the GPU package, soldered directly to the board, for these reasons: soldering creates a stronger electrical connection than socketing, and since you don't add distance, it reduces the chances of an interference-induced error.
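
A rough back-of-the-envelope sketch of the latency point above. This is a minimal illustration only; the extra trace lengths, the ~0.5c propagation speed on FR-4, and the 1 ns reference clock period are assumed numbers, not figures from any specific card:

```python
# Rough estimate of the extra one-way signal delay a socketed VRAM module
# would add compared with chips soldered right next to the GPU.
# All numbers below are assumed, illustrative values.

C_MM_PER_NS = 299.792            # speed of light in vacuum, mm per nanosecond
PCB_VELOCITY_FACTOR = 0.5        # assumed: signals on FR-4 travel at roughly half of c

def extra_delay_ns(extra_trace_mm: float) -> float:
    """One-way propagation delay added by `extra_trace_mm` of extra routing."""
    return extra_trace_mm / (C_MM_PER_NS * PCB_VELOCITY_FACTOR)

# A socketed module might plausibly add tens of millimetres of extra routing.
for extra_mm in (10, 50, 100):
    delay = extra_delay_ns(extra_mm)
    # Compare against an assumed 1 GHz-class command clock (1 ns period).
    print(f"{extra_mm:>3} mm extra trace -> {delay:.3f} ns extra one-way delay "
          f"({delay * 100:.1f}% of a 1 ns clock period)")
```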

    • @bruhhh3-vz8dt
      @bruhhh3-vz8dt 20 days ago +6

      ChatGPT ahh comment

    • @jakysilly
      @jakysilly 17 days ago +1

      4: Money. It makes no sense profit-wise to add expandable slots, as existing card owners could simply upgrade their GPU with more VRAM, thereby eliminating the need to change GPUs every year.

    • @hajimehinata5854
      @hajimehinata5854 15 days ago

      @@jakysilly Only enthusiasts change GPUs every year. I know that profit is a factor, but in this case, it's not that deep.

    • @jakysilly
      @jakysilly 15 days ago

      @@hajimehinata5854 With only a small marginal increase in performance every year for Nvidia GPUs, it's quite evident that the profit motivation runs really deep. Maybe less so for AMD GPUs.

  • @wesleyhalliwell
    @wesleyhalliwell 1 year ago +500

    They have done this, in fact it used to be quite common. The reason it stopped was the same reason why it’s difficult to find the highest speeds of DDR5 in laptops with slotted memory: it was becoming too difficult to increase the speed of the memory with the much longer traces that you need for slots. When the memory is soldered directly to the board, the traces can be much shorter because they’re able to put the chips right next to the GPU. That’s why you see a circle of memory chips around the GPU if you take off the heatsink of a modern graphics card, it allows them to all be as close as possible and the traces as short as possible.

    • @Farouk.khettab
      @Farouk.khettab 1 year ago +11

      I would love to see a combo of fast + slightly slower memory, just like how 3D V-Cache works with RAM.

    • @doggo_woo
      @doggo_woo 1 year ago +8

      @@Farouk.khettab That would be sick. Fast VRAM for important or constantly used assets and slower expandable VRAM for the rest. Kind of like how we have swap for when we run out of actual RAM.

    • @stevy2
      @stevy2 1 year ago +1

      @@Farouk.khettab Basically the GTX 970. People exaggerated, but that last 512 MB was still faster than swapping to RAM.

    • @anthonypolanc0
      @anthonypolanc0 1 year ago +1

      Dude watches the tech tips

    • @narwahlssb
      @narwahlssb 1 year ago

      I was about to lay this down but you beat me to it.
      I'm fairly sure they had some older cards in the 90s that did have expansion memory.

  • @GabeDaBabe930
    @GabeDaBabe930 1 year ago +261

    Balls

  • @dan8t669
    @dan8t669 1 year ago +730

    Expandable VRAM was a thing on workstation cards a long time ago. Welcome to Tech, Zach.

    • @LabraDork-uj7ib
      @LabraDork-uj7ib 1 year ago +33

      Not at the speeds we see memory running today

    • @OriginalName707
      @OriginalName707 1 year ago +82

      "How come the consumer grade electronics guy doesn't know about enterprise level hardware"

    • @Dr.W.Krueger
      @Dr.W.Krueger 1 year ago +4

      A long time ago indeed.

    • @diamonshade7484
      @diamonshade7484 1 year ago +1

      @@OriginalName707 Probably forgot

    • @nikoheino3927
      @nikoheino3927 1 year ago +12

      @@OriginalName707 Well, but he doesn't even understand the technicalities of consumer products. It's simply impossible to have upgradeable VRAM unless it's in the form of soldering new chips yourself. Modern VRAM operates at very high speeds and requires perfect signal integrity, which requires the chips to be soldered right next to the GPU die itself.

  • @Tempestan
    @Tempestan 1 year ago +409

    As others have posted, yes, they had this feature back in the late 90s and early 2000s. There is a reason why you do not see them on cards today.

    • @AgentK-im8ke
      @AgentK-im8ke 1 year ago +25

      Money?

    • @Tempestan
      @Tempestan 1 year ago +102

      @@AgentK-im8ke Terrible performance. OK, not exactly terrible, but it performs subpar compared to chips soldered directly onto the card, and it is harder/costlier to manufacture. So, yes, money in part.

    • @張彥暉-v8p
      @張彥暉-v8p 1 year ago

      @@diamonshade7484 RAM needs to be physically close to the chip to be fast; that's why DRAM is ultra slow compared to cache and VRAM.

    • @petrosarvanitis2800
      @petrosarvanitis2800 1 year ago +18

      A guy added an extra 8 GB of VRAM to a 3070. I'm sure if companies wanted to, they could develop GPUs with "empty memory slots".

    • @張彥暉-v8p
      @張彥暉-v8p 1 year ago +23

      @petrosarvanitis2800 Let average stupid customers get a chance to disassemble the cooler, break the card, and then blame the company?
      I wouldn't do that if I ran the company lol.

  • @PursuerOfTruth
    @PursuerOfTruth 25 days ago +11

    No... I do wanna see RAM slots sticking out of the top. It'd be like those fins on a race car, and I think with the right design it could be sick.

  • @hippityhop9522
    @hippityhop9522 1 year ago +171

    While normal RAM isn't really affected by distance from the CPU, VRAM is very dependent on the distance between it and the GPU core.

    • @arne_666
      @arne_666 1 year ago +21

      Normal RAM is actually affected; for example, laptops using DDR5 had some problems with the speed, so manufacturers are now soldering it to the board directly.

    • @sarowie
      @sarowie 1 year ago +6

      @@arne_666 Let us phrase it differently: a modern CPU and operating system have such a complex caching scheme that the average user does not notice the speed difference of a socketed CPU and socketed RAM while using Chrome.
      (We are using real-world applications, not benchmarks.) Meanwhile in gaming... yeah: gaming is gaming and benchmarks are accurate. There might be a difference in AI, mining, and gaming performance depending on...

    • @wolvreigns
      @wolvreigns 1 year ago +4

      @@arne_666 Also why most high-end, over-the-top overclocking motherboards only have 2 RAM slots. They are closer to the CPU, and of course 2 RAM sticks are more stable than 4.

    • @ARCAD3BLOOD
      @ARCAD3BLOOD 1 year ago +2

      Both RAMs are affected; VRAM and RAM are pretty similar things.
      If your claim were true, then why don't they mount RAM slots anywhere but close to the CPU?

    • @josephdias3968
      @josephdias3968 1 year ago +2

      Just make detachable VRAM chiplets that clip on and off right next to the GPU core

  • @ossiaaltola4578
    @ossiaaltola4578 1 year ago +72

    The reason they don't have expandable VRAM is that it would make the traces from the VRAM to the GPU die too long. The closer the VRAM is to the GPU, the faster you can make the RAM. And also they want your money.

    • @kilgarragh
      @kilgarragh 1 year ago

      you put them on the back so the slots are right next to the die

    • @Om_namah_shivay------108---s1h
      @Om_namah_shivay------108---s1h 1 year ago

      @@kilgarragh Exactly 💯 I was thinking the same

    • @ossiaaltola4578
      @ossiaaltola4578 1 year ago +4

      @@kilgarragh not quite how that works

    • @sarowie
      @sarowie 1 year ago

      @@kilgarragh Very funny. Even just the fan-out to and from the slot is a significant signal length, as are the pins within the slot. Yes, you can make the pitch (pin-to-pin distance) smaller... but did we not start out with a customer-upgradeable slot? Now we make it so fragile that I do not want to see it on the assembly floor?

    • @ARCAD3BLOOD
      @ARCAD3BLOOD 1 year ago

      Then make it dual.
      You can have faster and slower VRAM in the same system.

  • @LexyDaShmexxy
    @LexyDaShmexxy 1 year ago +79

    They used to do that, but there was a limitation in speed and bandwidth; it's just too slow to implement. That's why VRAM chips are so close to the GPU die.

    • @walkinmn
      @walkinmn 1 year ago +3

      And that's why Apple puts the RAM chips so close to the SoC too, and some other manufacturers are starting to do similar things. I'm worried this trend could catch on. I mean, yes, the speed and performance gains are cool, but is it really that hard to make upgradeable RAM and VRAM modules that bring these gains without losing the advantage of letting us upgrade the RAM? The issue is that the industry probably has more interest in not exploring that idea and using the performance excuse to make all the chips non-upgradeable than in giving us something better and modular.

    • @zacker150
      @zacker150 1 year ago +1

      @@walkinmn Yes. It's not just hard, it's basically impossible. You are literally butting up against physical constraints like the speed of light.

    • @walkinmn
      @walkinmn 1 year ago +2

      @@zacker150 do you have any sources to back that up? Because I actually know physics and I'm pretty sure that's not true

    • @zacker150
      @zacker150 1 year ago

      It's common knowledge. Data has to physically move from the memory to the processor, and it can only move at a fraction of the speed of light (the speed of light divided by the square root of the material's dielectric constant).
      As you increase the frequency, information has less time to travel between the GPU and the VRAM. At a frequency of 2,335 MHz (the frequency of the VRAM on the 4090), light can only travel 12.8 cm per clock period, so the VRAM must sit within roughly 6 cm of the die.
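
A small Python sketch reproducing the arithmetic in the comment above. The 2,335 MHz figure is the one quoted in the comment; the ~0.5c PCB propagation speed and the "allow about half the distance" rule of thumb are assumptions for illustration. The comment's ~6 cm figure corresponds roughly to the distance at PCB propagation speed; halving again gives an even tighter budget:

```python
# How far a signal can travel in one VRAM clock period, and the resulting
# rough distance budget between the GPU die and the memory chips.

C = 299_792_458.0          # speed of light in vacuum, m/s
F_VRAM_HZ = 2_335e6        # clock frequency quoted in the comment (2,335 MHz)
VELOCITY_FACTOR = 0.5      # assumed: propagation speed on FR-4 is roughly c/2

period_s = 1.0 / F_VRAM_HZ
dist_vacuum_cm = C * period_s * 100
dist_pcb_cm = dist_vacuum_cm * VELOCITY_FACTOR

print(f"One clock period: {period_s * 1e9:.3f} ns")
print(f"Distance light covers per period in vacuum: {dist_vacuum_cm:.1f} cm")
print(f"Distance on PCB (assumed velocity factor):  {dist_pcb_cm:.1f} cm")
# Rule of thumb: allow only about half that distance between die and memory
# so the signal arrives well within the clock period.
print(f"Rough die-to-VRAM budget: {dist_pcb_cm / 2:.1f} cm")
```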

    • @MiguelAngel-yr5zb
      @MiguelAngel-yr5zb 1 year ago

      @@walkinmn You can search for transmission line theory and length matching for high-speed PCBs; I'm currently studying that. Sorry in advance for misspeaking, I'm not a native speaker.

  • @triynizzles
    @triynizzles 1 year ago +37

    They would have to completely redo the memory controller architecture. Currently, if you have a 64-bit bus (a small number to make the example easier), there are two chips on the PCB. The only things you could do are potentially clamshell to double the capacity while maintaining speed, or go from 18 Gbps to 19 Gbps (a faster data rate), but the main thing that will give you more bandwidth and VRAM capacity is a wider bus, so that more chips provide data simultaneously.
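
A minimal sketch of the bandwidth arithmetic described above: peak bandwidth is roughly bus width times per-pin data rate, and each GDDR6 chip supplies a 32-bit slice of the bus, so clamshelling doubles capacity without adding bandwidth, while a wider bus adds both. The example configurations are illustrative, not taken from any specific card:

```python
# Peak memory bandwidth = (bus width in bits) x (per-pin data rate) / 8 bits per byte.
# Each GDDR6/GDDR6X device exposes a 32-bit interface, so bus width also fixes
# the minimum number of chips.

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

configs = [
    ("64-bit bus, 18 Gbps (two chips)",                    64, 18.0),
    ("64-bit bus, 19 Gbps (faster chips)",                 64, 19.0),
    ("64-bit clamshell, 18 Gbps (four chips, 2x capacity)", 64, 18.0),
    ("128-bit bus, 18 Gbps (four chips)",                  128, 18.0),
]

for label, width, rate in configs:
    print(f"{label:<54} -> {bandwidth_gbs(width, rate):6.1f} GB/s")
```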

    • @ozgurpeynirci
      @ozgurpeynirci 1 year ago

      This doesn't make sense; if it requires 8 chips, then VRAM modules would simply be sold with 8 chips in one unit.

    • @arkgaharandan5881
      @arkgaharandan5881 1 year ago +1

      Thank you, someone who is not dumb and has an idea wtf he is talking about, unlike this guy in the video.

  • @icerink239
    @icerink239 1 year ago +2

    Being detachable increases the physical dimensions of the connection, which also increases the latency of the memory.

  • @notisac3149
    @notisac3149 1 year ago +79

    That would be amazing, especially as newer games require sooo much more vram. There would also be literal tons less e-waste as older cards would remain viable for possibly years longer!
    Edit: It seems this used to be a thing on older cards. The prevailing thought is that expanded vram is quite a bit slower. Not sure how much validity there is in that but I’m sure there’s at least some truth to it. Still, it’s fun to see people replace parts in older cards to install more vram themselves, just not as quick and easy as a little slot like an sd card or what have you.

    • @im4d3m0n_
      @im4d3m0n_ 1 year ago +5

      The issue isn't really that GPU companies want to make money; it's that the farther away the RAM sits from the GPU, the higher the latency and the slower the data transfer, which is an issue that's impossible to resolve due to physics. We physically do not yet have a way to make the type of RAM that GPUs need expandable, other than soldering new RAM chips on, or without significantly reducing GPU performance due to incredibly slow RAM.

    • @im4d3m0n_
      @im4d3m0n_ 1 year ago +4

      @@filipionescu4543 My friend, u point fingers but don't do research

    • @Dell-ol6hb
      @Dell-ol6hb 1 year ago +2

      @@filipionescu4543 That's not really why; it's more of a performance reason, and graphics cards still last years anyway. Expandable VRAM isn't a thing anymore because, to achieve the speed and therefore performance we see in modern graphics cards, you need to have the memory chips soldered to the board as close as possible to the GPU itself. This is why VRAM is way faster than the normal RAM you put in your motherboard: it's literally right next to the processor.

    • @haywagonbmwe46touring54
      @haywagonbmwe46touring54 1 year ago +2

      @@filipionescu4543 Oh boy... not how any of this works

    • @ARCAD3BLOOD
      @ARCAD3BLOOD 1 year ago

      @@filipionescu4543 The performance of a card is not solely dependent on VRAM.
      AMD is just better at matching their cards' performance with corresponding VRAM amounts.
      They could also create their own branded sticks of VRAM, and Nvidia could, as always, close off the market to only their VRAM sticks.

  • @kristofmadarasz3166
    @kristofmadarasz3166 1 year ago +35

    Fun fact: back in the day they did make expandable VRAM on GPUs; they just stopped using it because it was slow and expensive.

    • @sarowie
      @sarowie 1 year ago +1

      it was not slow. I mean: It was fast enough at the time, but with faster GPU and RAM speeds, the margin for unnecessary connections and traces is getting smaller and smaller.

  • @verhulstak
    @verhulstak 1 year ago +18

    They actually used to do that, but a socket can't do the speeds needed for it

  • @bush2137
    @bush2137 1 year ago +2

    They have done that; back in the day some GPUs had SODIMM-like slots. That was discontinued due to the distance from the memory chips to the die. Every additional millimeter can cause performance drops.

  • @The_OG_Electro
    @The_OG_Electro 21 days ago +1

    Everything should change from buying premade GPUs to getting the parts for a GPU and assembling it like a computer to then put in your PC, so you choose how much VRAM you have, what core you have, what cooling, and any extra features that I'm not aware of.

  • @BF26595
    @BF26595 1 year ago +7

    Expandable-memory GPUs actually existed a while ago... the S3 ViRGE had 2 MB on board and you could add a daughterboard with another 2 MB of memory :) I still have it in my DOS stuff box 😇

  • @Lucas-k2x3r
    @Lucas-k2x3r 5 months ago +3

    Because they want you to buy a new card instead of more VRAM

  • @LyronAguiar
    @LyronAguiar 1 year ago +20

    Linus did a video explaining why this would not work

    • @uh-nuh
      @uh-nuh 1 year ago +2

      It would work, it just wouldn't perform well

    • @uh-nuh
      @uh-nuh 1 year ago

      It would be like those render GPUs: they are not fast and they can't run games well, but they have lots of VRAM

    • @arakwar
      @arakwar 1 year ago +2

      He probably screwed up that data too.

    • @RuyGedares_GuyRedares
      @RuyGedares_GuyRedares 1 year ago

      @@arakwar XD 😂

  • @jmtradbr
    @jmtradbr 1 year ago +2

    They had it in the past; the technical reason they don't today is speed. The closer the RAM is to the die, the better. Also, all the VRAM chips need traces of the same length. Adding removable VRAM would affect the soldered VRAM.

  • @soju506
    @soju506 1 year ago +1

    Lots of graphics cards a couple of decades ago had a design like that, with slots on the graphics card similar to the RAM on your motherboard. The reason they don't do this anymore is signal integrity. One of the main factors that makes VRAM so much faster than the normal RAM you slot into your motherboard is that it is closer to the die. The typical rule of thumb is that the faster you want something to be, the closer you have to move it to the die to maintain signal integrity.

  • @krishahirwar7326
    @krishahirwar7326 1 year ago +4

    They haven't done it because it would increase the response time. Even if just by some nanoseconds, it would be significant 😊😊😊

    • @BlueSheep777
      @BlueSheep777 1 year ago +4

      They didn't do it because:
      AMD: It's good enough that all the power is used.
      Nvidia: They didn't do it so you would buy a better model. Also, the price premium on the 16 GB 4060 Ti is more than the actual cost of the extra VRAM.

  • @IstyManame
    @IstyManame 1 year ago +4

    balls

  • @jamieswithenbank1813
    @jamieswithenbank1813 1 year ago +1

    There is a reason they don't do this: signaling latency. If you notice, these days the RAM on graphics cards is packed all around the main processor; this allows higher-frequency RAM at higher signaling rates. Adding the extra distance involved in a connector, plus the potential unreliability of the potentially dirty connections to the removable chips, actually reduces the maximum speed possible for a given reliability. When you are working at the edge, a little bit of dust in your RAM slot can cause headaches and slowdowns. I'm sure there are ways to solve it in the long term, but for now the companies simply solder the RAM right next to the GPU processor and use the proximity and reliability to squeeze out every last drop of performance.

  • @kaseyboles30
    @kaseyboles30 2 months ago +1

    They stopped doing expandable VRAM a long time ago in order to have very predictable connections. Bandwidth to RAM depends on a lot of traces; it's a wide parallel bus, and the wider and faster such a bus gets, the trickier it becomes to keep everything in sufficient lockstep. By having a fixed trace shape and length for every connection to each chip, you can do it. Just the tiny variations in a DIMM kit are enough to break the system when you're talking hundreds of lines. Though with CAMM2-style memory it might be doable, or at least as a secondary RAM, so a game could dump all its textures onto the expansion RAM and not only free up the PCIe bus, but also allow faster loading of textures and other data not in main RAM.
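
A rough illustration of the lockstep/length-matching point above, assuming a GDDR6-class 18 Gbps per-pin rate and roughly 0.15 mm of trace per picosecond of propagation (both assumed, illustrative values, not specs of any particular card):

```python
# Why trace-length matching gets so tight on a wide, fast parallel memory bus:
# even small length mismatches eat a large share of each bit's timing window.

DATA_RATE_GBPS = 18.0          # assumed GDDR6-class per-pin data rate
PROP_SPEED_MM_PER_PS = 0.15    # assumed: ~0.15 mm of trace per picosecond on FR-4

unit_interval_ps = 1e3 / DATA_RATE_GBPS   # time allotted to a single bit, in ps

print(f"Unit interval at {DATA_RATE_GBPS} Gbps: {unit_interval_ps:.1f} ps per bit")

for mismatch_mm in (0.5, 1.0, 5.0):
    skew_ps = mismatch_mm / PROP_SPEED_MM_PER_PS
    share = skew_ps / unit_interval_ps * 100
    print(f"{mismatch_mm:4.1f} mm length mismatch -> {skew_ps:5.1f} ps of skew "
          f"({share:5.1f}% of the bit window)")
```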

  • @hinterhaeltiger
    @hinterhaeltiger 1 year ago +1

    They actually did that. But as latencies became more critical, they realized that expansion slots really eat into latency. And since the graphics card is one of the most latency-critical components in your computer, it just isn't feasible anymore.

  • @GauravTG2706
    @GauravTG2706 20 days ago

    The "One Up" sound in 0:23 really made me think I just received a notification...
    I USE THE SAME NOTIFICATION RINGTONE LOL.

  • @rtechie
    @rtechie 1 year ago +1

    Back during the VESA Local Bus days (1992) graphics cards had memory slots and you could add VRAM.
    The reason why this died is because the slots themselves actually slowed memory access and nowadays with PCI Express that effect would be even worse.

  • @direkta010
    @direkta010 1 year ago

    The reason is that when the VRAM is closer, it shortens the traces, which in turn gives you more FPS. If it were upgradeable, the traces would be longer, increasing the cost and complexity of the routing.
    For the companies, it would increase R&D costs and lower customer satisfaction, because the card would need to be bulkier and uglier, and it would use sockets that could fail.
    Like, an 8 GB VRAM GPU built this way would go for 500 USD.
    (I am not an engineer of any sort)

  • @Renagale
    @Renagale 1 year ago

    The RAM in GPUs used to not only be expandable but also not brand specific, and if you're willing to modify your GPU and void the warranty, you can still expand it. The problem is that if they allow you to easily upgrade or add your own RAM, they can't push newer models on you. The 1080 Ti was a GPU with 11 GB of VRAM that, when overclocked, can still compete with if not pass 30-series cards that only have 8 GB of VRAM. So manufacturers aren't likely to allow users to upgrade their GPUs for $100 when they could sell you a new "better" GPU for another thousand dollars.

  • @dosanlol
    @dosanlol 2 months ago

    The memory traces have to be so precise at this point that it makes this almost a technical impossibility; it'd cause more hassle than it's worth, so it's avoided.

  • @cobaltretrotech
    @cobaltretrotech 1 year ago

    I love the idea of user-replaceable RAM, but manufacturers stopped doing this after about the 90s or so because it is a lot harder to push the high speeds that today's graphics cards need on removable chips. There is a genuine speed advantage to soldered memory, or so the story goes.

  • @samanthagriffinv2.08
    @samanthagriffinv2.08 6 days ago

    Fun fact: there used to be GPUs in the '90s to early 2000s that did have upgradeable RAM

  • @GZGemingBoi
    @GZGemingBoi 24 days ago +1

    imagine being called out LMAOO

  • @davidgrunga
    @davidgrunga 1 year ago

    Would be the most consumer-friendly tech drop in years. Would love to see it

  • @shijikori
    @shijikori 1 year ago +1

    The reason they don't put VRAM slots is latency. Also, VRAM on the vast majority of GPUs is only on the front side, not the back side. The arrangement of the VRAM chips on the GPU PCB is quite integral to the signal integrity, and so to the performance, of the VRAM. It's just like how soldered DRAM can be as fast for less power, or faster, or just more stable, because soldering makes it possible to optimise the traces and improve latency.

  • @uattias
    @uattias 5 days ago +1

    I mean, I wouldn't mind having 20 empty RAM slots sticking out the back of my GPU.
    Would look pretty cool actually.
    Like the same way server boards have just a ton of slots next to each other; they look fascinating to me.

  • @kerfumblelumble
    @kerfumblelumble 1 year ago

    I've learned so much about PCs. I didn't even know GPUs had their own RAM. Wow.

  • @daredog509
    @daredog509 1 year ago

    mad respect to the guy who commented "balls" and then got featured in the video

  • @hanro50
    @hanro50 1 year ago

    This used to be a thing. The issue is that modern VRAM has to sit so close to the main GPU chip that this is no longer possible without sacrificing performance significantly.

  • @realalphas
    @realalphas 1 year ago +1

    You can't make it, because the distance the electricity has to travel through the copper would increase, and maintaining the same trace length to every die is practically impossible.

  • @Te.le.
    @Te.le. 11 months ago

    From an amateur computer engineer: it is possible, but it hasn't been done yet, as GPUs weren't made to be customized internally. It's way more profitable to lock the VRAM to 2-3 models so you have to pay more for higher VRAM.

  • @josha6118
    @josha6118 4 months ago

    Cool idea and I like it, and I can maybe see them doing that because they would probably gain money; if, like, Nvidia also makes the expansions, then people would definitely buy them. But anyway, here's a brownie recipe:
    Ingredients:
    - 1 cup unsalted butter
    - 2 cups granulated sugar
    - 4 large eggs
    - 1 teaspoon vanilla extract
    - 1/2 cup all-purpose flour
    - 1/2 cup cocoa powder
    - 1/4 teaspoon salt
    - 1 cup chopped nuts or chocolate chips (optional)
    Instructions:
    1. Preheat oven to 350°F (175°C) and grease a baking pan.
    2. Melt 1 cup butter, mix with 2 cups sugar.
    3. Add 4 eggs and 1 tsp vanilla; mix well.
    4. Sift in 1/2 cup flour, 1/2 cup cocoa, and 1/4 tsp salt; stir.
    5. Optional: add 1 cup nuts or chocolate chips
    6. Pour into the pan, bake for 25-30 mins.
    7. Cool completely, cut into squares, and enjoy your brownies!

  • @techorgames
    @techorgames 1 year ago

    The main reason they haven't done it is that the VRAM would be a lot slower. If it is soldered on, like it is at present, the VRAM can be accessed by the GPU much more quickly, resulting in higher speeds. If they were to add slots, it would slow it down by a huge margin, although they could probably make some weird hybrid thing work somehow...

  • @applemirer3937
    @applemirer3937 4 months ago

    I seem to remember a card that you could add hundreds more megabytes to, and it was cool for the time.
    They said they stopped because of the number of traces needed for modern VRAM. HBM needs even more traces and sits even closer to the GPU, so they're going in the opposite direction.

  • @ruebo2817
    @ruebo2817 1 year ago +2

    Won't happen because of performance. You can even see it in laptops with DDR5: the SODIMM versions aren't as fast as the soldered ones. You'd have to somehow invent a slot that has next to no latency and is really close to the processor. You can't really do it on the front of the GPU because of cooler designs, so it would have to be on the back, adding more distance and more latency on top of the latency the slot itself would introduce.
    I would be really surprised if we were to see such a GPU.

  • @rohanpatel-h8r
    @rohanpatel-h8r 7 months ago +1

    Like with a CPU, where you can change from 12th gen to 14th gen with only one lever-like thing that connects it; that makes production cheap and reliable.

  • @commandershadow8117
    @commandershadow8117 1 year ago

    The M.2 explanation was because of the extra PCIe lanes available from the slot, not because of the GPU itself. Expandable storage on gaming cards leads to a lot of issues because of timing. Swappable memory is generally slower than soldered memory, and the extra steps can cause worse FPS, because the memory can't send out data fast enough, or cause inconsistent frame times.

  • @Johnny_C137
    @Johnny_C137 5 months ago +1

    Expandable VRAM would likely run at slower clock speeds than integrated VRAM, but if it allows you to go balls-out overkill on VRAM, it might still be worth it.

  • @sonthebikeman
    @sonthebikeman 1 year ago

    Design aesthetics aside, the reason why we don't see the slots (and I think we won't in the future) is that the GPU chip needs to be as close to the memory modules as possible to prevent latency that can degrade the display experience of the GPU.

  • @ninjasiren
    @ninjasiren 1 year ago

    Some late-90s and early-2000s GPUs and 3D accelerator cards had upgradeable video memory slots:
    the Matrox G200 and some of the S3 GPUs.
    Though obviously, because of the distance and latency, using a daughterboard or slotted VRAM back then was slower, though far more cost-effective.
    But I would love to see it return.

  • @InventionPR
    @InventionPR 1 year ago

    Who remembers cache memory sticks? Good old days

  • @prathamsingh4981
    @prathamsingh4981 2 months ago

    The reason this does not happen on modern GPUs is that sales of the high-end GPUs would then be nonexistent, and the companies would ultimately suffer from the lack of sales.

  • @emrekolmek6219
    @emrekolmek6219 21 days ago

    I hope they finally make the GPUs in prebuilts removable/replaceable.

  • @michaelsmithers3226
    @michaelsmithers3226 1 year ago

    This is a really cool idea! It actually used to exist, but they decided to do away with it because the memory speed was not fast enough, or at least not as fast as having all of the memory soldered.

  • @Xibyth
    @Xibyth 1 year ago

    It would require a ton of design, engineering, a new slot, and some very unique RAM. DDR5 is the latest you can buy right now. Likely slots for individual chips where you could swap out lower capacity chips for higher ones. Latencies between chips would also be a concern.

  • @TLNik
    @TLNik 1 year ago

    I see some top comments and I think they're giving pretty good ideas. Like, keep soldered VRAM near the chip for tasks that require low latency, and maybe add 1 or 2 SO-DIMM-like memory slots on the back of the GPU for expandable memory used as a longer-term cache. If there's anything the GPU would otherwise need system RAM for, it would just store it in the GPU's SO-DIMM RAM instead, like, say, model assets past the point of rendering. It would be slower than soldered VRAM, sure, but it can still be put relatively close to the GPU, and certainly way closer than constantly running to system RAM via the CPU and back.
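
A toy sketch of the two-tier idea described above: latency-critical buffers land in a small, fast soldered pool, and bulk assets spill into a hypothetical slower slotted pool, falling back to system RAM over PCIe only when both are full. All names, capacities, and the placement policy here are hypothetical illustrations, not any vendor's design:

```python
from dataclasses import dataclass

GIB = 1024 ** 3

@dataclass
class Pool:
    """A simple memory pool with a name, a capacity, and a usage counter."""
    name: str
    capacity: int
    used: int = 0

    def try_alloc(self, size: int) -> bool:
        if self.used + size <= self.capacity:
            self.used += size
            return True
        return False

fast_pool = Pool("soldered GDDR", 8 * GIB)       # hypothetical 8 GiB on-card
slow_pool = Pool("slotted expansion", 16 * GIB)  # hypothetical 16 GiB module

def allocate(size: int, latency_critical: bool) -> str:
    """Place latency-critical buffers in the fast pool first; bulk data prefers the slow pool."""
    order = [fast_pool, slow_pool] if latency_critical else [slow_pool, fast_pool]
    for pool in order:
        if pool.try_alloc(size):
            return pool.name
    return "spill to system RAM over PCIe"

print(allocate(2 * GIB, latency_critical=True))    # e.g. render targets -> soldered GDDR
print(allocate(10 * GIB, latency_critical=False))  # e.g. streamed assets -> slotted expansion
```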

  • @brandonmartino1363
    @brandonmartino1363 1 year ago

    GPU firmware is hard-coded for the RAM type.
    An example is the 3080 that had a few of its RAM chips upgraded to the ones that were in the 4060.
    The graphics card was able to see all the RAM but was unable to use all of the RAM available, and hard-locked when usage hit or went over the amount that was originally set in the hard-coded EPROM.
    Graphics card makers could easily enable this by fixing and opening up their BIOSes, but those are considered encrypted trade secrets by the manufacturer.

  • @LabraDork-uj7ib
    @LabraDork-uj7ib 1 year ago

    Y'all need to read a white paper. Transfer times dramatically increase when you start putting memory on DIMMs or any transfer medium that isn't directly soldered.

  • @SomeOneios
    @SomeOneios 5 months ago

    "Upgradeable Graphics Processing Unit," the UGPU; that sounds fairly nice. They should also add expandable boards for more available VRAM. Like, every new socket could have a different board, faster or slower, for different UGPUs; an RTX 3060 could go from 8 to 16 like you said, and to upgrade you could pick a new 40-series board and die. I leave the cooling to the more imaginative people.

  • @daydream605
    @daydream605 1 year ago +1

    Two words to describe why this hasn't happened:
    Cost
    Latency

  • @AnimilesYT
    @AnimilesYT 1 year ago

    I think visible expandable VRAM would add to the aesthetic

  • @blendpinexus1416
    @blendpinexus1416 1 year ago

    The M.2 slots are there because those GPUs didn't use the full x16 (usually only x8), so tossing an M.2 on there to use up some of the other lanes is a good idea, and with DirectStorage the M.2's data wouldn't even have to pass through the PCIe slot.
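
Quick arithmetic behind the spare-lane idea above, assuming a PCIe 4.0 x16 slot, a GPU wired for only x8, and the nominal ~2 GB/s per-lane rate before protocol overhead (all assumed, illustrative values):

```python
# How many spare lanes a x8-wired GPU leaves in a x16 slot, and what they
# could feed in terms of x4 NVMe drives.

PCIE4_GBS_PER_LANE = 2.0   # ~2 GB/s per lane for PCIe 4.0 (nominal, before overhead)

slot_lanes = 16            # physical x16 slot
gpu_lanes = 8              # assumed: the GPU itself only wires up x8
spare_lanes = slot_lanes - gpu_lanes

nvme_lanes_per_drive = 4   # a typical NVMe SSD uses a x4 link

print(f"GPU link:    x{gpu_lanes}  -> ~{gpu_lanes * PCIE4_GBS_PER_LANE:.0f} GB/s")
print(f"Spare lanes: x{spare_lanes}  -> room for {spare_lanes // nvme_lanes_per_drive} "
      f"x4 NVMe drives (~{nvme_lanes_per_drive * PCIE4_GBS_PER_LANE:.0f} GB/s each)")
```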

  • @alectrona6400
    @alectrona6400 1 year ago

    I have an old iMac and a PC ATi Rage Pro that had expandable VRAM... it's nothing new, but they haven't done it in over 20 years because it became impractical for most people down the line. I'd love to see it again, though. An SSD-equipped card like that one 4070 might function similarly to what was once more common.

  • @o_sagui6583
    @o_sagui6583 1 year ago

    Sure, you can increase capacity, but that will cost you your data bandwidth.
    A better idea would be for companies to actually sell memory expansion kits or services, because the hard part of a VRAM upgrade isn't soldering the chip itself but rather the VBIOS changes required to make it run as intended.

  • @idontknowagoodnam3
    @idontknowagoodnam3 1 year ago +1

    They should add RAM slots on RAM

  • @Schabik1990
    @Schabik1990 1 year ago

    Easy, just use SODIMM RAM slots (those used in mini PCs, laptops, and AiO PCs). They would sit behind the backplate of the GPU, flush with the GPU board, and the backplate would be used as a radiator. I'm talking about the aesthetic part.

  • @wishdankpods
    @wishdankpods 1 year ago

    The problem with it is that turning it into a slot dramatically decreases speed; that's why soldered chips can reach way higher speeds than socketed or slotted ones. It adds excess length that, even if the slot were right next to the chip, would make performance much worse than having less VRAM at higher speed. It's a great idea, but until we can find a way to make slotted as fast as soldered, it's never gonna take off.

  • @jasonsandhu8141
    @jasonsandhu8141 1 year ago +1

    It would be a lot slower and more expensive

  • @ItsKuyaMarvx
    @ItsKuyaMarvx 2 months ago

    This is hard because VRAM is positioned as near to the GPU core as possible. An additional VRAM slot would create unnecessary distance to the chip, which would affect overall GPU performance.

  • @ryankline8188
    @ryankline8188 1 year ago

    Dude. Seeing a fully loaded GPU with all its slots filled would look badass. Like that old Sega Genesis!😂🤣

  • @David-ms3zf
    @David-ms3zf 24 days ago

    Honestly, instead of having NVMe SSDs in GPUs, we should have expansion boards that are all NAND and the GPU gets to be the controller for all of that NAND flash, allowing faster transfer between the controller and the GPU itself, because the controller would be directly in the GPU itself and not in the expansion board.
    Also, expansion slots for VRAM on GPUs would be really great as well.

  • @EyebrowsMahoney
    @EyebrowsMahoney 1 year ago

    As many others have said, niche cards of years past used to have this feature, but the main reason they don't do this is that trace length and design can drastically affect performance. Latency, signal interference, and outside factors such as variations between chip manufacturers or even binning can result in drastically different performance from card to card and RAM chip to RAM chip. You also run into the thermal limitations of current-generation VRAM, which requires more sophisticated heatsinking than simply slapping a heat spreader on it.
    Given the limitations of board population (cards would have to be even larger), the amount of engineering needed to avoid hurting VRAM access performance is too cost-prohibitive and leaves too many variables to handle reliably with current technology. It is likely possible, but packaging is the largest issue that isn't easily resolvable, along with designing a standard to ensure interoperability; otherwise you run into vendor lock-in and the issues that come with testing and validation.

  • @TheSmilePerson
    @TheSmilePerson 1 year ago

    Bro, imagine if they put more VRAM in the 1080 Ti. It would literally be a legend

    • @ARCAD3BLOOD
      @ARCAD3BLOOD 1 year ago

      ... For what do you need more VRAM in a 1080 Ti? Even by new graphics card standards, it is enough.
      It's barely above an RTX 3060, and the RTX 3060 is perfectly fine with 12 GB.
      You won't get more performance out of it by just adding RAM.

  • @THEDRAGONGAMER
    @THEDRAGONGAMER 1 year ago

    AMD made a GPU, which isn't sold anymore, that can use M.2 SSDs as VRAM; Linus made a video about it. It was only made for a specific purpose, though, and it's rare to even see one in the wild since only a handful were made.

  • @gameshottwojastara3318
    @gameshottwojastara3318 1 year ago

    Make special holes inside the heatsink where you can put a special stick of VRAM that is hidden by the heatsink. But some expensive sticks would have RGB on top that sticks out from the heatsink and makes your graphics card glow more.

  • @norliasmith
    @norliasmith 1 year ago +2

    Graphics cards in the past: *whistling nonchalantly*

  • @mustardroshi418
    @mustardroshi418 1 year ago

    As an electronics engineer, I'll just say it's gonna be slower and perform worse than the built-in ones; it's similar to the SD card vs. internal storage type of problem.

  • @deanchaffee6960
    @deanchaffee6960 1 year ago

    Honestly, let's just make GPUs their own motherboard, with a CPU, SSD, and, of course, dedicated RAM.

  • @skorpers
    @skorpers 1 year ago

    The reason this isn't a thing is that it would become the new printer cartridges. They'd be incentivized to release GPUs with artificially lower amounts of VRAM because consumers can just "upgrade it later".

  • @Weneedaplague
    @Weneedaplague 1 year ago

    Hear me out: SODIMM sticks behind the die package. Close for speed, accessible, and simple. The only issue is relocating the transistors that are back there.

  • @thefallengaming9556
    @thefallengaming9556 1 year ago

    They should also really fix the latency issues for people; that way we don't see a wire running across the floor because we like to maximize our gaming.

  • @Albino.Monkey
    @Albino.Monkey 1 year ago

    Expandable RAM on GPUs did exist back then. There was a reason it got removed, and it was mostly because soldered-on RAM delivered better performance.

  • @troudbalos333
    @troudbalos333 1 year ago +1

    Did he just call a base-level GPU 8 GB of RAM!!?

  • @Deathbyfartz
    @Deathbyfartz 1 year ago

    I'm afraid you would probably have to design an entirely new standard for interfacing VRAM with a socket; it would probably end up like SLI did.

  • @triandot
    @triandot 1 year ago +1

    "yeah that would be a pretty cool life-changing invention, but it has to be prettyyyyy"

  • @lachlandawson9737
    @lachlandawson9737 1 year ago

    People are saying it would be a speed issue because of the distance, but it would also be a cooling issue, because VRAM is normally cooled by the GPU cooler. So it wouldn't be easily upgraded without disassembling the GPU.

  • @professordraxon3982
    @professordraxon3982 1 year ago

    Each connector (essentially in a daisy chain) adds time and length. They actually did have upgradeable DDR RAM on GPUs, but it didn't take off because the connectors caused more harm than expected, so it quickly died; not even a handful of GPUs ever hit the market with this design.

  • @noxproductions6851
    @noxproductions6851 1 year ago

    It would be a good idea for when the card's integrated VRAM is used up, if the expandable secondary memory is that close to the GPU. It wouldn't work as well as integrated VRAM, but it would definitely work better than when the GPU has to use the DRAM in the PC as VRAM.

  • @Larniee_X
    @Larniee_X 11 days ago +1

    ReadyBoost for GPUs with M.2 SSDs attached. Imagine the possibilities with a 7000 MB/s read speed 😀

  • @sr_spangblop7047
    @sr_spangblop7047 1 year ago

    It's a cool idea, but VRAM timings are super delicate, and something as simple as making the traces longer can throw off that timing enough to negatively impact your performance.

  • @geekshed1674
    @geekshed1674 1 year ago

    They used to have expandable memory back in the nineties; I expanded the VRAM in my S3 Trio.

  • @kalliste23
    @kalliste23 2 months ago

    It would make the GPU significantly more expensive to do it right, which kind of defeats the object of the exercise.

  • @SignalJones
    @SignalJones 3 months ago

    I imagine something like CAMM memory modules would be awesome for this. I would definitely mount it directly on the exact opposite side of the card from the die, though. You want really short data runs to the GPU.

  • @3sotErik
    @3sotErik 1 year ago

    Linus addressed this & it actually used to be a thing for engineering / architecture cards back in the day. VRAM needs to be as fast as possible & the added trace length & signal degradation from DIMM sockets isn't conducive to faster, more stable speeds. On that note, high-end laptops with soldered LPDDR5 are able to run faster & with tighter timings than socketed desktops.
    I want to go the other way & get a motherboard with super fast RAM soldered to the board, with 2 expansion slots for standard DIMMs, but it would take an architectural change for a processor to use RAM of 2 different speeds without slowing both down. Perhaps treat the soldered RAM as L5 cache?

  • @shotybumbati
    @shotybumbati 1 year ago

    I think the only issue is that RAM for your CPU is usually an entire generation behind what GPUs use. Right now we could theoretically pop in a stick of DDR5, but your modern GPU is already using GDDR6, if not some faster refresh of it. That's SIGNIFICANTLY slower in terms of chip-vs-chip RAM performance. Also, I'ma call you out on hiding the RAM DIMMs: you can get matching, really dope-looking ones and have them RGB all out the top of the graphics card. It would look great in vertical mount as well, if you matched the look of the card properly.