I just saw this is a dub channel of another. I didn’t know I was looking for content like this and I love it. I’ve only seen dub channels go to other languages from English so it’s really cool to see that it actually does work!
The biggest problem with quantum computers is the lack of software. Writing code for them is so complicated that it practically excludes 99% or more of the people who currently make their living by writing code. That means fewer developers and thus less development, unless coding becomes heavily AI-assisted.
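For anyone curious what "code for a quantum computer" even looks like at the bottom, here is a minimal sketch that classically simulates two qubits with plain NumPy (the Hadamard and CNOT gates are standard; everything else is just an illustration, not any vendor's actual SDK):

import numpy as np

# Two-qubit state vector, starting in |00>
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# Standard Hadamard (H) and CNOT gate matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# H on the first qubit, then CNOT (first qubit controls the second): a Bell state
state = np.kron(H, I2) @ state
state = CNOT @ state

# Born rule: measurement probabilities are |amplitude|^2
print(np.abs(state) ** 2)   # ~[0.5, 0, 0, 0.5] -> outcomes 00 and 11, 50/50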
@@timothywilliams8530 Seeing how much of a mess it's been just to get quantum computers to the point they are now, I have doubts about all their pros in general.
@@timothywilliams8530 My mom worked with a computer that took an entire room before I was born. Her company actually had 2 of those. Size, price, and power consumption do not matter if it has a purpose... a profitable use. Current quantum computers have no use whatsoever... not because they are bad, but because there is no software that would make them better than standard computers. And there is no software because there is only a handful of people who can write any, and they probably call each other by their given names ;-)
I say the biggest problem is that they barely exist... (YET). We still count qubits in the hundreds, and their quantum properties don't last at all. Software is nothing, I promise you: as soon as quantum computers are truly viable, software will be created ASAP. Software has little to no effect on quantum computing development. Big corps like Google are just throwing money at it like it's just around the corner, when it's the furthest thing from that.
We've still got some ways to go with classical architecture. You can also gain performance by improving how quickly we can access memory and how much of that fast memory we have, plus architectural adjustments with concepts like RISC, big.LITTLE, V-Cache, new schedulers in OSs, etc., so on both the hardware and software level. With many new technologies being developed, like quantum, DNA computing, and optical computing, I don't think these will replace the classic computer, but with interconnects between them they can work in tandem on the tasks they are each best at.
According to some scientists, quantum computers may never become real fully capable computers and will instead become, at best, some kind of accelerator limited to very specific tasks.
Look at it this way. Human DNA is about 2 nm wide. Achieving human-DNA scale would be not only astounding but would also open up the potential for micro-robots. At some point, all manufacturers will have no choice but to develop more cores to handle more tasks. In addition to this, it might be possible to develop cybernetics and simulate human nerves.
I wholeheartedly disagree with the concluding section of this video. It insinuates that the future of computers is hardware reliant on a quantum-cloud-compute backend: devices about as functional as a storage drive with some added connectivity, mostly hollow toys serving as GUIs. I reject such a plane of existence, even if there is simplicity, or even necessity, in it; such a paradigm shift would basically mark the end of personal computers and of ownership of anything related to computers. Quantum computers may as well exist in their own space and continue to get more sophisticated; they will still require close-to-0-Kelvin temperatures to even function, so an average person cannot carry such a device, let alone run one in a home setting in today's society. I cannot visualise a future in which these devices become so compact, efficient, easy to manage, and affordable as to be the de facto standard for computers because somehow, for some godforsaken reason, they are the only way forward. How dreadful.
Ignoring the severe limitations Q-computing has as of today, the prospect of not being able to own a full computer is actual garbage. I'd rather stay with my classical slow piece of junk than have to trust big tech and cloud providers.
Years ago, I did a calculation to determine when the speed and power benefits of process scaling would no longer offset the process variation. I got 5nm for that number as well.
When it comes to silicon being a pillar of computing, that will never really go away. Why? Mass computing. When you want to compute en masse, and power usage is not a problem given enough silicon-based photovoltaics, you can still rely mostly on silicon for workloads that demand so much computing power that the scarcer-resource options become prohibitively expensive. It's like building a very large workhorse computer, while reserving the scarcer-resource options for small applications. The workhorse serves as an off-site computing 'monster' to offload work onto. This may seem inefficient, but since electricity can easily be farmed with photovoltaics, the power requirement is a lesser drawback, while mass computing gets cheaper over time and still benefits from increased performance. Once people go into space and build space stations, remote computing this way could be much cheaper than supplying scarcer-resource computing options to everyone.

The problem isn't making a few million small devices that require scarce resources (and are therefore expensive); it's when the newer options have to be made by the billions, yearly. The mining required to get enough of the scarcer resources would be so bad for the environment that humanity would 'compute' itself to death. Since computing ourselves to death does not compute, en masse computing with the cheaper silicon option makes more sense. Note also that en masse computing is more efficient: when one user isn't using a resource, someone else can be allocated more of it so their task completes faster. For remote computing of things like gaming, this may also work well. Silicon will always remain the easiest resource for computing, since it's the most abundant. The new computing options will also have their own size limitations, albeit somewhat smaller ones; that is a fixed factor.

4:28 Yes, the fabrication cost of non-silicon processors is much higher due to the scarcity of the required materials, and that will not change. Because of limited availability, the newer processors would also be fabricated in much smaller quantities while being much more expensive, and with rising costs for scarce materials their price would go up over time, not down. Regular silicon-based processors, by contrast, keep getting cheaper over time. Combined with photovoltaics, the increased power usage would not be the limiting factor.

13:56 Photonics is best used for transferring data, not for computing. Extremely short links, say between chip layers and for off-chip pathways, would be an option, since they can move data very fast while using very little power, which means less heat. Photons travel 'light', so to speak, while electrons travel 'heavy'. The catch is that making very small light emitters/receivers and very small waveguides is much harder than making a regular circuit, so at this point in time it's not very useful. A similar thing holds for watts per unit of computation: once the wattage is low enough, further reduction becomes less useful, and only extreme high-end computing would be served by pushing it further.
Let's look at an example: suppose a fully capable processor can run continuously off a 10 cm x 10 cm solar panel. What difference would it make if it could instead run off an 8 x 8 cm panel, at ten times the cost? Less power used, yes, by roughly a third, but at tenfold cost? So at some point the resource cost will define the computing solution, rather than its capability, once again because of material scarcity.

14:46 Photonics will not remove the limitations of distance. Any signal is still limited by the speed of light, whether it's electronic or photonic, so latency stays the same no matter which medium is used.

Basically, the best way to make processors faster right now is to provide on-chip memory, with comparatively slow memory used externally. On-chip memory would speed up processing by removing much of the latency of memory accesses, like cache does, but on a larger scale: one cache plus a large on-chip memory area, say 16 GB of base memory on the chip with 4-8 MB of cache. When a large external transfer is needed (like writing to an SSD), the memory controller would copy the on-chip memory to external memory and from there to the SSD, optimizing the transfer speed between on-chip and external memory. On chip, the memory could have the same latency as regular cache, and the transfer rate between on-chip memory and external (DDR) memory would still be many times higher than between regular memory and an SSD. Likewise, things like virtual memory on an SSD could be replaced by external (DDR) memory, increasing the computational output per unit by that much as well. Knowing that most of a CPU's and GPU's time is wasted waiting for memory, moving these operations from external memory to on-chip memory can make a big difference, even if the processor's raw compute is smaller. Solving each bottleneck in turn is also a way to speed up computing, sometimes making a bigger difference than shrinking transistors.
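To put a rough number on why moving memory on-chip helps, here is a minimal sketch of the textbook average-memory-access-time model; the latencies and miss rate below are made-up placeholders, not measurements of any real part:

# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical case: small cache missing out to external DRAM
print(amat(1.0, 0.10, 80.0))   # 9.0 ns average per access

# Same miss rate, but misses now land in large on-chip memory instead of DRAM
print(amat(1.0, 0.10, 15.0))   # 2.5 ns average per access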
Ge is used in the military industry, that's why. Also, gallium arsenide is used in military applications because chips made from that material have the lowest failure rates and extended temperature limits.
Check into computing with ternary. It is mathematically more efficient than binary and requires just one extra voltage level. This could lead to more efficient and faster CPUs than we have today, and some are already being built. All good wishes.
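As a toy illustration of the balanced-ternary idea (digits -1, 0, +1), here is a minimal sketch in Python; it has nothing to do with any real ternary hardware, it just shows the number system:

# Convert an integer to balanced ternary (digits -1, 0, +1), least significant digit first
def to_balanced_ternary(n):
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:        # a remainder of 2 becomes digit -1 with a carry of 1
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    return sum(d * 3**i for i, d in enumerate(digits))

print(to_balanced_ternary(42))                          # [0, -1, -1, -1, 1]
print(from_balanced_ternary(to_balanced_ternary(42)))   # 42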
Thank you. That was very informative, with a thought progression that was spot on, without someone knocking on your door. I've been imagining crystal tech ever since I learned about piezoelectricity when I was 5. Magnetic cooling, heating, and generators are my insanity. It's all about switches and ball bearings. Space and storage. I still don't know why they just didn't cut the 0 into an 8??? Cheers
If I were to guess which one would become the first step in a post-silicon CMOS world, it would probably be a room temperature tunnel field effect transistor.
I'd put my money on silicon/photonic hybrid processors. There are already ASICs out there using photonic processing which are hundreds of times more energy efficient than silicon equivalents. We already have the technology, it's just a matter of integrating them into a full CPU/GPU.
@@SupaKoopaTroopa64 photons are mass-less packets of energy and thus it is very difficult to make them interact between themselves and with any piece of matter we are made or our chips are made of. that's the reason why we still use CMOS transistors with gate delay of 30 picoseconds instead of using light and get attosecond "gate delays" with "light transistors", if they would exist. Everybody has it very clear, creating a chip purely using light will make it millions of times faster than silicon chips. The reason why light is so fast, it is because it has no mass. But you can't make a flip-flop out of photons. That is why accelerating CMOS chips with analog-style logic (implemented using interference, like some startups are doing right now) is a bad idea. It is like putting a F1 engine on a bicycle and expecting it is going to do speeds of 400 km/hour. The bearings of the bicycle wheels will simply melt because they aren't designed for such high RPM (not talking even about aerodynamics here). It would be a good idea for a college student working in his garage, but not for a company that rises 100 million or so for this kind of project. I suggest you to learn quantum physics and chip-design before you invest. The rate of bankrupcy of new companies is like 9 to 10, if not more. Only understanding the physics of the computation you can make a fair judgement of which company is going right direction or not. There are always tons of options to invest but only few of them are real opportunities.
@@absolute___zero I'm aware of these limitations. I'm just saying that production-ready photonics already exist, so they have a head start over many other technologies. Also, I don't plan on investing in any photonics startups, or even anything in the microprocessor industry right now, I was just saying that If I had to make a bet on which of these technologies would first appear in a consumer product, I'd go with photonics.
@@SupaKoopaTroopa64 Photonics of course has a future, but not as co-processors; rather as standalone computing logic with only a little CMOS. But there are many other computing methods that have been developed by scientists and not exploited yet. For example, there are mechanical integrated circuits made of mechanical relays (they achieve ~10 nanosecond on/off switching times), there are gold transistors working in vacuum like vacuum tubes, there are DNA computers, and so on. The next big idea will be a new computing paradigm. There is also a lot of work to do in software to gain performance, for example developing massively parallel operating systems.
16:58 Classic mistake: the qubits are not in every state at once; each can only have one state at any given time. They fluctuate really fast, though. The boundary between digitally reading 1s and 0s limits these state readings, while the qubit itself is really an analog computing bit, with its state varying between 0 and x, where x is the highest possible potential contained in that qubit. 18:07 Well, personally I'd like to see quantum graphics cards, with a nice new Unreal Engine 6 or so. Nanite + quantum => huge open worlds with extreme view distances.
@@paulssnfuture2752 Which is very hard to do, considering that right now the base material is simply sand and we've gotten extremely good at growing near-perfect Si crystals; good luck beating either of those things. I think we will just get more and more silicon around the CPU (more cache, integrated RAM, accelerators, etc.), and then maybe a CPU where one of the more important bits is a different, faster material while the rest stays silicon.
This kind of thinking is what we humans do 🤔 and how we achieve things: just by solving problems one by one. Then we wait for another problem, or several, and we solve them again, and again. That's the beauty of science and what our world represents.
Not really, as it messes with phase information. You can build simple gates, but as soon as you try to do something like an AND gate, either your 1+1 value has a different amplitude from 1+0, or its phase is different. This is fine if you are just throwing it at a detector, but if you try to feed it into another gate built around interference, it is not going to work right. There might be some quantum mechanical way to align the phases, but the naive approach does not work.
Even if Moore's law may no longer apply to Si, it is still very useful for solving many of our computational applications at 16 nm and even 32 nm technologies.
I think the "next thing" will be photonic cpu or/and ai neuromorphic chiplet, which both are extryemly fast. The issue is still a silicon as a substrate. I heard that specific "glass" can be used even more better than silicon
You forgot multi-valued transistors: non-binary transistors which can switch between more than 2 values. A 4-state transistor can emulate 2 binary transistors without miniaturization, so it could preserve Moore's law without requiring smaller atoms or getting into tunneling troubles.
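The "one 4-state device holds 2 bits" claim is just a base-4 counting argument; multi-level NAND cells already exploit it for storage. A minimal sketch of the packing, purely for illustration:

# Pack pairs of bits into 4-level symbols (0..3) and unpack them again
def pack_bits(bits):                       # bits: list of 0/1 values, even length
    return [2 * bits[i] + bits[i + 1] for i in range(0, len(bits), 2)]

def unpack_symbols(symbols):
    out = []
    for s in symbols:                      # each 4-level symbol carries 2 bits
        out += [s >> 1, s & 1]
    return out

bits = [1, 0, 1, 1, 0, 1, 0, 0]
symbols = pack_bits(bits)                  # 8 binary values -> 4 quaternary values
print(symbols)                             # [2, 3, 1, 0]
print(unpack_symbols(symbols) == bits)     # True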
The statement that a field-effect transistor cannot be made under 5 nm is not true. TSMC is actually making 2 nm transistors that are GAA (gate-all-around). It's not the traditional FET, but it is still a FET.
I was watching how atoms are being controlled and manipulated on 2D materials. That would suggest we still have a ways to go regarding Moore's Law's future.
Re: "silicon being near the end of its life cycle," this is just sensationalistic journalism (or an uneducated remark). It's not going anywhere in our lifetimes due to the deeply entrenced and highly refined manufacturing process. More advanced materials will have a much higher and impractical cost of scale. The most certain outcome is slowly more advanced materials will be used in conjunction with silicon, but it isn't anywhere near ending its life cycle, not for a hundred years or more. We can even do photonic and quantum compute on silicon.
Actually, a silicon-graphene polymer has already shown up to 60% improvements in the polarity and movement of electrons, so maybe silicon will move to that.
0:20 Credit for obtaining the first pure silicon goes to Swedish chemist Jöns Jacob Berzelius. He succeeded in this in the year 1823. 4:20 Germanium semiconductors have a maximum operating temperature of up to 70°C, which is much lower than silicon semiconductors.
Another direction we could take to fundamentally change how computers work in order to increase efficiency is reversible computing. The laws of thermodynamics give us a minimum possible energy cost per bit of computation, which we are slowly approaching. However, there is a workaround, since the actual cost is in erasing information: so long as the computation does not erase information, there is no theoretical lower limit on energy use. With technology based on this, it might be possible to build 3D chips without having to worry about the heat problem at all. That said, it's at least as fundamentally different from how computers work today as quantum computers are, so it will be very difficult to transition, and in many cases we will still need components that actually erase information.
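For reference, the thermodynamic floor being referred to is the Landauer limit; a quick back-of-the-envelope at room temperature (T ≈ 300 K), which applies only to operations that erase a bit:

E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \cdot (300\,\mathrm{K}) \cdot 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J} \approx 0.018\,\mathrm{eV} \text{ per erased bit}

Reversible (information-preserving) operations are not bound by this floor, which is the loophole described above.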
I'm still a bit confused about how qubits are so powerful. Yes, having 3 values is better than having 2, but analog systems can represent practically infinitely many, since you can read voltages or currents from 0 up to however high a voltage or current your system can handle. The main limitation was that the losses varied greatly and made it unreliable, but couldn't that be fixed by superconductors? We already use superconductivity in quantum computers, so it makes no sense to me to settle for 3 when you can have practically limitless values in one memory unit.
Oh, I think you've missed quite a bit. Photonic computers don't just transmit data with light; they are CPUs, and very powerful ones at that. And memristors don't just simulate neurons closely; they are physically closer to logic gates than transistors are.
Tunneling, as I understand it, should be independent of material properties like resistance and conduction. Even with an infinite energy barrier, tunneling can get through it. How can changing materials, especially to larger atoms, help this situation?
Thank you. I hadn't seen any news about memristors for about 8 years, so I stopped looking for updates. It is great to hear there is finally some progress.
The main problem for memristors is the success of, and investment in, NAND flash memory. The advent of 3D multi-layer NAND devices gave an otherwise dying technology a new lease on life in the middle of the last decade, and it is still pushing forward. Even though ReRAM / memristors would be far superior in power consumption, latency, and speed, they are still leagues behind NAND flash in density even at the 2D level, and at the 3D level it's not even worth mentioning. Weebit is making headway, but it will be years longer before we see anything truly commercial come out of it.
It makes more sense to create a quantum chip instead of a whole quantum computer. Cheap operations use standard CPUs and GPUs, then the quantum chip takes the more complex queries.
Umm, the death of silicon as a base for ICs is GREATLY overstated. There is this idea of some END coming that just fails to think clearly about what will happen, so let's evaluate some timelines.

Sure, silicon is going to hit a limit, but so would any other material. If you want an IC based on on/off transistors, there is only so small you can make the transistors; but once you hit that theoretical size, is that the death of silicon? The answer is no. You can measure CPU performance in multiple ways. One is raw performance: work done in a certain amount of time, calculated across multiple types of applications. If I want a render machine, do I need a CPU that's great at gaming? No, I need something that's great at stream processing, and that could be a combination of CPUs, GPUs, and accelerators, or a computer that integrates all three in a single circuit to run any type of stream-processing program. Maybe with the accelerators I can program them first before running the job, so the first part of the render application checks for an FPGA and, if it exists, programs the accelerators before running the render tasks.

Next: at the beginning (which is as far as I got, because these kinds of videos are silly), the talk is about chiplets and IPC (instructions per clock cycle). What was skipped is making a process node much more efficient at running higher clock speeds. There's no law that says a CPU can't be clocked over 6 GHz; in fact, I think over the next 5 years TSMC and Intel will both have process nodes that run 6-6.5 GHz without very high power consumption. Also not mentioned is compute power in 3-dimensional space. We're getting to the point where ICs can have multiple layers, for different reasons, one being that transistor sizes keep shrinking and every shrink usually comes with a power reduction. I think it's safe to say that CPU design is going to involve 3D layouts, and this will mean layering circuits so that one layer isn't producing too much heat while another can, so planning the layout will become more complex.

But here is the REAL meat and potatoes. Home users already have access to so much compute power it's crazy. A home user can now do what large servers had to do 10 years ago, and in another 5 years the amount of power that can exist in a laptop will compete with servers from a decade ago. The move to 2 nm, or Intel 20A, which is the same thing, is SO monumental that for home users there's no need to get past it. The issue won't be having too little compute power, but having too much, with rare exceptions. One area of growth is PC gaming: graphics compute still has to improve to get high-quality 4K gaming. But we're there NOW. The GPUs that AMD and Nvidia are releasing right now are good enough for high-quality 4K gaming, and this is using a 5 nm node or a modified 5 nm node (TSMC N4). When semiconductor companies get to 2 nm, the issue won't be a lack of transistors. It will be hard to keep adding more compute units (graphics processors), and in fact Nvidia GPUs already use multiple types of compute in a single GPU: the raw compute for making the image on the screen, RT cores for the lighting and shading technique called ray tracing, and tensor cores for AI.
2 nm is SO small, with SO MUCH transistor density, that home computing may never need to move to anything else, but there will be other nodes with even better density, like 18A, 16A, etc., and this can probably go all the way down to 10A. By that time we're about two decades from now. The REAL issue isn't going to be transistor density; it will be making machines smarter, so they can do more with the same number of transistors. IPC isn't a gimmick: it's how you get more performance in a fixed time period from a CPU over an older CPU at the same clock speed.

So there are many tools for improving compute power, and they are NOT gimmicks to hide some flaw; they're PART of better engineering for computing. One is more processor cores, though for home computing you hit a limit because most programs home users run don't have that many processing threads, and adding ever more cores is a waste, whereas many types of server workloads can benefit from a LOT more cores. Another is clock speed, but it has to be raised while keeping power consumption down; that is a real, hard problem, and it IS slowly being solved, but every process node is different, so it has to be solved for each one. Another is instructions per clock (IPC), which also improves over time; one way is simply more cache, so the cores have data and instructions next to them instead of having to access system memory, but there are other ways to improve IPC too. Another is better accelerators for specific workloads: AMD, Intel, and Nvidia are all working on this, and even Apple has done excellent work with acceleration, which is part of what makes the Apple M1 and M2 processors so fast, NOT the fact that they're based on ARM instead of x86-64. Another is stacking transistors, which shrinks the area needed for the same circuit. And another IS chiplets, which help with heat issues.

The point is: yes, silicon will hit a limit for transistor density in a two-dimensional area, but that's a single problem, not THE problem; solving the other problems I listed is more important. And lastly, on how much silicon is needed: as process nodes shrink transistor size over time, a single wafer yields a lot more ICs, so we keep improving the number of circuits that come from a fixed amount of silicon. It's not an issue, at least not for another 50 years or so.
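As a toy version of the cores/clock/IPC point, here is a deliberately oversimplified throughput model in Python; the two "CPUs" below are made-up numbers, not real parts:

# Rough throughput model: instructions per second ~ cores * IPC * clock (GHz) * 1e9
def throughput(cores, ipc, clock_ghz):
    return cores * ipc * clock_ghz * 1e9

old_cpu = throughput(cores=4, ipc=2.0, clock_ghz=4.0)   # hypothetical older part
new_cpu = throughput(cores=8, ipc=3.0, clock_ghz=5.5)   # hypothetical newer part

print(new_cpu / old_cpu)   # ~4.1x, without assuming any change in transistor size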
How does one remember which position all those switches are in?!? Either on or off. Well, that is what computer glitches are: the chip forgetting where, what, who, and when the switches are doing at any given time. Software errors can multiply these glitches. 😮😊
Why not switch to hematite transistors, or silicon carbide, gallium nitride, or even synthetic diamond, while also combining any of those options with carbon nanotubes & optical computing?
Germanane has more potential in the power transistor market, as the higher conductivity produces less heat, allowing for very small high-power transistors that run cool and therefore don't need heatsinking.
Researchers at the University of Groningen in the Netherlands and the University of Ioannina in Greece have reported the first field-effect transistor fabricated with germanane, highlighting its promising electronic and optoelectronic properties.[5][6] Germanane FETs show transport in both electron- and hole-doped regimes, with on/off current ratios of up to 10^5 (10^4) and carrier mobilities of 150 cm^2/(V·s) (70 cm^2/(V·s)) at 77 K (room temperature). A significant enhancement of device conductivity is observed under illumination with a 650 nm red laser. Lol, I'm Greek and just read about germanane on Wikipedia. It seems a promising crystal, but the question is how much work it takes to make billions of trillions of transistors with it, and how they will price it in the final products; you know, every penny you pay for a product goes to labor, research, branding, production costs, etc. But it's very promising, and I hope my brethren there make more discoveries with it.
Quantum computers are NOT a replacement for classical computers, holy hell
A Classic Computer (C-Comp) becomes the I/O device to talk to the Q-Comp.
Input -> C-Comp -> Q-Comp -> (back to) C-Comp -> Output
@@DocWolph It doesn't matter what the connection is, quantum computers simply are not meant to do the same things that classical computers do. That's what the guy meant by "are not a replacement". They are not better than classical computers for most of the tasks that the vast majority of people use computers for, like web browsing, text/image processing, gaming, etc. They have little to no value for consumers. They will likely forever remain in labs, datacenters, and industry for highly specialized tasks and research.
Until someone discovers something you can do with them as an ordinary person… just like when the internet arrived.
@@fridolinkoch There are a few Quantum ray tracing algorithms with papers now.
@@cebo494
Well, at this point it is implied, at least, that you cannot directly interface with a Q-Comp. That is, you need some kind of bridging software running from the C-Comp to the Q-Comp and back.
This is regardless of application. Aside from scientific, engineering, and security work, among other things, I can see Q-Comps being used for animation, rendering, or simulation (for example, I can see water sims taking only a few minutes rather than many hours or days for a FEW high-quality frames with a Q-Comp), and that is not even the color of the tip of the iceberg of what is possible with Q-Comps.
For most things people use computers for, C-Comps will be at least enough. THIS is agreed. But there are things that are way bigger, which even the best current C-Comp technology just is not adequate for. And Q-Comps at a price a small studio, or a very dedicated hobbyist, can buy to radically accelerate their sim, animation, and rendering work (albeit the software may come a few years after the fact) are just what the studio head asked for. And that is JUST animation.
Anybody can become a "Doc Brown" and pursue science, mathematics, engineering, and MORE at home.
Again, I generally agree with what you're saying, BUT you are not thinking broadly enough.
Very good content. I just find the desync between your video and sound a bit annoying.
I think it's dubbed, probably from Russian
@@PareshPatel-xc2vu Nope. Just desync. The lips almost match the sound, with a little delay.
Nope, the video is all AI-generated from stock footage, so it's all gibberish. Might as well listen with your eyes closed.
@@DunnickFayuro It is dubbed from Russian. The original channel is called "Мой Компьютер".
@@DunnickFayuro Yeah, it is dubbed. I am the guy who recorded the dub. I synced it the best I could to the original voice, but I could only do so much. And yes, I get it that it's annoying.
I love these dual-language channels. I don't know Russian and would never have been able to understand your phenomenal video otherwise. Thank you, truly.
No wonder, I was wondering why my video/audio was out of sync lol
Ah so that's why his disgusting face seemed so familiar and audio was out of sync! He is one of the idiots that supports war in my country. He is one of the people that thinks I should die... Yeah, hope at least somebody is going to see this comment, and stop watching this genocide supporter.
Thank you, friend! That is very nice to read! Greetings from cold Russia!
@@mka2 Hi, friend! I'm from America
@@mka2 I speak simply 😢
So glad YouTube recommended this video. Nicely done, great balance of information and presentation without coming off overly optimistic or pessimistic.
Yes indeed, so far it sounds intelligent. I'm hoping I'm not going to be lectured about Moore's law. I'm convinced that is a marketing meme for Intel.
Quantum Computing isn't a replacement.
Quantum computers can't replace regular computers. Yes, they could calculate some very complex equations in a fraction of the time, but they suck at doing simple calculations.
@@Andrew-rc3vh Bruh 😂
@@hyll6700 I did watch it to the end on this occasion and it was indeed no-nonsense. Good job.
A very interesting compilation; the YouTube algorithm recommended this hidden gem.
You mentioned photonics' use in communications but didn't mention photonic switching. They've been trying to make purely photonic chips for decades, and it's always just around the corner, much like fusion. It's that research that led to photonics being integrated into silicon chips. It seems like that research was very close to yielding results, but focus and funding got taken over by quantum computing. From all I've read, purely photonic chips would run 100-1000 times faster than silicon at far lower power and heat. Hopefully, as silicon reaches its limitations, there will be renewed funding and research for it.
It's a bit sad really... Quantum computing, while exciting, is very far from producing anything useful in terms of actual computing. There have been no more than 10 algorithms made to run on quantum computers, for example.
The comment I was searching for. I was hoping the video was on that topic, not quantum computing.
Doesn't fusion have focus and funding but still no results? Are purely photonic chips really that close?
light switches have their application in networking & data transfer
Intel's working on it. Have a buddy that works in the photonics dept. Can never talk about where they're at though.
Also worth considering in the meantime: there are computers that can run on trits (-1, 0, 1), which requires some fundamental changes but could theoretically be more effective. This could even be extended further, though it gets less practical the more states you add.
Silicon photonics is what I have the most hope for in the next couple of decades. Imo, transitioning from electricity to light is just the most logical step forward. It will set Moore's law back by quite a bit, but the insane clock rate of the processors will make up for it. Most modern keyboards already use light to transmit signals.
Well, light transmission has existed for decades now, unless you mean signals like an HID interface with bandwidth measured in Hz. It's well known that most audio devices, PS5, Xbox, etc., have an optical port on the back to send sound over fiber. Now, a keyboard, if it's mechanical, uses a microcontroller that works like a small computer at a few MHz, so either there is a decoder at the USB end that reads the signal, or the computer itself can read it; I think it's the first, honestly. The photonic idea is a promising concept, but don't expect out-of-this-world PCs. Sure, there will be decent speeds; we already have 6 GHz on new-gen CPUs with 10+ cores, and in reality the gains will be diminishing. If, for example, you play games, you might just squeeze out a bit more performance, but that's it, unless they make games with graphics realistic enough to look like real life, which I think is still some decades away. And in the end it also depends on GPUs: will they be made with silicon photonics? And if yes, when?
@@HeLrAiSiNg1 I had a really hard time understanding what you typed out. The point is that light, being about 100x faster than electrons, would counteract the larger size of photonic transistors and gates with its sheer speed. This would give Moore's law a good amount more headroom to keep progressing.
@@HeLrAiSiNg1 I always thought there was more to the GHz story. Can't the CPU be made to think it's operating at higher frequencies?
@@tony_T_ Dream on. What I wrote was that light transmission has existed for decades; as for the keyboards with light transmission the original post mentions, the logical setup is that the USB end is a decoder and the keyboard is an encoder that transmits, via light, the information about what was pressed. Anyway, it will surely bring some improvements, but nothing super wow that will drop jaws.
@@dinozaurpickupline4221 Nope, it can't. You see, a CPU has billions of transistors, gates, etc., and the smaller they make them to cram in more transistors, the less voltage they can withstand. If you look at older CPUs, they were able to hit 8 GHz with liquid nitrogen, but newer ones haven't come anywhere near that, because they can't; Moore's law-style limits apply everywhere, not only to PCs. Overclocking a CPU means increasing its voltage to gain speed, which heats it. If, for example, you feed a fan rated 12 V 2 A, it will be fine as long as the 12 V stays stable and its current draw is only 0.2 A versus the 2 A rating, but if the voltage goes to 12.1 V it will heat up a bit and slowly burn out; the same goes for CPUs. No matter what you do, the only option is to give it more voltage for speed, which means it needs better cooling. There is no magic program. The best you can do is something like buying an RX 6900 and flashing an RX 6900 XT BIOS, if one exists, to make it work a bit better; the CPU doesn't have anything stored in it that you can change to make it work differently.
Silicon carbide is used in power transistors; it handles high temperatures and very high frequencies, well above silicon. It already has a supply chain and looks like a candidate.
It expands and contracts a lot.
What about GaN?
Oh boy, quantum computers are even further out than TFETs, optical compute, or memristors, and you still need a von Neumann silicon computer to work with them. Thanks so much for the great video, fantastic questions and background material. Much appreciated.
Not only are they far away from any practical implementation, but they will simply never replace your standard computers in the way people think. Quantum computers will be better at performing certain tasks, but those tasks are not part of our everyday usage, so for your personal needs you won't use a quantum computer; you'll stick to the standard computer.
Well done. Summary:
SILICON: Above vs. below a 5 nm gate width, silicon goes from discrete on-or-off behavior to statistical on-and-off behavior (i.e. the tunnel field effect).
GERMANIUM: Germanium has much better lab performance, but silicon is extraordinarily more pragmatic/practical (availability/cost, heat dissipation/tolerance, oxidation, frequency band, etc.). However, germanium might be modified to improve its characteristics (e.g. molybdenite in development, lab germanane, etc.).
CARBON nanotubes (in development): Graphene is a one-atom-thick layer of carbon atoms arranged in a hexagonal lattice. A carbon nanotube is a tube of graphene.
GALLIUM NITRIDE: has somewhat better performance than silicon & can be manufactured with silicon-based equipment/industry.
Better CMOS: design based on statistical on-and-off (the tunnel field effect) instead of discrete on-or-off, to lower power/heat. Only works for graphene & at super low temps.
MEMRISTOR: A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they are non-volatile, meaning that they retain memory without power.
OPTICAL COMPUTING:
QUANTUM COMPUTING: In theory, quantum computing can find least-worst solutions to problems regardless of the number of potential candidates (e.g. NP problems like the traveling salesman, decryption/password breaking, etc.)
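One caveat on that last point: for unstructured search, the known quantum speedup (Grover's algorithm) is quadratic rather than unlimited, and NP-complete problems are not known to become easy. Roughly:

T_{\text{classical}} \sim O(N) \quad\text{vs.}\quad T_{\text{Grover}} \sim O(\sqrt{N}), \qquad \text{e.g. } N = 10^{12} \Rightarrow \text{about } 10^{6} \text{ oracle queries instead of } \sim 10^{12}.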
I like how you said nothing about optical computing xD
@@Luizfernando-dm2rf
*I wasn't trying to show off. These were just quick notes to remind me of technology trends I was fuzzy on (thanks to this good video). I'm a retired physicist & am already familiar with optics & optical computing.
*There are trillion$ upon trillion$ of semiconductor electronic infrastructure around the world, & by comparison the optical/quantum infrastructures are very small. Quantum computing infrastructure is small but growing much faster than optical computing infrastructure. So we just might jump from semiconductor dominance to quantum-dominant computing.
What I like about quantum computing is that it could eventually be used to work on the problems we face much faster, like disease cures and silicon limitations.
No one wants to cure disease. That should be obvious. If even half a dozen major diseases were cured, it would result in Trillions of dollars in losses for the Medical profession, big Pharma and medical hardware industries. They only want to "treat" diseases because that is where the profits are.
4:56 Not quite true. I have some Germanium transistors here which were used at 10.7 MHz, and some other low power ones which were used at over 100 MHz. They were expensive, but very capable at low currents. Bipolar Silicon transistors have been made which would amplify at frequencies close to 20GHz (I made some of them, back in the 1980's.) For higher frequencies, JFET, then MOSFET, IGFET and other types are needed. But yes; Germanium transistors are definitely very limited at higher frequencies, in comparison to Silicon devices. Leakage currents are very problematic, and they lead to high noise levels. It would still be interesting to know how a Germanium MOSFET would perform at somewhat higher frequencies, however.
Great video I found today. I don't know if it's just me, but the video and audio were out of sync.
It's actually a dubbed video, and we didn't have the original voice track without the background music baked in, so yeah, it doesn't look particularly stunning. We'll fix that with new vids. I mean the really new vids; those that are already produced will have to be dubbed the same way, unfortunately.
Dear Ivan. At 9:05, there's one frame where the CMOS construction steps are shrinking, and the diagram there is not translated from Russian.
Please enjoy it!
Very good video, but I have two questions:
- If it took around half a century to reach this performance with silicon/CMOS, doesn't that mean it would take around the same time for other technologies to catch up, or do improvements already discovered accelerate the rest?
- After the huge rise in chip prices in recent years, can we expect that before other technologies catch up, say in the next decades, chip prices will decrease a lot because of the silicon limit, and that meanwhile we will see a comeback of dual-CPU and SLI/CrossFire setups for PCs to keep improving performance?
AMD's chiplet design and Intel's vastly inferior big.LITTLE design are already a form of dual CPUs.
SLI is very unlikely to ever come back due to how frustrating it was for everyone.
For GPUs it's most likely going to be software improvements.
RISC-V is the future.
The next frontier will likely be 3D chips. We already build them with a few layers, especially memory chips, which are dozens of cell layers thick now. We will need to do the same with processors, and also switch to more thermally efficient materials to help offset the increased heat per unit area as a processor gets thicker (building heat-exchange channels into the design itself can also drastically help combat this).
A 3D design also opens up pathways for much more optimized computation methods that a chiplet design does not really lend itself to as it scales. Chiplets primarily communicate with each other through side channels, whereas a 3D chip would simply be one single complex chip of interconnected logic in all directions.
The potential performance gains as we add layers are insane to think about, when you consider how thin a given layer of a processor is. You could have hundreds of thousands of layers eventually. A single processor rivaling the billion-dollar supercomputer warehouses of today.
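A back-of-envelope sketch of why heat is the catch with stacking: at a fixed footprint, the power that has to leave through (roughly) the same surface grows with the number of active layers. All numbers below are illustrative assumptions, not real chip figures:

```python
# Back-of-envelope: stacking logic layers at a fixed footprint multiplies
# the power that must leave through (roughly) the same surface area.
# All numbers are illustrative assumptions, not measurements.

die_area_cm2 = 2.0            # footprint of the die
power_per_layer_w = 50.0      # assumed power of one active logic layer

for layers in (1, 2, 4, 8):
    total_power = layers * power_per_layer_w
    density = total_power / die_area_cm2   # W/cm^2 through the same footprint
    print(f"{layers:2d} layer(s): {total_power:6.1f} W total, "
          f"{density:6.1f} W/cm^2 to dissipate")
```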
da cube chip
Would the AMD 5800X3D be considered a 3D chip?
@@dylanhecker6686 Not really, it's only the cache, which is essentially on-die RAM that is stacked, which was already being done for your SYSRAM too. The core logic is still single layer.
@@invertexyz thanks!
Well we already do have '3D chips'. A processor consists of many layers of circuits stacked on top of each other.
I don't see quantum processors replacing processors, but if they become cheap enough I can definitely see them becoming a new component of the computer. If they do replace anything, it will probably be graphics cards. All in all, we probably won't see quantum computers hit the mass market for decades. What is more likely than most theories is the integration of supercomputers, powered by rarer metals, that use the internet to give you your computer as a service rather than you owning a computer. The system we have will be a lot dumber and will only decode the information that gets sent.
Yeah I feel like cloud computing will be big, but hopefully the next big "breakthrough" will be just moving away from the x86 architecture. This should buy us a couple more decades at least before we have to start replacing silicon, or whatever other new innovation occurs during that time
God, I hope not. I hate this rent mentality.
This video is more informative than I thought. I love the web for channels like this.
"that is, if it continues to exist for us, in this line of events"
that hit hard
Overheating becomes less of an issue with the reduction of transistor size. The only reason modern transistors emit more heat than older models is that manufacturers use up that reduction in heat output by increasing core counts, core frequency, and core complexity (more transistors per core).
That's only happening because transistors no longer use less power when shrunk at the same rate as the volume reduction, which used to be the case. Now a 2x volume reduction only leads to a 20-30% power improvement, which is not enough.
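A toy way to see the consequence of that: if a shrink doubles transistor density but each transistor only uses 20-30% less power (instead of the ~50% that classic Dennard-era scaling delivered), power per unit area climbs with every generation. The figures below are just the ones quoted above, not measurements:

```python
# Toy comparison of ideal "Dennard era" scaling vs. the situation described
# above: a 2x density increase with only a 20-30% per-transistor power drop.
# Numbers are illustrative only.

density_gain = 2.0          # 2x more transistors in the same area

def power_density_ratio(per_transistor_power_drop: float) -> float:
    """How much hotter (per unit area) the new node runs vs. the old one."""
    return density_gain * (1.0 - per_transistor_power_drop)

print("Ideal Dennard-style shrink (50% less power per transistor):",
      f"{power_density_ratio(0.50):.2f}x power density (flat)")
for drop in (0.20, 0.30):
    print(f"Shrink with only {drop:.0%} less power per transistor:",
          f"{power_density_ratio(drop):.2f}x power density")
```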
Good video, but the video and sound are not synced and it's annoying; other than that it's 10/10.
Really good run down of the limits of silicon. Great research and presentation!
Great research? Most of the information given was a vast oversimplification or just outright wrong. I wish there wasn't so much dishonesty in tech channels. Accurately depicting advanced technology is difficult, and it's much easier to make it sound fantastical and get more viewer retention.
I just saw this is a dub channel of another. I didn’t know I was looking for content like this and I love it. I’ve only seen dub channels go to other languages from English so it’s really cool to see that it actually does work!
The biggest problem with quantum computers is the lack of software. Writing code for them is so complicated that it practically excludes 99% (if not more) of the people who currently make their living writing code. That means fewer developers and thus less development, unless coding becomes heavily AI-assisted.
I'd say the biggest problem with them is that their cooling systems are the size of a room but, ya know.
@@timothywilliams8530 Seeing how much of a mess it's been just to get quantum computers to the point they are now, I have doubts about all their pros in general.
@@timothywilliams8530 My mom worked with a computer that took up an entire room before I was born. Her company actually had 2 of those. Size, price, and power consumption do not matter if it has a purpose... a profitable use. Current quantum computers have no use whatsoever... not because they are bad, but because there is no software that would make them better than standard computers. And there is no software because there is only a handful of people who can write some, and they probably call each other by their given names ;-)
I say the biggest problem is they barely exist... (YET). We still count qubits in the hundreds, and their quantum properties don't last at all.
Software is nothing, I promise you; as soon as quantum computers are truly viable, software will be created ASAP.
Software has little to no effect on quantum computing development; big corps like Google are just throwing money at it like it's just around the corner, when it's the furthest thing from that.
Finally, the YT algorithm is doing its job well.
We still have some ways to go with classical architecture. You can also gain some performance by improving how quickly we can access memory and how much of that fast memory we have. There are also architectural adjustments with concepts like RISC, big.LITTLE, V-Cache, new schedulers in OSs, etc., so on both the hardware and software level.
With many new technologies being developed, like quantum, DNA computing, and optical computing, I don't think these will replace classical computers, but with interconnects between them they can work in tandem on the tasks they are each best at.
According to some scientists, quantum computers may never become real, fully capable computers and may instead become at best some kind of accelerator, or stay limited to very specific tasks.
I think just coming up with better architecture and firmware is the easier route for now than finding a replacement for silicon.
Look at it this way: human DNA is about 2 nm wide. Reaching DNA-scale feature sizes would not only be astounding but would also open up the potential for micro-robots. At some point all manufacturers will have no choice but to develop more cores to handle more tasks. In addition, it might become possible to develop cybernetics and simulate human nerves.
I wholeheartedly disagree with the concluding section of this video. It insinuates that the future of computers is hardware reliant on a quantum-cloud-compute backend: devices about as functional as a storage drive with some added connectivity, mostly hollow toys that serve as GUIs. I reject such a plane of existence, even if there is simplicity, or even necessity, in it; such a paradigm shift would basically mark the end of personal computers and of ownership of anything related to computers. Quantum computers may as well exist in their own space and continue to get more sophisticated; they will still require close-to-0-Kelvin temperatures to even function, so an average person cannot carry one such device, let alone run one in a home setting in today's society. I cannot visualise a future in which these devices become so compact, efficient, easy to manage and affordable as to be the de facto standard for computers because somehow, for some godforsaken reason, they are the only way forward. How dreadful.
Ignoring the severe limitations Q-computing has as of today, the prospect of not being able to own a full computer is actual garbage. I'd rather stay with my classical slow piece of junk than having to trust big techs and cloud providers.
Years ago, I did a calculation to determine when the speed and power benefits of process scaling would no longer offset the process variation. I got 5nm for that number as well.
Nice job researching and pulling all these alternatives together.
Keep it simple >~
When it comes to silicon being a pillar of computing, that will never really go away.
Why? Mass computing. Basically, for computing en masse, given that power usage is not a problem
with enough silicon-based photovoltaics, you can still rely mostly on silicon whenever a task
requires so much computing power that the scarcer-resource-based options become prohibitively expensive.
It's like creating a very large computer that can be like a workhorse, while using the more
scarce resource based computing option for small applications.
The workhorse can be used as an off-site computing 'monster' to offload work on.
This may seem rather inefficient, but given that electricity can easily be farmed with solar voltaics,
the power requirements are a lesser drawback, while the mass computing becomes cheaper
over time, and can yet benefit from increased performance.
Once people go into space and build space stations, remote computing this way can be
much cheaper than supplying the more scarce resource based computing options for all/everyone.
The problem is not one about creating a few million small computing options that require scarce
resources (and thus becoming expensive), but once the newer options need to be created
by the billions, yearly.
The amount of mining required to get enough of the scarcer resource would be bad
for the environment in such a fashion that humanity would 'compute' itself to death.
Since, computing to death does not compute, en masse computing with the cheaper
silicon option makes more sense.
Also notice that en masse computing is much more efficient. When a resource isn't used
by someone, someone else may be allocated more computing resource aiding his or her
task to complete more rapidly.
When it comes to remote computing for things like gaming, this may also work well.
Silicon will always stay the best and easiest resource for computing, since it's most abundant.
Also, the new computing options will have their own size limitations, albeit somewhat smaller.
This would be a fixed factor.
4:28 Yes, the cost of fabrication of non silicon processors is much higher due to the scarcity
of the required materials. This will never change. Also, due to the limitation of availability
the newer processors would be fabricated in much smaller quantity, while being much more
expensive. Due to rising cost of scarce materials the cost over time would rise, not become lower.
For regular silicon-based processors, these would become cheaper over time.
Combined with photovoltaics the increased power usage would not be the limiting factor.
13:56 Photonics are best used for transfer of data, not computing. Extreme short relays,
let's say between chip layers and external to chip pathways would be an option,
since these can transfer data really fast, and yet use very little power,
which results in less heat release.
Photons travel "light", so to speak, while electrons travel "heavy"; far less energy is dissipated along the way, which means less heat release.
There's one problem though, the creation of very small light emitters/receivers
and very small fiber optics or such is much harder than a regular circuit.
It's at this point in time at least not very useful. A similar thing is with the wattage per computing.
If the wattage becomes low enough, further reduction of such becomes less useful,
and with that only extreme high end computing purposes would be served by further reduction.
Let's examine an example, where you can have a fully capable processor using a 10 cm by 10 cm
solar panel for continuous usage. What difference would it make if it were then operable
from an 8 x 8 cm solar panel instead, at ten times the cost?
It would mean less power used, yes, by roughly a third (64 cm² vs 100 cm², a 36% reduction), but then the tenfold cost?
So, at some point in time the resource cost will define the computing solution,
rather than its capability, once again referring to the scarcity of materials.
14:46 Photonics will not remove the limitations of distance. Any signal will still be limited to
the speed of light, whether electronic or photonic based. Latency stays the same no matter
which of these mediums are used. Basically, the best way to make processors smaller
and faster, currently, is to provide on-chip memory, with comparatively slow memory being
used for the externals. On chip memory would increase processing by removing much of
the latency of memory instructions, like cache does, but then used on a larger scale,
with only one cache and larger memory area. Then you'd have 16 GB base memory on the chip,
with like 4-8 megs in cache. When a large-scale external reference would occur
(like writing to SSD) the memory controller would copy the on-chip memory to the external one,
subsequently transferring it to SSD, while optimizing the transfer speed, between on-chip
and external memory. On chip the memory could have the same latency as regular cache,
but transfer between the on-chip memory and external (DDR) memory would still be many
times greater than the transfer speed between regular memory and SSD.
Also, when using stuff like virtual memory on an SSD, this could then be replaced by
the external (DDR) memory, increasing computational output per unit by that much as well.
Knowing that most of the CPU's and GPU's time is wasted on waiting for memory return,
you know that this transfer of operations from external memory to on-chip memory
can make a big difference, even when the computational output of the processor
is yet smaller. Solving each bottleneck in turn is also a way to speed up computing,
in some cases making a bigger difference than making it smaller.
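That "waiting on memory" point can be made concrete with the usual average-memory-access-time estimate; the latencies, miss rates, and the large-on-chip-memory design below are assumptions for illustration, not figures for any real part:

```python
# Average memory access time (AMAT) sketch: AMAT = hit_time + miss_rate * miss_penalty.
# Compares a conventional cache + external DRAM setup against a hypothetical
# design where the main working memory sits on-chip at near-cache latency.
# All latencies and miss rates are assumed round numbers for illustration.

def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_time_ns + miss_rate * miss_penalty_ns

# Conventional: small on-chip cache, misses go to external DDR.
conventional = amat(hit_time_ns=1.0, miss_rate=0.05, miss_penalty_ns=80.0)

# Hypothetical: large on-chip memory, misses go to a slower external tier.
on_chip_main = amat(hit_time_ns=2.0, miss_rate=0.001, miss_penalty_ns=200.0)

print(f"cache + external DDR : {conventional:.2f} ns average access")
print(f"large on-chip memory : {on_chip_main:.2f} ns average access")
```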
Ayyy I am your 1000th subscriber! Great video
Ge is used in the military industry, that's why. Plus, gallium arsenide is used in military applications because chips made from that material have the lowest failure rates and extended temperature limits.
First time I've ever seen a channel translated like this, cool
Hey, I really liked the video, nicely balanced and no stale moments. I got it as a random recommendation in autoplay and didn't regret it a BIT XD
Silicon is also relatively easy to work with and process.
Exactly and they’re working on 3nm already
Glad I found this channel, premium content.
Check into computing with Ternary. It is mathematically more efficient than binary and requires just one extra voltage level. This could lead to more efficient and faster CPUs than we have today and some are already being built. All good wishes.
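The usual argument for "ternary is mathematically more efficient" is radix economy: representing numbers up to N takes about log_b(N) digits of b states each, and the product b·log_b(N) is minimized near base e ≈ 2.72, so base 3 edges out base 2. A quick check of that standard calculation:

```python
# Radix economy: "hardware cost" of representing numbers up to N in base b
# is roughly b * ceil(log_b(N)) -- b states per digit, times the digit count.
# The product is minimized near base e (~2.718), which is why base 3 slightly
# beats base 2 on this particular metric.
import math

N = 10**6  # represent values up to one million

for base in (2, 3, 4, 10):
    digits = math.ceil(math.log(N, base))
    cost = base * digits
    print(f"base {base:2d}: {digits:2d} digits x {base} states = cost {cost}")
```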
Thank you. That was very informative with a thought processing progression that was spot on without someone knocking on your door. I've been imagining crystal tech ever since I learned about piezoelectric when I was 5. Magnetic cooling, heating and generators are my insanity. It's all about switches and ball bearings. Space and Storage. I still don't know why they just didn't cut the 0 into an 8??? Cheers
Very informative and explained in very interesting way. Thank you.
Your audio isn't lined up with the video.
Can't wait to get my own silicon chip manufacturing system for cheap in a few decades B)
If I were to guess which one would become the first step in a post-silicon CMOS world, it would probably be a room temperature tunnel field effect transistor.
I'd put my money on silicon/photonic hybrid processors. There are already ASICs out there using photonic processing which are hundreds of times more energy efficient than silicon equivalents. We already have the technology, it's just a matter of integrating them into a full CPU/GPU.
@@SupaKoopaTroopa64 Photons are massless packets of energy, and thus it is very difficult to make them interact with each other or with any piece of matter that we, or our chips, are made of. That's the reason why we still use CMOS transistors with gate delays of 30 picoseconds instead of using light and getting attosecond "gate delays" from "light transistors", if they existed. Everybody is clear on this: a chip built purely from light would be millions of times faster than silicon chips. The reason light is so fast is that it has no mass. But you can't make a flip-flop out of photons. That is why accelerating CMOS chips with analog-style logic (implemented using interference, like some startups are doing right now) is a bad idea. It is like putting an F1 engine on a bicycle and expecting it to do 400 km/h; the bearings of the bicycle wheels will simply melt because they aren't designed for such high RPM (not even talking about aerodynamics here). It would be a good idea for a college student working in his garage, but not for a company that raises 100 million or so for this kind of project. I suggest you learn quantum physics and chip design before you invest. The failure rate of new companies is like 9 out of 10, if not more. Only by understanding the physics of the computation can you make a fair judgement of whether a company is going in the right direction. There are always tons of options to invest in, but only a few of them are real opportunities.
@@absolute___zero I'm aware of these limitations. I'm just saying that production-ready photonics already exist, so they have a head start over many other technologies. Also, I don't plan on investing in any photonics startups, or even anything in the microprocessor industry right now, I was just saying that If I had to make a bet on which of these technologies would first appear in a consumer product, I'd go with photonics.
@@SupaKoopaTroopa64 Photonics of course has a future, but not as co-processors; rather as standalone computing logic with only a little CMOS. But there are many other computing methods that have been developed by scientists and not exploited yet. For example, there are mechanical integrated circuits made of mechanical relays (they achieve 10-nanosecond switching times), there are gold transistors working in vacuum like vacuum tubes, there are DNA computers, and so on... The next big idea will be a new computing paradigm. There is also lots of work to do in software to gain performance, for example the development of massively parallel operating systems.
That's completely true. Thanks so much for online classes. We really need enough resources. 🏭👨🏼💻🎮🤖🏡
Great information, and you are very charismatic; you can give tons of information without it feeling overwhelming.
16:58 Classic mistake: the qubits are not in every state at once; each can only have one state at any given time. They fluctuate really fast, though.
The boundaries between reading digital 1s and 0s limit these states' readings,
while the qubit itself is really an analog computing bit, with the state varying between
0 and x, where x is the highest possible potential contained in said qubit.
18:07 Well, I'd like to see quantum graphics cards, with a nice new Unreal Engine 6 or so, personally.
Nanites + quantum => huge open worlds with extreme distant view capability.
Another problem with germanium: its oxide is water-soluble, making it more difficult to process.
Thank you! We support you.
The silicon era will continue for at least another century m8; anything else will most likely be 10x-1000x more expensive and used in niche applications.
Yeah, until any tech becomes consumer grade and priced like silicon or lower, it won't be a viable replacement anytime within the century.
@@paulssnfuture2752 Which is very hard to do, considering that right now the base material is simply sand and we've gotten extremely good at making perfect crystals of Si; good luck beating either of those things.
I think we will just have more and more silicon around the CPU (more cache, integrated RAM, accelerators, etc.),
then we might get a CPU where one of the more important bits is a different, faster material and the rest stays silicon.
RUclips recommendations at its finest.
Keep up.
This thinking is what we humans do 🤔 and how we achieve it! Just by solving problems one by one. Then we wait for another or more problems and we solve them again, and again. That's the beauty in science and what our world represents.
Thanks. I enjoyed your fascinating journey into the chip realm, exploring the potential future.
Damn man, you need to give credit to Eugene Khutoryanski for using his graphics.
Photonics can also do prossessing via constructive/destructive wave interference
Not really, as it messes with phase information. You can build simple gates, but as soon as you try to do something like an AND gate, either your 1+1 value has a different amplitude from 1+0, or its phase is different. This is fine if you are just throwing it at a detector, but if you try to feed it into another gate built around interference, it is not going to work right. There might be some quantum-mechanical way to align the phases, but the naive approach does not work.
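A tiny numerical illustration of the problem described above, treating each optical signal as an ideal complex amplitude (a simplification for illustration, not a model of any real photonic gate):

```python
# Treat each optical input as a complex amplitude and "compute" by summing them
# (constructive/destructive interference). The trouble: the result for 1+1 has
# a different amplitude than 1+0, so it no longer looks like a clean logic level
# if you feed it into another interference-based stage. Illustration only.
import cmath

ONE = 1.0 + 0.0j   # amplitude representing logic "1"
ZERO = 0.0 + 0.0j  # no light = logic "0"

def interfere(a: complex, b: complex) -> complex:
    return a + b   # ideal lossless superposition of the two fields

for x, y in [(ONE, ONE), (ONE, ZERO), (ZERO, ZERO)]:
    out = interfere(x, y)
    print(f"inputs ({abs(x):.0f},{abs(y):.0f}) -> amplitude {abs(out):.1f}, "
          f"phase {cmath.phase(out):+.2f} rad")
```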
Photonics cannot store memory yet as that would imply being able to trap light for very long periods of time which is still impossible.
processing*
The Japanese have made it possible, and en masse. Smart move abandoning silicon 20 years ago to focus on developing GaN. Hands down!
Even if Moore's law may not apply to Si applications, they are still very useful in solving many of our computational applications at 16 and even 32 nm technologies.
I really think that graphene is the right way to move forward; we don't know how or when, but ultimately it will be the solution.
Only ~800 subs, oO, such high production; nice video man, can't wait to see more!
I think the "next thing" will be photonic cpu or/and ai neuromorphic chiplet, which both are extryemly fast.
The issue is still a silicon as a substrate. I heard that specific "glass" can be used even more better than silicon
You forgot multi-valued transistors: non-binary transistors which can switch between more than 2 values.
A 4-state transistor can emulate 2 binary transistors, without miniaturization. So it can preserve Moore's law without requiring smaller atoms or getting into tunneling troubles.
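The "4-state transistor can emulate 2 binary transistors" point is just base-4 encoding: each 4-level cell carries log2(4) = 2 bits, the same trick multi-level flash cells already use. A minimal packing/unpacking sketch:

```python
# A 4-level cell stores log2(4) = 2 bits, so n such cells hold 2n bits --
# double the density of binary cells without shrinking anything.
# Sketch of packing/unpacking bits into base-4 "cell levels".
import math

LEVELS = 4
BITS_PER_CELL = int(math.log2(LEVELS))   # = 2

def pack(bits: str) -> list[int]:
    """Group a bit string into 4-level cell values (2 bits per cell)."""
    return [int(bits[i:i + BITS_PER_CELL], 2)
            for i in range(0, len(bits), BITS_PER_CELL)]

def unpack(cells: list[int]) -> str:
    return "".join(format(c, f"0{BITS_PER_CELL}b") for c in cells)

data = "10110100"
cells = pack(data)
print(f"{len(data)} bits -> {len(cells)} four-level cells: {cells}")
assert unpack(cells) == data
```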
Is it me, or is the audio not synced to the video?
Cool, but the audio is badly skewed relative to the picture in this video.
There's good news coming out of Georgia Tech this week about silicon epigraphene transistors, compatible with current manufacturing methods 👍
The statement made that a Field effect transistor cannot be made under 5nm is not true. TSMC is actually making 2nm transistors that are GAA (gate all around). It’s not the traditional FET but still a FET.
Superb man👍💐👌♥️ #Mycomputer
Hmmm... what about computers in space, at impressively (and conveniently?) cold temperatures? What performance potential?
I was watching how atoms are being controlled and manipulated on 2D materials. That would suggest we have a ways to go regarding Moore's Law's future.
Re: "silicon being near the end of its life cycle," this is just sensationalistic journalism (or an uneducated remark). It's not going anywhere in our lifetimes due to the deeply entrenced and highly refined manufacturing process. More advanced materials will have a much higher and impractical cost of scale. The most certain outcome is slowly more advanced materials will be used in conjunction with silicon, but it isn't anywhere near ending its life cycle, not for a hundred years or more. We can even do photonic and quantum compute on silicon.
Actually, a silicon-graphene polymer has already shown up to 60% improvement in the polarity & movement of electrons,
so maybe silicon will move to that.
Awesome video, thanks, from India.
Goood video man, best wishes from Switzerland
Good video, thanks. But I still don't see any clear successor (or combination) within 10 years.
Silicon still has a way to go before it's phased out. The key is better packaging (things like 3D stacking).
3D stacking will exacerbate the heat-dissipation problem, though, so it is not a valid solution.
0:20 Credit for obtaining the first pure silicon goes to Swedish chemist Jöns Jacob Berzelius. He succeeded in this in the year 1823. 4:20 Germanium semiconductors have a maximum operating temperature of up to 70°C, which is much lower than silicon semiconductors.
Another direction we could take to fundamentally change how computers work in order to increase efficiency is reversible computing. The laws of thermodynamics give us a minimal possible energy cost per bit of computation, which we are slowly approaching. However, there is a workaround, since the actual cost is in erasing information. So long as the computation does not erase information, there is no theoretical lower limit on energy use.
With technology based on this, it might be possible to build 3D chips without having to worry about the heat problem at all. That said, it's at least as fundamentally different from how computers function today as quantum computers are, so it will be very difficult to transition, and we will still need components that actually erase information in many cases.
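The thermodynamic floor mentioned above is the Landauer limit: erasing one bit costs at least kB·T·ln 2. A quick calculation of that number (the "present-day switching energy" used for comparison is a rough assumed order of magnitude, not a measured figure):

```python
# Landauer limit: minimum energy to erase one bit is k_B * T * ln(2).
# Reversible (information-preserving) logic is not bound by this floor.
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K

landauer_j = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j:.2e} J per erased bit")

# Rough, assumed figure for a modern logic operation (order of magnitude only):
assumed_switch_energy_j = 1e-16
print(f"Assumed present-day switching energy: {assumed_switch_energy_j:.0e} J "
      f"(~{assumed_switch_energy_j / landauer_j:,.0f}x above the limit)")
```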
In my opinion they need a new material. A material that has silicon's properties but doesn't have this limitation. Some kind of super-alloy perhaps?
I'm still a bit confused by how qubits are so powerful. Yes, having 3 values is better than having 2, but analog systems can do practically infinite values, since you can read voltages or amperages from 0 to however high a voltage or amperage your system can handle. The main limitation was that the loss of current varied greatly and made it unreliable, but couldn't that be fixed by superconductors? We already use superconductivity in quantum computers, so it makes no sense to me to settle for 3 when you can have practically limitless numbers in one memory unit.
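For what it's worth, the textbook answer to this question isn't about more levels per memory unit: describing n qubits classically takes 2^n complex amplitudes, and it is that exponentially large state space (plus interference between amplitudes) that quantum algorithms exploit, which a single analog voltage per wire doesn't give you. A toy illustration of how fast that description grows, assuming the standard state-vector picture:

```python
# The standard (textbook) picture: the joint state of n qubits is a vector of
# 2**n complex amplitudes. An analog signal, however precise, is still just one
# value per wire; it doesn't give you this exponentially large state space.
COMPLEX_BYTES = 16  # one complex number stored as two 8-byte floats

for n in (1, 10, 30, 50):
    amplitudes = 2 ** n
    mem_bytes = amplitudes * COMPLEX_BYTES
    print(f"{n:2d} qubits -> 2^{n} = {amplitudes:.3e} amplitudes "
          f"(~{mem_bytes / 1e9:.3g} GB to store classically)")
```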
Oh I think you've missed quite a bit;
Photonic computers are not just transmitting data with light; they are CPUs, and very powerful ones at that.
Memristors do not just closely simulate neurons; they are physically closer to logic gates than transistors are.
Tunneling, as I understand it, should be independent of material properties like resistance and conduction. Even with a very high energy barrier, tunneling can still get through. How can changing materials, especially to larger atoms, help this situation?
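A hedged, back-of-envelope way to see why material choice matters here: tunneling through a finite barrier falls off exponentially with the barrier's width and height and with the carrier's effective mass, and the latter two are material properties. A rough WKB-style estimate (all parameter values below are illustrative, not real device numbers):

```python
# WKB-style estimate of tunneling through a rectangular barrier:
#   T ~ exp(-2 * kappa * d),  kappa = sqrt(2 * m_eff * (V - E)) / hbar
# Barrier height (V - E) and effective mass m_eff depend on the material,
# which is how a material change affects leakage. Parameters are illustrative.
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
EV = 1.602176634e-19        # joules per eV

def transmission(barrier_ev: float, width_nm: float, m_eff_ratio: float) -> float:
    kappa = math.sqrt(2 * m_eff_ratio * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Thinner barriers leak exponentially more...
for width in (3.0, 2.0, 1.0):
    print(f"width {width:.0f} nm, 1.0 eV barrier, m_eff=0.2: "
          f"T ~ {transmission(1.0, width, 0.2):.1e}")

# ...and a heavier effective mass (a material property) suppresses tunneling.
for m_eff in (0.1, 0.2, 0.5):
    print(f"width 1 nm, 1.0 eV barrier, m_eff={m_eff}: "
          f"T ~ {transmission(1.0, 1.0, m_eff):.1e}")
```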
I thought I was having a break in my consciousness when the mouth didn't match the sound.
During the video I was thinking: what if we made a superconducting chip?
Thank you, I hadn't seen any news about memristors for about 8 years, so I stopped looking for updates. It is great to hear there is finally some progress.
The main problem with memristors is the success of and investment in NAND flash memory.
The advent of 3D multi layer NAND devices allowed an otherwise dying technology a new lease on life in the middle of the last decade, and it is still pushing forward.
Even though ReRAM / memristors would be far superior in power consumption, latency and speed they are still leagues behind NAND flash in density even at the 2D level, and at the 3D level it's not even worth mentioning.
Weebit is making headway, but it will be years yet before we see anything truly commercial come out of it.
Thanks sir. I never knew this information... Amazing developments..
as an old viewer of yours, i'd like to wish u luck with ur new channel :D
Very GOOD !!!!
It makes more sense to create a quantum chip instead of a whole computer. Cheap operations use the standard CPUs and GPUs, then the quantum chip takes the more complex queries.
Umm, the death of silicon as a base for ICs is GREATLY overstated.
There is this thought about some END coming that just fails to think clearly about what will happen. So let's evaluate some timelines.
Sure, silicon is going to hit a limit, but so would any other material. If you want an IC based on on/off transistors, there is only so small you can make the transistors, but once you hit that theoretical size, is that the death of silicon? The answer is no.
You can measure CPU performance in multiple ways. One is raw performance. Raw performance is based on work done in a certain amount of time, and you use multiple types of applications to calculate it. If I want a render machine, do I need a CPU that's great at gaming? The answer is no. I need something that's great at stream processing, and that could be a combination of CPUs and GPUs and accelerators, or a computer that integrates all three in a single circuit to run any type of stream-processing program. Maybe with the accelerators I can program it first before running the job, so the first part of the render application checks for an FPGA and, if it exists, programs the accelerators before running the render tasks.
Next, at the beginning, which is as far as I got because these kinds of videos are silly, the talk is on chiplets and IPC (instructions per clock cycle). What was skipped is making a process node much more efficient at running higher speeds. There's no law that says a CPU can't be clocked over 6GHz. In fact I think over the next 5 years TSMC and Intel will both have process nodes that run 6 - 6.5GHz, and it won't be at very high power consumption.
Also what isn't mentioned is compute power in 3 dimensional space. We're getting to the point to where ICs can have multiple layers, and this is for different reasons, one being the shrinking of transistor size and every shrink usually comes with power reduction. I think it's safe to say that CPU design is going to involve 3D layouts and this will involve layering in such a way to where one layer isn't producing so much heat and the other layer can, so planning out the layout of circuits will become more complex.
But here is the REAL meat and potatoes of this. Home users already have access to so much compute power it's crazy. A home user can now do with large servers had to do 10 years ago, and in another 5 years the amount of power that can exist in a laptop is competing with servers from a decade ago. The move to 2nm or Intel 20A which are the same thing is SO monumental that for home users there's no need to get past that. The issue won't be not having enough compute power, but having too much with rare exception. One area of growth is PC gaming. Graphics compute still has to improve to get high quality 4K gaming. But, we're there NOW. The GPUs that AMD and Nvidia are releasing right now are good enough for high quality 4K gaming, and this is using a 5nm node or modified 5nm node (TSMC N4). When semi-conductor companies get to 2nm the issue won't be not having enough transistors anymore. It will be hard to keep adding more compute units (graphics processors) and in fact for Nvidia GPUs they're using multiple types of compute in a single GPU, and that's basically the raw compute for making the image on the screen, a lighting and shading technique used for games called ray tracing for there are RT cores, and then the cores for AI, tensor cores. 2nm is SO small with SO MUCH transistor density that home computing just never needs to move to anything else, but there will be other nodes that come out with better transistor density, so like 18, 16A, etc.... and this can probably go all the way down to 10A. By this time we're about 2 decades from now.
The REAL issue isn't going to be transistor density. It WILL be about making machines smarter, to where they can do more with the same number of transistors. IPC isn't a gimmick. It's how you get more performance in a fixed time period from a CPU over an older CPU at the same clock speed.
So, there are many tools for improving compute power, and they are NOT gimmicks to hide some flaw. It's PART of better engineering for computing.
One is more processor cores. For home computing though you hit a limit, because most programs home users run don't have that many processing threads, and adding ever more cores is a total waste. For servers, on the other hand, there are many types of server compute loads that can benefit from a LOT more cores.
Another is clock speed. But this has to be solved while at the same time keeping power consumption down. THIS is a real issue and it's hard to solve, but it IS getting solved slowly, but every process node is different so you have to solve that problem for every process node.
Another is instructions per clock (IPC). This also improves over time and one of the ways to solve this is simply to have more cache so the cores have more data and instructions next to the cores instead of having to access system memory to get data/instructions. But there are other ways to improve IPC too.
Another is making better processors that can accelerate workloads. AMD, Intel and Nvidia are all working on this, and even Apple has done excellent work with acceleration and that's part of what makes the Apple M1 and M2 processors so fast and NOT the fact that it's based on ARM instead of X86-64.
Another is being able to stack transistors which shrinks the area needed for the same circuit.
Another IS chiplets. This helps with heat issues.
The point is, yes silicon will hit a limit for transistor density for a 2 dimensional area, but that's a single problem, not THE problem. Solving the other problems I listed is more important.
And lastly we can talk about how much silicon is needed. Well, as process nodes shrink transistor size over time, a single wafer can make a lot more ICs, so we are always improving the number of circuits that come from a fixed amount of silicon. It's not an issue, at least not for another 50 years or so.
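For what it's worth, the IPC point above is just the classic "iron law" of CPU performance: execution time = instructions × CPI / clock frequency, so gains can come from either IPC or clock speed. A toy sketch with made-up workload numbers:

```python
# Classic iron-law estimate: execution_time = instruction_count * CPI / clock_hz,
# i.e. performance scales with IPC (1/CPI) times clock frequency.
# The workload size and per-core figures below are made-up illustrations.

INSTRUCTIONS = 2e11   # instructions in some hypothetical workload

def runtime_s(ipc: float, clock_ghz: float) -> float:
    return INSTRUCTIONS / (ipc * clock_ghz * 1e9)

baseline = runtime_s(ipc=1.5, clock_ghz=4.0)
more_ipc = runtime_s(ipc=2.0, clock_ghz=4.0)   # better IPC, same clock
more_ghz = runtime_s(ipc=1.5, clock_ghz=6.0)   # same IPC, higher clock

print(f"baseline (IPC 1.5 @ 4 GHz): {baseline:.1f} s")
print(f"IPC 2.0 @ 4 GHz          : {more_ipc:.1f} s ({baseline/more_ipc:.2f}x faster)")
print(f"IPC 1.5 @ 6 GHz          : {more_ghz:.1f} s ({baseline/more_ghz:.2f}x faster)")
```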
How does one remember which position all those switches are in ?!? Either On or Off. Well that is what computer glitches are, the chip is forgetting where, what, who and when the switches are doing at any given time. Software errors can multiply these error glitches.😮😊
Why not switch to hematite transistors, or
silicon carbide, gallium nitride, or even synthetic diamonds, while also combining either option with carbon nanotubes, & optical computing?
If 5nm is the minimum size how does Apple’s 3nm M4 work?
Good new channel subbed. Well done.
Silicone has so many uses. Pretty soon humans will completely turn into their favorite element.
Interesting; keep an eye out for biphenylene, I believe it will have a wider application field than graphene.
I worked on photonic semiconductors at Berkeley; that's the next wave for chips for sure. If y'all are interested, look it up.
Germanane has more potential in the power transistor market, as the higher conductivity will produce less heat, allowing for very small high power transistors that produce little heat, and therefore don't need heat sinking.
Researchers at the University of Groningen in the Netherlands and the University of Ioannina in Greece have reported on the first field-effect transistor fabricated with germanane, highlighting its promising electronic and optoelectronic properties.[5][6] Germanane FETs show transport in both electron- and hole-doped regimes, with on/off current ratios of up to 10^5 (10^4) and carrier mobilities of 150 cm²/(V·s) (70 cm²/(V·s)) at 77 K (room temperature). A significant enhancement of the device conductivity under illumination with a 650 nm red laser is observed.
Lol, I'm Greek and just read about germanane on Wikipedia. It seems like a promising crystal, but the thing is how much work it needs to make billions of trillions of transistors with it, and how they will price it in the final products; you know, for every product you pay for, every penny goes to labor, research, branding, production costs, etc. But it's very promising, and I hope my brethren there find more discoveries with it.
Integrated circuits are actually manufactured from high purity quartz, not sand, as we’ve been told
Great video. Thanks for posting.