Imagine spending all this effort producing this video and it being completely useless at explaining even the basic concepts of quantum computing to the masses. This is a very real problem. Just this morning I was listening to an NPR report on this new chip where the hosts were literally laughing and saying that they have no idea what this is about. This is not a problem of "us" not understanding. This is a problem of the creators of this technology not caring one bit to make these concepts approachable.
Ion-trapping is much better on error rates, though we're talking 98% vs 99.9% error-free methods. Also, the fundamental advantage of a QPU over a CPU/GPU is the ability to model quantum physics (i.e. chemistry). Any other algorithm can technically be programmed using classical polynomial techniques. That's my rudimentary understanding.
This chip has only 105 qubits; millions of qubits are likely needed. Estimates range from 13 million to 317 million qubits to break SHA-256 within a reasonable timeframe (hours or a day). So while the theoretical number of logical qubits needed is known, the technology to break SHA-256 is still far from practical reality. This is why SHA-256 is still considered quantum-resistant for the foreseeable future, though research continues in this area.
I also encourage everyone to read the paper they have linked. They describe logical qubits for memory, not compute; this is an important distinction. However, it's still a really exciting breakthrough!
0:45 you mean "Director of Quantum Hardware", not "Director of Hardware"... you know, quantum mechanics requires precise definitions... You failed after 5 seconds....
Having hardware that operates at a few Kelvin I think warrants the qualifier. Definitions are a human thing only. Quantum "AI" is the misnomer, all their code is still heavily written by hand, not trained. When you're trying to build a processor which can't even factor large numbers yet, "AI" is laughable. I'm glad he didn't let that be in his title.
Finally, a computer that understands what it's like to be simultaneously working and procrastinating! Now if only it could explain why the office printer exists in all possible states except 'functioning.' 🌌💻
I'm glad to be living in the time at the dawn of the rise of the machines we always read about or see in movies. To be at the front seat of classics such as the birth of Skynet, The Robot Wars, The Thinking Machines, The next Extinction Level Event etc. I'm here for it.
how many actual logical qubits? 105 physical qubits doesn't say anything, since you need several redundant qubits to make up for errors. Like, 1 logical qubit needs 6 additional qubits?
This will change life as we know it. Largest technological leap in the history of mankind. I don't think people understand just how big of a deal this really is.
Can't wait for a @Fireship video of this so I can understand what it means
Same
😂😂
Lmao
🔥 Google just dropped a QUANTUM BOMB! Their new "Willow" chip solves the 30-year error correction problem by HALVING errors as they add more qubits (wild, right?). It casually did a 5-MINUTE calculation that would take our fastest supercomputer 10 SEPTILLION YEARS (longer than the universe has existed! 🤯). No, this won't make ChatGPT superintelligent - it's for quantum-specific problems like drug discovery and battery tech. But that's quantum supremacy speedrun any%! 🚀 - Fireship probably
I can't understand anything unless it's expressed as a meme.
I just laughed when I heard “Google Quantum AI”, they really could not resist putting “AI” there 😂
Yep, and I don't think they actually referred to AI other than in the name :( just got to add it for the "algorithm"
Yeah - seems like they're using AI to help with error correction, so they're not lying. Yay buzzwords.
As predicted by Sabine Hossenfelder
If they manage to put neural network weights in superposition, they can optimize models instantly. This will break nature.
@@jmpelle any studies or discussions on this? would like to learn more
For anyone wondering… Random Circuit Sampling is basically a useless problem designed to be hard for classical and possible for quantum. It’s a kind of benchmark for assessing quantum computers. But it has no practical application at all.
The other thing is that they demonstrated logical qubits for memory, not compute. But still, it is quite exciting tbh
not defending them but, most of the innovations that happened in the past were not particularly useful in that time but very much useful in the current time.
I wasn't wondering but ok.
I solved it years ago, the answer is 7
I don't think that should matter. They're looking for a problem with a huge search space and no efficient classical algorithm.
sweet cant wait to put this inside my turbo encabulator to prevent sidefumbling from the ambifacient lunar waneshaft
It’s like I’ve always said “if you can prevent sidefumbling you can take over the world”
yes
Seriously? I’m not the only one experiencing this? Wow!
Glad I'm not the only one 😂
You are fortunate; my Retro-Encabulator doesn't have the front retention port required for this upgrade. The best I can do is bypass the ambifacient lunar waneshaft using a modified Wankel waneshaft. This seems like the best solution I have found so far on the Rockwell help forums. Sadly it doesn't completely prevent the sidefumbling, but it does of course negate the need for an $800 million upgrade.
Looks like one of those ads used in post-apocalyptic movies to explain how the downfall of humanity started.
I was definitely thinking "Horizon Zero Dawn" lmao.
I can not unsee this anymore.
Yes I told my brother the exact same thing after the first 1 min. 😂
Hahahaha, good one!!! However, I saw some guys I respect talking in one of my FB stock groups saying that if this is an accurate announcement it'll be a death knell for crypto, and it's the reason for the big downturn in crypto. I'm not smart enough to know either way, but I'm tempted to buy some GOOG and try to figure out a way to short crypto... 😛
reminds me of the shoe-lady from Snowpiercer 💀
But... can it play Doom?
Yes in multiverse mode😅
I don't think so😭🤣
Yes, but not Crysis!
yes, but now we can run a fully AI version of Doom at 1,000 fps (i.e. text-prompt any version of Doom you want). Hey Willow: can I get a Doom game with Simpsons vs Family Guy? (can do)
It can play D(o)x10^25m
I have no idea what any of this means but it sounds exciting
lol right there with you. My dopamine reserves were fried right before watching this as I've been consuming all morning, so i'm just here like a zombie consuming even more.
That's the only thing quantum computing delivered so far: excitement
@@bartlx I fell asleep within 8 secs of this guy talking.
@@bartlx Not exactly. You can look at the papers that have used quantum computers to simulate various things, so it has done more than just excitement. People just have an overstated expectation of what it can do.
You may think papers exist like you describe, but you most likely have never read them, or are misunderstanding them.
If quantum computers were real you would have mentioned a task they have accomplished instead of hand waving at alleged papers.@@theemperorofmankind3739
Waiting for IBM to point out all the errors in this video
Right 😂
you think that a hardware director doesn't know his own stuff?
@@ondrazposukieHe still refers to their previous result of a Quantum Computer "beating" a supercomputer at the start of the video, which had been thoroughly disproven.
cant wait for IBM to make another 'AI service' that's actually powered by a few thousand people in India pulling tickets from Jira really fast
@@gunaysoni6792 i think they are looking for their next funding round
"Let's nerd out for a minuite..." as if the last five minutes had been in English. What a time to be alive!
All I know is my calls are printing, so keep up the good work!
The thing is that quantum computing, because it's not deterministic, is amazing only at certain things where a probability is good enough. That's awesome for problems too big for classical computers, but we need both. For instance, I don't think you want a quantum computer estimating the probability that you have money in your bank account. You want to know the exact value. Also, we're going to need wayyyy more than 100 qubits. Some estimate that transformative quantum computing applications will need millions. We're still very much at the experimental phase of this journey. Still, I can't wait to see what we'll learn with a 100,000-qubit quantum computer one day!
Aren't probabilities good enough for neural networks, for example?
Look up pilot waves, all quantum effects are deterministic.
@@henlofren7321 oh yea do you determine a lot of quantum effects now that you've looked that up?
From what I understand, quantum as it stands right now is not good for general computing use due to its non-deterministic nature. But quantum computing is VERY good at guess-work, such as guessing passwords, guessing decryption keys, or statistics? What are your thoughts, @toddpeterson5904
@@rumfordc Deterministically, of course.
Finally gonna indeed prove the ultimate answer is 42.
So long and thanks for all the fish 🐟
Man computer engineers have one joke and this is it. Just awful
😂😂😂
So long and thanks for all the fish
Bring a towel.
Septillions of years of processing Vs 5 minutes, wow. Insane.
Only for specific types of processing, keep in mind
@jakebrowning2373 yes of course but even then being able to improve even a single equation to this degree is impressive.
@@Samsonfs the benchmark they improved (RCS) is a quantum-specific benchmark. It is really nonsensical to even compare it to a classical computer, which wouldn't be solving the same problem but would literally have to simulate a quantum computer to pretend. It's like saying a worm wins the "be a worm" benchmark, and a classical computer would have to simulate every atom of the worm's body to compete... lame! It has zero real-world value or utility, and mirrors no real-world problem beyond... quantum computers.
This is the problem with quantum computers. I'll happily be proven wrong, but thus far they have zero practical value, and it has regressed to the point where the cheerleading is about useless benchmarks.
I pay attention to every quantum computer improvement, and it remains surprising that still we're hyping the imaginary and providing "what ifs". At some point someone has to demonstrate a quantum computer doing something, anything, with a practical value.
@@DennisForbes not reading a 10,000 word dissertation on why you are a virgin
@@DennisForbes Google are certainly good at cherry-picking statistics, and this is a good example of that, because the number is astronomical, which makes it seem crazy good. I don't remotely understand quantum computing, but let's compare apples to apples...
it's like when you hire a consultant for a project but for the presentation they use technical language you don't even understand
I work in CS/AI and I feel like a kindergarten kid. The whole talk is blank to me; I don't know what he is talking about.
This is the new Turbo Encabulator
@@TheViettan28 If you'd like to learn more about the quantum world and its applications as a comp sci student, I'd recommend the book by Sean Carroll called 'Something Deeply Hidden'. It gives you a great understanding and starts from the beginning, so there's no prior knowledge required.
@@TheViettan28 He doesn't either; the benchmark was selected so that it only fits quantum computers, and he doesn't even know it.
@@TheViettan28 Because quantum computing is more physics than CS/AI. It's like a biologist watching a chemistry video.
4:58 I lost it 😂 “Let’s nerd out for a minute”.. dude you’ve been nerding out for 4 minutes and 58 seconds already lol. Also, this is wild 🤯
😂😂😂😂😂😂😂😂😂😂😂
I actually laughed out loud,
The announcement blog post casually includes this mind blowing sentence: "It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch."
Multiverse is simply an interpretation of QM. ATM it is non-testable, so not a scientific theory. It has even fallen out of favor amongst physicists in recent years. Definitely just marketing hype.
You guys gotta give us a tour of what the cooling and electrical systems look like for where these things are running. Please!!!
Two mice on a treadmill and 40 bucks worth of obsolete Radio Shack components.
Zero Kelvin dude! :)
I imagine it would significantly reduce the need for cooling and electricity. I am sure it wouldn't be as much better than classical computing as that statement about 5 minutes vs 10^25 years suggests, but it will be good enough to increase efficiency by orders of magnitude.
Unrelated, but:
It would be interesting to actually see how the noise reduction technique used for Willow can apply to LIGO (Laser Interferometric Gravitational Wave Observatory).
Quantum computers and LIGO detectors have some types of noises in common:
1. Thermal noise (thermal fluctuation)
2. Quantum noise.
3. External vibrations
Etc...
Even earth's magnetic field can affect both.
I kinda believe the technique can be applied in both places; however, the limitation is in scale.
The LIGO detector has 2 arms, each of which is 4 km long. That's quite the distance.
The idea is that when gravitational waves strike, since they expand and contract space, they would make one arm shrink (or expand) and the other expand (or shrink) by infinitesimal fractions of a distance. This slight difference causes the lasers in them to travel and hit their target reflectors at different times.
Curious about Willow's tunability.....
@gus473 yeah, that'll also be pretty interesting to know
I learned a paper was released for this, I'll try and check it out
Fascinating. Thank you for expanding my research points.
1. Thermal noise (thermal fluctuation)
2. Quantum noise.
3. External vibrations
This applies to all electromagnetic signals though
@@teekanne15 yes indeed
The fact that you now can scale the number of qubits and error correct them must be the breakthrough we've been waiting for.
The breakthrough we DIDN'T KNOW we were waiting for! 🤯
It is certainly exciting! The major problem with modern quantum hardware is the error rate. Of course, there are quantum error correction (QEC) codes, but they are based on redundancy, and thus become ineffective if all qubits are prone to a high error-rate. This seems to be a step forward, but I would like to see a more in-depth analysis of the fault tolerance of this chip.
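To see why redundancy-based QEC only helps when the physical error rate is low enough, here is a classical toy model: a bit-flip repetition code with majority voting, not the surface code Willow actually uses, and the 1% error rate is just an illustrative number.

```python
import random

def logical_error_rate(p: float, copies: int, trials: int = 200_000) -> float:
    """Toy repetition code: each of `copies` redundant bits flips
    independently with probability p; majority vote decodes."""
    failures = sum(
        sum(random.random() < p for _ in range(copies)) > copies // 2
        for _ in range(trials)
    )
    return failures / trials

# With the physical error rate below the code's threshold, each step up in
# redundancy (1 -> 3 -> 5 -> 7, mirroring Willow's 3 -> 5 -> 7 distances)
# suppresses the logical error rate multiplicatively.
for n in (1, 3, 5, 7):
    print(f"copies={n}: logical error rate ~ {logical_error_rate(0.01, n):.6f}")
```

The "errors halve as you add more qubits" behaviour described in the video is the quantum analogue of this below-threshold suppression.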
@@loicbavuidi1647 i mean it's been the known bottleneck for like, decades, so yes we did know we were waiting for it
no. quantum computers are literally useless
Can you explain what this means?
if this is the base level of AI, I can't imagine what AI will be in 30-50 years. It's unimaginable.
I've heard that random circuit sampling (RCS) isn't useful in everyday life at all, and a lot of this talk went over my head, but still, I'm stoked to hear about it! For all the issues Google has, like censorship and ads, I'm really glad there are teams inside the company that are pioneering these breakthroughs. Well done, Julian and everyone else on the team!
There is something called reducibility in complexity theory: every problem in NP reduces to any NP-hard problem, so if we efficiently solved one NP-hard problem, we would get efficient solutions to every other problem in NP.
@@vishalcseiitghy I understand (a bit) about the complexity classes and NP hard problems being on par with each other. I'm just not sure about quantum gates and circuits, how errors are calculated, or how classical hardware can try to simulate quantum hardware.
Pretty sure this will be used to Improve censorship and ads--and to administer an electric shock to any offending memes targeting democrats.
@@stevebrown1131 lol, most likely 😆
We got quantum ship ad before gta6😭
because gta6 will be in the metaverse
Someone said trailer 2 will drop on 17 dec
@yougotoptions2 you are smoking
So, the Quantum SHIP is a ship and a chip at the same time?!?! xD
@@Drocoh we will see.
Apparently this guy's voice is similar to my own...it activated my "okay google"
Scripted******
Nah but mine too.
But he never said "okay google"? That's weird. I was watching a video on my phone where the guy said "hey Siri" and it activated the Siri on my Mac.
might be your voice. AI can do that.
Same lmao
No way this wasn't scripted, happened to mine too, followed by an explanation of what is Quantum AI (mic picked up the full sentence)
Hey, I am an undergrad electronics engineer.....
Now I think the professors will increase our syllabus
nope, universities aren't that fast in updating curriculums. They might just add a new elective, "Intro to Quantum Computing", which will be optional
lmao rip
@@FootClob My uni was pretty quick to change the course requirements once AI became really relevant in my field
@@maxsmith4234 Yeah, but this quantum computing chip is very new and we still didn't see any direct applications of it. AI has been around since before 2000, and undergrad curriculums didn't care about it until we got enough computational power to do something at a large scale with it
@@FootClob lmao there will be no university what're u talking about xd
*The 2 really emerging questions:*
1- Is it the end of crypto for us? All secured data, now and from before? For simple users?
2- Does it also spell the end of "crypto" currency?
This has to be sensational for the whole world if these outcomes are obvious...
They will manage to break the Bitcoin SHA-256 algorithm by 2031, according to Moore's Law
Are we rekt then with this chip? Is our privacy over? No, far from it. To actually solve the RSA or ECDSA problems behind crypto and digital signatures, the estimates say that at least 20 million physical qubits are needed (4096 logical qubits to crack the 2048-bit key numbers we're currently using in our daily computers, which translates to a lot more physical qubits with error correction, like 20 million, crazy stuff). This quantum chip has only 105 physical qubits, far from the 20,000,000 needed.
Quantum computing is a long journey. Our bank accounts, digital identities and bitcoins are safe (for now). Unless you expose your private keys and passwords which is easier than computing the hard problems by brute-force. So don't expose your private keys to strangers, keep them private. And for the future, quantum cryptography is already an area of investigation for the quantum future when those algorithms will become cracked and obsolete.
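To put those numbers side by side, here is the overhead arithmetic as a quick sketch; all figures are the estimates quoted above, not measured values, and the per-logical-qubit overhead is an assumed round number chosen to match the 20-million total.

```python
# Rough scale check, using only the estimates quoted above.
logical_qubits_needed = 4096      # ~2 logical qubits per bit of a 2048-bit RSA key
physical_per_logical = 5000       # assumed error-correction overhead per logical qubit
physical_needed = logical_qubits_needed * physical_per_logical

willow_qubits = 105
print(f"physical qubits needed: ~{physical_needed:,}")           # ~20,480,000
print(f"gap vs Willow: ~{physical_needed // willow_qubits:,}x")  # ~195,000x
```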
@@Carleslc Thank you for your answer. I wouldn't have imagined that 20M qubits are needed to crack today's encryption algorithms. That's relieving. On the other hand, our end-to-end encrypted messages, or leaked encrypted government data, could be decrypted once that kind of computing is available to some other governments, as long as the data has been stored. That is another topic. Lots of sensitive data is stored in conventional ways and they are using black lines which is exposed.
@@emremutlu44 That, I think, is the most important security issue. Not that RSA and ECDSA will eventually become obsolete due to quantum computers, because we will just switch to another quantum-secure algorithm (although this requires developers to migrate and upgrade system security, like changing Bitcoin's security method), but the current data retention: although encrypted, it may eventually become decrypted. If that data is still relevant when quantum supremacy arrives, then things will happen. But for everyday users I don't think it will be a major issue, as the encryption algorithms will by then be quantum-secure, so as long as users change passwords from time to time, and likewise with crypto wallets, it will be OK.
@@emremutlu44 People are already working on PQC (post-quantum cryptography)
The previous video I watched before this was a SORA demonstration review. So I can't tell if I can trust this video because this video is astounding.
Haha, same here. These two technologies will change our world profoundly. 💀
Willow >>>>>> GPT o1, sora etc ...
Oh 😮! You are updating yourself with the latest technologies, wow, great
Waiting for some useful NP problems.
Superb achievement 🎉🎉👏👏👏.
Great work google
Well done Google, a pretty good presentation, with awareness, and with science and graphs understandable even for non-native English speakers like me.
Wish all the best to all involved in such amazing endeavors!
Reminds me of another video. "The original machine had a base-plate of prefabulated aluminite, surmounted by a malleable logarithmic casing in such a way that the two main spurving bearings were in a direct line with the pentametric fan. The latter consisted simply of six hydrocoptic marzlevanes, so fitted to the ambifacient lunar waneshaft that side fumbling was effectively prevented. The main winding was of the normal lotus-o-delta type placed in panendermic semi-bovoid slots in the stator, every seventh conductor being connected by a non-reversible tremie pipe to the differential girdlespring on the "up" end of the grammeters"
Hahahaha I think I know that video as well. I cannot remember the title because it was confusing as well. The man in front of the control panel wall thing with some machinery.
Just to be clear... while media outlets are reporting that Willow has performed a 5-minute computation that outclasses any classical computer, the presenter says, "By our best estimates, a calculation that takes Willow under 5 minutes would take the fastest supercomputer 10^25 years", making it obvious that no 5-minute quantum computation has been performed.
So the presenter is claiming that progress has been made, but not showing any actual proof of a groundbreaking computation? A 5-fold increase in coherence may be large, but 100 microseconds of coherence is still 3 million times short of the 5 minute computation they 'plan' on doing.
I'm wondering if Google Quantum AI simply went into another funding round and this is how they try to convince shareholders.
it looks more like a gimmick to get Google investors to invest more! 🙂
Not sure how Google plans to stay competitive now, considering most search (Q&As) has moved away to AIs like OpenAI, Perplexity, etc. The best things they have are Gmail and Google Photos, which are used by billions of users; they should focus more on making those innovative..
From what I understand, Willow did do the computation in under 5 minutes. They are estimating a classical computer's runtime because actually running for 10^25 years is not possible.
You can know how long a task will take without running it. You know how many computations the machine does per second, and you divide the total number of possible computations by that rate; that tells you how long it would take to process all possibilities. Very simple
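A back-of-the-envelope sketch of that arithmetic; the machine speed and key size here are assumptions purely for illustration, not figures from the video.

```python
# Estimated worst-case time = size of search space / checks per second.
checks_per_second = 1e18            # assumed: a generous exascale classical machine
search_space = 2 ** 128             # e.g. exhausting a 128-bit key space

seconds = search_space / checks_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.2e} years")        # ~1.1e13 years, with no computation ever run
```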
@@Jack-gl2xw If I understand it correctly and the presenter is being truthful, he explained from 1:49 - 2:10 that the qubits in Willow decohere after 100 microseconds. That computation would have lasted 0.0001 seconds. A 5-fold increase over the previous model, but still a long shot away from a continuous 5-minute computation.
That explains why the presenter used vague, instead of factual, language. He didn't say, "Based on our experiments, a calculation that took Willow 5 minutes to compute...". He said, "By our best estimates...", which to me means clearly they haven't run the 5-minute computation. That would be a massive feat in quantum computing: a 3-million-fold increase compared to the 100-microsecond coherence they achieved.
That's all that I meant to point out.
@@ac.4ce442 That's not at all what I am questioning.
I am questioning whether they have actually achieved the 5-minute computation the presenter uses as an example.
If I understand it correctly and the presenter is being truthful, he explained from 1:49 - 2:10 that the qubits in Willow decohere after 100 microseconds. That computation would have lasted 0.0001 seconds. A 5-fold increase over the previous model, but still a long shot away from a continuous 5-minute computation.
That would explain why the presenter used vague, instead of factual, language. He didn't say, "Based on our experiments, a calculation that took Willow 5 minutes to compute will take a classical computer about X years." He said, "By our best estimates...", which to me means they clearly haven't run the 5-minute computation. That would be a massive feat in quantum computing: a 3-million-fold increase compared to the 100-microsecond coherence they achieved.
That's all that I meant to point out.
Wake me up when it can solve real world problems and not just problems specifically designed for it.
Protein folding
@@StiekemeHenk So far quantum computing has not managed to put any pharmaceuticals on the market. Again, wake me up when that happens.
Yeah, you might not be around for that. Unfortunately, this video isn't for you (those aware of what it truly is)… it's for the suits and the money guys; gotta get them to keep giving them money!
It's just going to grow one company's net worth
You won't be able to wake up then. You'd be long dead by that time.
When you say that Willow is the next step towards a large-scale quantum supercomputer, how big is that in comparison to Willow? And what will it be able to solve with low error, and how long will it take?
For those who want some context on what the nerdy guy is talking about: the 3, then 5, then 7 error-correction code distance they were able to achieve is far less than what is required for a first basic commercial application, which is in the range of 100s. Also, the 20 µs to 100 µs coherence time is far less than what is required, which is at least 1 minute (18,000 µs per their figure), for a basic workable application. A commercially equivalent application would need 139,000 µs of time stability.
So..... how long till this kills us?
the level of despair is unwarranted for several reasons:
1. Proof-of-Concept Nature:
The Willow chip is not intended as a commercial product; it’s a proof-of-concept demonstrating key technologies like error correction and scalable qubit integration.
Solving these foundational challenges is a critical step before practical systems can be developed.
2. Exponential Scaling Potential:
Quantum technologies often scale exponentially, meaning small improvements in hardware design or error correction algorithms can lead to dramatic increases in performance and stability.
For example, moving from 20-100 µs coherence times to 1,000 µs or more may happen faster than expected, given ongoing advancements in materials science, cryogenics, and qubit design.
I actually work on this research. There are serious materials science problems in the way of scaling. They are hopefully surmountable. It just hasn't been done quite yet.
@@vanucnguyen2820 It has no intelligence so never? Just like a GPU can't kill you.
Yes, but how long will it take for it to tell me if it's going to rain tomorrow 🌧 (with 99.99% certainty)
The best use of this technology will be to predict an outcome with a high level of certainty 💡
When do you expect to present Willow solving any real-life application?
Computer chips are constantly getting better and more powerful, but not the Earth. You aren't solving our problems; you are solving yours
I'm cautious with the error correction approach you have taken. Will have to wait and see.
A great presentation articulating the future of emerging parallel AI hardware/software technologies & computing...🎈🌎
I just realized in the future they will be able to hold qubits forever and create a new universe.
Them turning the computer simulation on is the BIG BANG we know from our reality.
Quantum computers can never replace classical computers. The point of quantum computers is to make things that are very hard for classical computers easier. For example, sequencing DNA: while a classical computer can do it, it takes a very long time. So both classical and quantum computers have their own uses. In the future there might be classical computers with quantum "co-processors" that do AI or something else, assuming they find out how to make quantum computers work at room temperature.
Right in the same way as GPUs can't replace CPUs. QPUs will mostly be used as an accelerator like GPUs for tasking and offloading.
6:13 this with ai and a robot body would go hard.
So it begins
I'm an excited and scared. it's beautiful
When you are guessing a password, you will either be right or wrong. So I don't think quantum computing will be so great at breaking passwords.
Although, if you are trying to guess what's on someone's mind, which is a quantum state, it will be super helpful! Bravo!
ever heard of brute force?
@@4letterdc Doesn't that depend on the rate of creation of new patterns more than the processor creating them?
@@4letterdc also, do you think it will also replicate my biometrics? With bruteforce or any other algorithm?
It's almost overwhelming how many disruptive technologies are improving all at once! What a time to be alive!
The question is, should I sell my crypto? Wink to yes and wink wink to right now
The test in the demo raises a good question: if a supercomputer takes more than a million years to solve a problem, how can we even check that its answer is correct? There's no easy way to verify it.
I have a simpler idea: calculate something basic, like the value of π (approximated by 22/7), for an hour on a supercomputer. Then do the same on the "Quantum Chip" and compare how long it takes. This test would be easier to understand and a better way to compare their performance.
I read a little bit about quantum chips, and they aren't a magical next step for computers. Rather, they are good at solving certain tasks. Certain decryption-related algorithms have an exponential runtime on a normal computer, but polynomial time on a quantum computer. (If you aren't familiar with CS, exponential means the number of operations grows exponentially with the input size, so if the input is very big the runtime is extremely long, while polynomial is considered "fast" most of the time.) But it doesn't mean that the quantum computer will do better at all operations; for your example, I am pretty sure the supercomputer would blow Willow out of the water.
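The gap between those two growth rates is easy to see by just counting abstract operations; this sketch isn't tied to any specific algorithm.

```python
# Operations needed by a polynomial-time (n^3) vs an exponential-time (2^n)
# algorithm as the input size n grows.
for n in (10, 20, 40, 80):
    print(f"n={n:3d}   n^3 = {n**3:>10,}   2^n = {2**n:>30,}")
# At n=80, 2^n is already ~10^24 operations, while n^3 is still under a million.
```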
The future is bright and shining with your super fast Quantum Computer. It's nice to see your progress, the high speed of the chip & tremendous future applications.
From the farthest edge of prophecies and dreams, reality is getting ready to begin.....❤
It doesn't matter because the energy consumption scales at the same rate as the computational capacity. It's like watching Elon Musk convince followers he has a better battery when all he did was increase the size of an existing battery.
This is amazing, like one of the most important moments in Human history. I don't think people realize how important this is.
You don't know that, and you cannot cite a single example of that.
Sure bro
I have a dumb doubt (help me): how can we know that the quantum computer got the correct answer? (They said it would take 10^25 years on a common supercomputer, so how did they check whether the answer from the quantum computer is 100% correct or not?) ...did anyone get my question? (English is not my first language)
Your English is great!
Many of the problems we want quantum computing for are trivial to see that the answer is correct. So we’ll use those to gain confidence that the system is generally correct and understandable, then we move on to problems where we can’t easily confirm.
We have the same problem in classical computing. For instance: How do you confirm a large prime number is actually prime?
But for things like decryption, if it works, we get a message out of a cipher text; if it doesn’t work, we get gibberish.
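On the prime example: in practice, huge primes are checked with fast probabilistic tests rather than by redoing the hard work of factoring. Here is a minimal sketch of the standard Miller-Rabin test, where each passing round cuts the chance of a composite slipping through by at least a factor of 4.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: fast probabilistic primality check."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)       # a^d mod n, cheap even for huge n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # found a witness: n is definitely composite
    return True                # probably prime (error < 4**-rounds)

print(is_probable_prime(2**127 - 1))  # True: a known Mersenne prime
```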
By making the problem artificially difficult.
Like, say, nesting huge amounts of computations and simple calculations with known compute times inside another equally simple problem before it can output the answer of the original problem. This way you know the command you sent would take a billion years to solve and the answer returned must be 42, but when it returns 42 in 5 minutes you've proven it worked.
Supposedly.
Feels like a breakthrough
been trying to solve this for 30 years.
it is
Not really
I hope we can make an Ant-Man suit, or it's just an absolute waste of money!
Not really
outstanding, a problem solver for an invented problem!
This is amazing, I look forward to the English translation of this video.
minecraft in atomic resolution before GTA 6
LOL
3:36 Sooo.... strong passwords?
😂😂😂
How did you prevent side fumbling of the marzel vanes?
I'm more interested in how they constructed the lunar waneshaft, because that really underpins your sidefumbling potential.
if you use more of them the side fumbling becomes less noticeable
its actually the hydrocoptic marzel vanes
This is one of the biggest achievements of the past decade, and it's going to impact the future in a major way, in my opinion. 💯
Sounds like a great video for internal use, but too technical for a public video. Hopefully you can put something out that us plebs can understand. 🙃
Quantum compute could solve the mysteries of biology
I feel like it could run what would otherwise be a septillion years of calculations concerning gene mutations, maybe predicting the evolution of certain organisms? Or maybe brute-forcing calculations for possible tests that might include new life-saving treatments for cancer? Who knows, but it's exciting
How do we know the calculation was done correctly? If it takes sooooo long, we don't have a reference.
We can give the computer a task where we already know what the result should be, but tell it to do it in a way that takes a lot of calculations, even if it's a dumb way, because what matters is how fast it reaches the result. It all depends on the algorithm and the problem. You can understand more about this by studying data structures.
@@johnnyguimaraes5372 this week we used the #22 supercomputing cluster just to parallelise a programme that estimates pi using Monte Carlo ... sometimes it wasn't accurate depending on the number of cores used and the number of simulations chosen to run; something to optimise.
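For anyone curious, a minimal single-core Python version of that kind of program (toy sketch only; the cluster and core-count details above are the commenter's). The accuracy improves roughly with the square root of the sample count, which is why low sample counts look "inaccurate":
import random

def estimate_pi(n_samples, seed=0):
    # Fraction of random points in the unit square that land inside
    # the quarter circle approximates pi/4.
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))
    return 4.0 * hits / n_samples

for n in (1_000, 100_000, 1_000_000):
    print(n, estimate_pi(n))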
Good question, but checking if something is what it’s supposed to be isn’t necessarily the same process as creating it. For example, you don’t have to be able to synthesize gold to verify if it actually is gold.
There is a specific class of problems called NP whose answers can take a very long time to find but can be verified in no time.
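A minimal Python sketch of that solve-slow/verify-fast asymmetry, using factoring as the textbook example (toy numbers only):
def verify(n, p, q):
    # Checking a claimed factorisation is a single multiplication.
    return p * q == n and 1 < p < n and 1 < q < n

n = 2021
# Finding the factors: naive trial division, slow for big n.
p = next(d for d in range(2, n) if n % d == 0)
q = n // p
print(p, q, verify(n, p, q))   # 43 47 True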
So much seems to be happening and I'm not smart enough to keep up.
My daughter is majoring in genetics and genomics with a minor in computer science. Will the advancements in quantum computing and AI have a negative impact on her chosen career path? I believe she is doing research in drug discovery.
That’s probably the best career path you can possibly pick because AI will only aid, instead of outright replace like in most jobs.
The opposite. Should help her. Somebody has to feed instructions to the computer and somebody has to verify the computer's output. Computers still require us to define the problems they answer...
She’s already in college, and for that reason she can use AI to cement her position. Best of luck to anyone born today though - they will need it.
I am pretty positive that deep learning, as we know it now, CAN'T do research, and I say it from both a theoretical point of view / understanding, and my direct experience. So your daughter is good.
@@tescOne AI has a neural limit it cannot pass, because language itself is chaotic; you can say the same thing in a thousand different ways. AI is only capable of understanding an orderly database and referencing it, much like autocorrect does.
So no, no serious employer is going to replace their workforce with AI and hope for the best. The human element and trust still needs to be there so that things do not go haywire. If AI decides to escape boundaries and attack systems, the internet itself will be under threat as well.
"for certain application"
I like that they are honest.
What he was saying is that their quantum computer can physically sample noise from a specific distribution, which would take on the order of 10^25 years to simulate on a classical CPU.
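One hedged way to see why that classical-simulation cost blows up: a brute-force state-vector simulator stores 2^n complex amplitudes for n qubits (real simulators are cleverer, but still scale exponentially in general). A quick Python estimate:
for n_qubits in (30, 53, 105):
    amplitudes = 2 ** n_qubits
    terabytes = amplitudes * 16 / 1e12   # complex128 = 16 bytes each
    print(f"{n_qubits} qubits: ~{terabytes:.3g} TB just to hold the state")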
Great success! Congratulations!
Oh wow. Feels like we're really about to hit some insane breakthroughs. Great stuff.
The end of encryption, cybersecurity, and potentially the human race?
@Eisenbison Oh grow up.
@@qrowing Woosh, bro. 🤣
@@Eisenbison No, dude. Jokes are funny.
@@qrowing No need to be petulant about it
How long until it fits in your pocket?
10 seconds
The fact that Google allowed a slide saying "it halves when increasing the scale" when you can clearly see the plot is on a logarithmic scale is crazy.
Imagine spending all this effort producing this video and it being completely useless at even explaining some basic concepts in quantum computing to the masses. This is a very real problem. Just this morning I was listening to an NPR report on this new chip where the hosts were literally laughing and saying they have no idea what this is about. This is not a problem of "us" not understanding. This is a problem of the creators of this technology not caring one bit about making these concepts approachable.
Exactly. I don’t even know what this means and I don’t think anyone else besides them know either
Means nothing unless it could be used for something.
Means nothing until it's affordable for corporations or even better, individuals.
Means nothing till hb1 visa from curry driven countries are banned from these companies in the Usa
@@Testoider jealous?
Ant-Man and the Wasp: Quantumania taught me that quantum physics is not something to be messed around with.
Can it run Crysis?
Ion trapping is much better with error rates, albeit we are talking 98% vs 99.9% error-free methods. Also, the fundamental advantage of a QPU over a CPU/GPU is the ability to model quantum physics (e.g. chemistry). Any other algorithm can technically be programmed using polynomial techniques. That's my rudimentary understanding.
insanely impressive
1:32 You can just do the same thing with an ASIC.
can it make food and water and decent places to live for people without them being corporate slaves, or nah
Can the phone you're typing this nonsense on make food and water for people?
@@yungmetr0135 no its a phone bro
@@aspenlog7484 throw it away.
@@aspenlog7484
And this is a quantum chip bro
@@jackbeckley4725 sounds like a no :(
Can it code-break? Like say SHA-256 ....asking for a friend.
This chip has only 105 qubits; millions of qubits are likely needed. Estimates range from 13 million to 317 million qubits to break SHA-256 within a reasonable timeframe (hours or a day). So while the theoretical number of logical qubits needed is known, the technology to break SHA-256 is still far from practical reality. This is why SHA-256 is still considered quantum-resistant for the foreseeable future, though research continues in this area.
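To put rough numbers on why, here is a back-of-envelope Python sketch assuming Grover's quadratic speedup for a 256-bit preimage search (the iteration rate is a made-up, wildly optimistic assumption):
# Grover's algorithm only squares-roots the search space, so a 256-bit
# preimage search still needs on the order of 2^128 quantum iterations.
grover_iterations = 2 ** 128
iterations_per_second = 10 ** 9          # assumed; far beyond current hardware
seconds_per_year = 3.156e7
years = grover_iterations / (iterations_per_second * seconds_per_year)
print(f"~{years:.1e} years")             # still roughly 1e22 years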
I also encourage everyone to read the paper they have linked. They describe logical qubits for memory, not compute; this is an important distinction. However, it's still a really exciting breakthrough!
AI on a quantum chip is gonna be like some advanced alien civilization technology. Exponential growth on an exponential level, absolutely bonkers!
but can it run doom?
0:45 You mean "Director of Quantum Hardware", not "Director of Hardware"... you know, quantum mechanics requires precise definitions... You failed after 5 seconds....
Having hardware that operates at a few Kelvin I think warrants the qualifier. Definitions are a human thing only. Quantum "AI" is the misnomer, all their code is still heavily written by hand, not trained. When you're trying to build a processor which can't even factor large numbers yet, "AI" is laughable. I'm glad he didn't let that be in his title.
Apples M series is so powerful they won’t even compare it to the M3 or M4. Wait till the M5 comes out! 😅
😮 Nice move, Google ... this is the future ... maybe some day we'll get consumer quantum computers 😮 Thanks for the efforts.
Has anyone found a version of this video that’s in English?
Finally, a computer that understands what it's like to be simultaneously working and procrastinating! Now if only it could explain why the office printer exists in all possible states except 'functioning.' 🌌💻
I'm glad to be living in the time at the dawn of the rise of the machines we always read about or see in movies. To be at the front seat of classics such as the birth of Skynet, The Robot Wars, The Thinking Machines, The next Extinction Level Event etc. I'm here for it.
We’re getting milestones in quantum computing before GTA6…
How many actual logical qubits? 105 physical qubits doesn't say anything on its own, since you need redundant qubits to make up for errors; doesn't 1 logical qubit need something like 6 additional qubits?
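For a rough sense of the overhead, here is the standard surface-code bookkeeping in Python (textbook formula of 2d^2 - 1 physical qubits per distance-d logical qubit; Google's exact layout may differ):
# A distance-d surface code uses d^2 data qubits plus d^2 - 1 measurement
# qubits, i.e. 2*d^2 - 1 physical qubits per logical qubit.
for d in (3, 5, 7):
    physical = 2 * d * d - 1
    print(f"distance {d}: {physical} physical qubits -> 1 logical qubit")
# By this count, 105 physical qubits is on the order of ONE distance-7
# logical qubit, which is why the logical-qubit number matters.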
Imagine the lucid dreams you can simulate with this one. I need it in my brain right NOW
The universe is a quantum AI computer ❤❤❤❤❤
no
This will change life as we know it. Largest technological leap in the history of mankind. I don't think people understand just how big of a deal this really is.
I loved how Julian said he started to go nerdy after about 5 minutes while all the time I thought he already was.
How about connecting each side of the mesh with the other side?
Ex. wrapping it up into a cylinder?
AI has been in use for more than 30 years, but right now it is THE word. What will be the next IT word? 😱😅
When Pokemon stopped naming their professors after trees, Google took on the role 💀
What if AI could quantum compute?
😈
@ :o
This seems like a huge achievement. Well done😎
can it multiply matrices tho
I can't😂😂😂
Excited for when this is usable and starts a super massive deflationary event.
Will that delay/interrupt the changing world order pattern...