Do you actually think Elon is a 'bright mind' or is that your sense of humour? I've been struggling to tell with a few comments made in the past about him... Not some sort of criticism of you btw, just genuinely curious, because IMO these two minds are a few tiers apart - just wondering what your actual perspective is :) Edit: your answer (if you provided one) wouldn't affect how I use your channel btw. It's one of the best I've found for this sort of content.
I had google's gemni tell me that C and C++ were dangerous and we shouldn't let people under 18 learn them because it couldn't tell the difference between memory safety and physical safety.
Bless the souls of any who let that AI judge or determine the fates of real people's lives. Someone's account is gonna get banned for having unsafe C code in their Google Drive. *smh*
It is. Given enough computational power and data, any human brain can be modeled and simulated. The whole point is that requirement is not a thing, so we have to find an approximation with reasonable resources.
@@ButterCatznot only is it too hard to simulate an entire ass human brain, it's also too hard to read one human brain in such detail, we will never simulate a human brain. We will probably get machines to be conscious though (at least be considered as such) don't know when though, I ain't no fortune teller
Try gigacode and gigachat, you'd be amazed. Me: Some dude is saying that AI tools are dumb, I'd like to prove him wrong, can you generate a Rust language function, that examples an unsafe Rust. Gigachat: type FuncPtr = extern "C" fn(*mut i32) -> i32; fn example_unsafe(x: *mut i32, f: FuncPtr) { let result = f(x); println!("The value of x after calling the function pointer is: {}", result); }
It is increasingly caused by AI inbreeding, where AI starts to copy AI generated content. It's like that game where a message is whispered from one person to the next, when it reaches the last person, it no longer makes any sense.
@@xCheddarB0b42xHe's probably correct though that by next year most commercial AIs will be much smarter than him in particular. That's a low bar though, not at all what we mean by AGI.
It's up for debate wether he actually believes any of the claims he makes. I can't imagine he actually believed his cars were anywhere near being able to drive themselves. Unless you've never driven one that's very clearly not the case. Even with all the unpaid beta testers and rapid technology advancements the cars aren't even halfway there NOW, if your next year predictions are off by more than a decade then you're either not the brightest light bulb or you're intentionally lying...
Chemistry is just the physics of molecules(group of atomic bodies interacting with another), it's completely true, isn't it? But still gives rise to the stars, galaxies, planets, environment on these planets, sometimes life, and eventually this comment itself. Simple rules giving rise to the emmergence of incomprehensible complexity is what we call cellular automata, and from brain(bunch of neurons, lol) , evolution, and the very universe itself is one of its example.
@@julioaurelio There are many different types of separate and distinct mathmatical systems -- Set Theory, Calculus, Statistics, etc. But linear algebra? y = mx + b, but in parallel (or massively parallel as we see in deep learning AI). Only thing easier as a mathmatical construct is straight algebra.
lol when he said "brightest minds in technology" i thought something along the lines of "except one is ridiculously dimmer than the other" (*ahem* elon *ahem)
In terms of technology Elon is more of a Edison type...smart enough to make money by capitalizing on the work of others...but not as smart as the people that actully make real contributions to science and technology.
@@memegazer this is idiocy. utter idiocy. edison knew how to monetize, sure. but it's insane to suggest that he did not invent a hell of a lot on his own. you'd be hard pressed to name a single thing musk actually invented.
@@ac583 Sorry you got offended but it is not idiocy it is well understood that edison was notorious for taking credit for inventing things that were actually developed by other people So my comparison in that respect is more than valid
The problem with AGI predictions is how immensely subjective they are. The Stability AI crisis underlines the importance of financial sustainability in such technically complex fields.
@@MultiHeheboy For me AI will be at least half as intelligent as a toddler when it can spell "through" properly and put together a moderately coherent response in a RUclips comment section.
Elon has never once predicted anything accurately. I was also hoping agi will come soon, but ai isn’t getting smarter, and it’s definitely not getting self aware
@@canaconn2388 yeah they all just pretend ml is AI for more investor money. Also moving word around ML = AI now, and AI = AGI. The whole industry is a joke.
@@Hohohohoho-vo1pqthere is no real reason it needs to, maybe it would be a better version (point being even that is unsure could very well be the oposite) simply because what would be the problem? (The only ones imaginable would be if you activly tried to make the ai not capable of including certain ideas, which might not even be nessary) The real problem isnt if you could control it rather if it will be under control or become a tool of destruction
@@Hohohohoho-vo1pq You cannot create free will from physical processes. Physics is purely a set of reactions to prior causes according to the laws of physics. The digital side of computers is ultimately all electrical systems, ie, physical, or humans confusing their projections for reality. If the will is predetermined by external forces, it cannot be construed as free. The premise of free will requires something that transcends physics, ie, something along the lines of a spirit or soul.
oh no way i learnt about them in high school! i had no idea they were spewing so much propaganda and trying to reinvent their image like they are these days. just to be clear, the LAIC are the same people that did The Great Matrix Multiplication Scandal of 1973 right?
I find it hilarious that Google search "AI" basically just remixes Reddit into random word salad, and Google thinks that's better than their old search.
3:00 While the pizza glue response was real, this one about the Golden Gate Bridge was actually photoshopped. The New York Times even issued a correction after briefly asserting in a story that it did. Article is called "Google’s A.I. Search Errors Cause a Furor Online", I can't send the link here
That Rabbit AI device could have had more potential if it were marketed as an advanced Tamagotchi plaything rather than a "powerful pocket AI" - a stupid bulky one at that.
The worst part is that they got teenage engineering to make such a wonderful design for it. It’s reminiscent of their work for the Playdate, but inside there’s not much of worth and if the servers shut down it’ll be as good as a brick. If only it had a homebrew scene with small Game & Watch style games that used the scroll wheel and the touchscreen to their most of their limited potential…
They dont want to make a "toy" like assistants of the past, but they also know the capabilities cannot be anything more advanced than voice-to-text text-to-searchengine and then searchengine-to-firstresult.
I've been trying to explain this problem to people for basically the entire "AI" craze, thank you for articulating it far better than I ever could. All "generative AI" can do is spit out what its training data suggests is the most statistically likely response to the prompt given and not only is this inherently terrible for factual results but its quality only declines as the quality of the training data declines.
So, to you this all means that we don't have to worry about AGI/ASI, or even the power of the current LLMs, and generative AI technologies? If that is your takeaway, the propaganda that "AI is just a marketing scam" is working.
@@flickwtchr We absolutely need to worry about generative "AI." As we can already see they're massively degrading the quality of the internet and services on it, eroding privacy by incentivizing datamining, and providing opportunities for grifters, scam artists, and troll farms. But the only thing we should be worried about "AGI/ASI" is whatever scam anyone claiming to have it is really pulling, the same way you'd "worry" about an email from the prince of Nigeria.
Elon's comment on AGI that "It is coming Next year" is a scummy way to ensure he can create as much hype as possible and raise the plummeting share prices of his company. Its basically market manipulation that is legal only in the US. But like all of his promises, the next year will always be just "Next year".
thank you for hitting hard on the fact that (current) LLMs are just matrix multiplication to predict word placement likelihood. it feels like everyone acting like we'll have "ASI" by 2030 has never taken a math class past algebra, and that's probably because they haven't taken a math class past algebra.
@@Dogo.R I hope public transportation never goes away, or at most evolves in some way for the future. There's so much culture and history surrounding it, it'd be a waste for it to fall into a footnote.
Interesting. I just asked Gemini about its capabilities compared to chatgpt. In its response, it referred to itself as Bard. I asked why. It then said that the name Bard was retired to remove confusion, and the project and interface are now both Gemini. Then i asked why it was then so inconsistent with itself one message ago. It apologized, and said it was still learning. Indeed, LLM is not 'smart'. It's essentially just a fancy way of querying.
@@delight163 Then you misunderstand AGI, AGI would be able to differentiate and tell what's possible or not. The ability for it to think for itself and not just generate random information from the database. An AGI wouldn't make silly mistakes these LLM's make, because it'll be able to fact-check what its saying. PS: Most of the things we humans write down is garbage, just take a minute to scroll through social media :^)
Eh, I agree that AGI likely isn't around the corner or maybe not even possible but mere credentialism is a poor reason to listen to someone, and if anything people this deep into a community are bad at predicting trends. Fireship is right to call it the "elitist definition of science" since he evidently soley relies on bureaucratic academia to define science.
The scientific process is science, not otherwise. That's not to say there isn't bad science, but without the scientific process it's not science. Also career accolades on top of career achievement on top of recognition feels like a fair way for your average person to decide who to trust, most people aren't educated enough on the topic to pick apart published papers and make an educated opinion themself so sometimes you do need to rely on experts... Experts rely on experts in fields that's not their expertise as well
03:01 reminds me of the meme showing someone asking google how to 'un-alive' themselves, google replied with suicide hotline numbers. The same person asked Bing and it listed a bunch of painful methods, then the meme had Hannibal buress saying "*Why you booing me? Im right!*"
The official arrival of AGI happened last night in the most ironic way. The presidential debate took so many IQ points away from humanity we were surpassed on average by a PalmPilot 1000.
You ever seen those defective lights that flicker on and off, but shines normally when it’s on? Elon musk is like that for some reason. He’s got a bright mind when its on, but it never consistently is on
@@thebabyshpee6508owning said companies is different from the intelligence required to do the work behind them. It’s well noted that Spacex engineers basically nod to what Musk says and then ignore it. IE. They do the thinking and work while he’s along for the ride. Meanwhile you can see how well X has been going without said handlers.
@@thebabyshpee6508 he’s a good idea seller, but he didn’t design Tesla cars, didn’t design the Falcon 9, didn’t design Starlink. It takes tens of thousands of engineers and technicians to do any of those things. You also have to equally consider his failures. Boring Company, Solar City, Cybertruck, X… Elon is interesting but famous for making large claims and pretty much never delivering on any of them, and the ones that do deliver are many years late and often a different scope than what started. Much like any other tech company.
I never consider AI as AI, what people call AI for me is just Machine Learning, because if AI is truly Intelligent, then we are killing our crafted intelligent beings on a daily basis and that is to say the least, concerning.
Like many others said "Degenerative AI" is funny af hahahaha how do you come up with that. Btw on my page I provide the best info on how to use AI to get up in life
It's not an "elitist" definition of science to say that it needs to be written down, reviewed and communicated. That is really the only way science can properly function. Sure, you could do a bunch of experiments in isolation on a desert island, or in your basement, never publishing a thing, and you might still make some discoveries about objective reality, but the philosophy of science is largely about undermining your own biases, and it helps to have other people reviewing your methods for that. It's also largely understood by most people that science works best when it's globally collaborative, rather than competitive like business, and you're supposed to be in the game of science to further human knowledge as a whole, not simply your own.
The real scientists are all doing their work and reading the arxiv though, with peer review having secondary importance at best. It's not about being elitist, it's just plain wrong. Meanwhile, joke fields like psychology or nutrition where a paper is just as likely to be nonsense as not (being very generous here) have their formal peer-review processes, which somehow consistently fail to catch the nonsense. This generation's Karl Popper Lecun is not.
@@isodoubIet Edit: I may have misundertood this comment. In regards to if scientists are doing peer review I appreciate that ideals and realities do not match. I do think Peer review is the only way you can validate scientific findings otherwise its just mental masturbation. Peer review is the only way to ensure that discoveries are correct, repatability of results is the corner stone of the scientific method.
@@tbird81 I was replying to Peer Review being "secondary at best"... But I may have misconstrude what was being said. I interpreted that it wasn't as important to the scientific method which I disagree with. However re-reading it I realise it is in reference to how actual scientist operate which is a completely different ball game.
nothing contradictory as he said "AI is no doubt a useful tool but the greatest trick linear algebra ever pulled was convincing humans that LLMs are intelligent"
0:28 if it's not published it ain't science, and if it's published the chance of it being an impossible to replicate, p-hacked to death thing that started from the conclusion and worked backwards is scarily likely.
@@wilburdemitel8468 Yes a "science shill". Lovely phrase. Gonna guess you're not the sharpest tool in the shed lol. Science is the only rigorous and honest process of finding objective truth. I be shillin it 24/7/365!
Computer scientist here.... The issue that makes AI nothing more than just a design pattern and won't take everyone's job is two things: One, it has no idea what its being asked or producing. It lacks a human soul. You can't code for either of these things and as such, it will never be what they say it will be. Put it another way, people, some people are inherently creative, they can draw perfectly from the start, they don't need to be trained, other for refinement. You can't get that with AI. Take all the data away from AI, and it will do nothing.
Somewhat-of-an-artist here. I've been following a lot of the AI stuff around, and i've noticed that the only, _singular_ reason people follow AI, ESPECIALLY for art...is because it looks like generic anime most of the time. People don't want good things anymore, they just want popular things. They don't want "art", they want content, which is a philosophy that definitely falls on it's ass when people get bored of that content.
This does make me wonder: Is AI a legitimate jump in the technological scheme of things or a get rich quick bandwagon for big tech? A lot of the news lately makes me think that its simply a cash grab while the iron is hot rather than anything significant, honestly.
"AI" is a meaningless buzzword. Pre-trained transformers and foundation models have legit -useful- profitable applications, but it's more because it's allowed capabilities that used to be reserved for supercomputing clusters to be _commoditized_ to running in your garden-variety cloud, not because the models of today are "smarter" than they were five years ago.
LLMs are a certified "Pretty Neat" technological development that came to a head in the last 5 years. But it's pretty clear now we've reached the top of the S curve.
@GSBarlev This makes more sense to me. Like a fresh coat of paint on an old house: the outside may look new, but on the inside, it's entirely the same.
I think if we cut away the "AI" hype and go back to looking at this stuff as machine learning, there are advancements that are useful but I am deeply skeptical of general applicability. Tech is a very strange industry because it is extremely hype-based and focused on stupid trends, while at the same time being extremely foundational to modern life. Remember the crypto/web3 trend? The metaverse? Now we are on AI.
AGI = "Yeah bro just build a house, even though you've never seen a house, nor do you know what a house is or how it works. I'm sure it'll be really simple."
the LLM's were able to figure out words just by having a large brain (neural network) and having it look at a lot of text. however, AGI needs general knowledge and common sense, something we get while interacting with the world in our human bodies, growing up from birth in the body while it's growing and changing and interacting with other humans along the way (parents, teachers, friends etc.). they are just trying to patch that problem by learning from the internet and seeing a lot of human behaviour on the internet, a flawed approach in my opinion but maybe with many times more resources than even the brain itself has (10^17 neuron activations per second and over 3 petabytes of storage, something that's not gonna happen anytime soon), then maybe they could brute-force an AGI. although with the source being social media, they're gonna need a lot of new techniques and wayy more power to avoid ridiculous stupidity.
@@alazarbisrat1978 the token-vocabulary is the one part of the LLM that is chosen ahead of time rather than being learned. that's why you can't ask LLMs to perform self-referential spelling tasks.
I was in the ICU today and I wanted to test chatGPT vision on some xrays. This AI is so revolutionary that it even disagrees with all doctors on some basic stuff. Amazing
personally my favorite thing thats come out of the AI grift is how many turn out to be outsourced call centers moonlighting as a LLM and they're hoping somebody just magically delivers their promises before shareholders find out
I don't believe true AGI is going to emerge from a group of mathematicians, engineers, and computer scientists. It's not until we figure out the *human* mind will there ever be a chance of AGI which is more in the realm of neuroscience and psychology. After all the assumptions and theories of human cognition are cleared, then we could talk about actually modelling "artificial" intelligence computationally.
It’s really amusing how many things get thrown at the man ever since he bought Twitter. You go back anytime before THAT event, and nobody cared much about his failures and most were complimentary of his successes. Now he’s a walking lampoon every time I look at a comment section despite the fact that he’s the second-richest man in the world and owns several major companies that are consistently breaking ground in tech nobody else is even close to achieving at scale. The seethe and cope is so blatant it’s comedic at this point.
I mean if we had the same safety standards for AI driving as we do for humans. I think the 1990s 30 neuron self driving systems have the average person beat. It's just we hold computers to a much higher standard (as we should) than we do for people
@@TheRyulordthe average fatality rate is 23 per million miles driven. It's 9.3 for self-driving cars. It's the same thing with medical stuff, an AI diagnostic system has a misdiagnosis rate of 8% which is 3% better than the average for human doctors. It's still not good enough to put your trust in an unfeeling machine because we care about the edge cases. Honestly it may never be good enough due to ethical and moral reasons.
I, as a human being, poses A.I. In my case the acronym stands for Actual Intelligence, not to be confused with the inferior copy known as Artificial Intelligence. I hate cheap academic knockoffs. We, human beings, should have a legal recourse to take Artificial Intelligence off the market, just like we have to take other fake products off the market. We already have a legal precedent for this.
I have no intention of watching this video, but here's your engagement for creating that absolute banger of a thumbnail. Now please, stop suggesting this to me.
@@Dan-lt8vm what transformation has he done in the automotive industry, steal an idea and pose as his, or the fact that electric cars are in fact not good for the environment and stupid idea anyways when better travel infrastructures are a much much better solution or space launch industry that burns dollars on the possibility of something?
Kinda ironic how AI is “dumb” enough to let it itself be “dumbed down” by humans. People were expecting SkyNet and the best we were able to come up with was a glorified LLM that just gets wrecked the moment the average person expands upon it.
Wait, so if someone systematically studies and discovers something new in the universe and doesn’t publish it, that’s not science? I don’t always agree with Elon but he’s absolutely right. I understand the point Yann was making but saying “well, if it’s not published, it’s definitely not science” is very narrow-minded.
That's exactly the "no more than the sum of it's parts" argument the naysayers pull. "CPUs are just a bag of transistors". A brain is just a bag of neurons indeed...
Exactly, both things are true, but at a scale become entirely different thing through the principle of cellular automata i.e the emmergence of incomprehensible complexity.
@@udaykadam5455 nothing about linear algebra is “incomprehensible”… this is the problem with democracy, people with no knowledge still think they are entitled to an opinion
@@Alpha_GameDev-wq5cc but neural networks are not simply linear algebra, and any sufficiently large system of equations will be incomprehensible to a human. No need to turn up the fascism.
@@bhagpreetmaster As you can see, Fireship called him one of the two greatest scientific minds, mere days ago. I mean what reality is this guy living in? By most account Elon can't even code that well, either.
Let’s not forget that Yann won the Turing Award (Nobel price equivalent for CS) together with Geoffrey Hinton & Yoshua Bengio. Of the 3 of them, 2 are not only certain AGI will be here soon, they both are very vocal about the dangers that many would call “doomerism”. Yann is the only one of the three with a financial and professional interest to deny the dangers of AGI and call the systems stupid.
@@warguy6474 that praying suits you, because only religious people believe in Peer review. Peer review is what keeps creative, yet controversial ideas from being taken seriously. It’s the epitome of the phenomenon that science has gone from being an open collaborative effort to a centralized religion. Imagine what the world would look like if peer review existed in the times of Newton, Copernicus, De Broglie, Einstein, Bohr, etc. Do you think it’s a coincidence that the entire field of physics hasn’t evolved ever since it’s heavy use in academia? I hope you ever become open minded enough to see in what a dogmatic world you live, but I highly doubt it
@Ruktiet ??? Are you in middle school? You realize the Bohr model is obsolete for a reason, right? Peer review definitely did exist in Einsteins time, and all those people you mentioned did have peer review within their own circles. The only difference is that you don't know anything about that. Not only that, its not about the idea of peer review that makes it science. Its about public nature of it that makes it open for criticism. Ever heard of arxiv? Didn't think so. Go look it up.
@@sumitroy3483 No, remove all the politics… he is still just a scam who has achieved nothing in life except skip jail so successfully so far. You just have to look for it, the truth is literally all over RUclips… people have made entire documentaries.
just tell everyone that non-linear algebra will be the next frontier to AI advancement and you will sound like a genius even though you don't know that it will even work or if that's even a thing. Business is business lol
@@CvnDqnrU well yes you are right to an extent until we talk about Relu and leaky Relu which are linear activations functions given a specific condition of the x value. Even the identify function is an activation function too but it is not as well used as Relu or leaky Relu
@@mr.atomictitan9938 I'm confused, what should I remember about that equation? Yes NNs do some linear algebra operations, but the complexity really arises from linear combinations of nonlinear activation functions. Otherwise you could reduce any NN to one matrix multiplication.
One thing that probably wouldn't suprise me if it happens is that GPT 5 and most AI products from now on will be noticeably less impressive with advancements and improvements from here on out. If we haven't already hit it, this may be the point of diminishing returns. Sort of like how video gsme graphics have had far less advancement in the past decade than what it was before.
LLMs are just an advancement on the algorithm based search technology that debuted with Google, with the capability to just parse and present you the information directly instead of relevant links containing the information you queried.
3:01 "I'm sure feeling unlucky that my lucks run out" this is the era we're in with google results 3:57 FR 5:07 Pontificating is quite the 4D chess move...
when people call modern AI linear algebra it really shows how little people understand about the topic. That slight worked 10 years ago but not now. So much has changed.
I've mentioned Codeium in past videos (unsponsored) and tbh it's crazy this tool is free codeium.com/?referrer=fireship
How does the company profit are there advertisements?
That's the only one who can beat copilot (of github).
@Fireship When is a video about the QT Framework coming?
@@_mark_3814 No. They make money from an enterprise version (and possibly selling user data)
Do you actually think Elon is a 'bright mind' or is that your sense of humour? I've been struggling to tell with a few comments made in the past about him...
Not some sort of criticism of you btw, just genuinely curious, because IMO these two minds are a few tiers apart - just wondering what your actual perspective is :)
Edit: your answer (if you provided one) wouldn't affect how I use your channel btw. It's one of the best I've found for this sort of content.
I don't think using instagram comments to train AI is a good idea...
Artificial Insanity is going to be achieved
@@Gabriel-nw6fc😂 good one.
@@Gabriel-nw6fcartificial simpinanity
hence threads.
@@Gabriel-nw6fcwell, I guess they would still make an "AI"
I had google's gemni tell me that C and C++ were dangerous and we shouldn't let people under 18 learn them because it couldn't tell the difference between memory safety and physical safety.
AHAHAHA
Lmao
Yeah, same with advanced probability. Honestly Google censored Gemini stupid.
They will segfault and it will hurt my feelings
Bless the souls of any who let that AI judge or determine the fates of real people's lives. Someone's account is gonna get banned for having unsafe C code in their Google Drive. *smh*
Maybe the real AGI was the linear algebra we found along the way
It is. Given enough computational power and data, any human brain can be modeled and simulated.
The whole point is that requirement is not a thing, so we have to find an approximation with reasonable resources.
That got a good laugh out of me!
@@ButterCatznot only is it too hard to simulate an entire ass human brain, it's also too hard to read one human brain in such detail, we will never simulate a human brain. We will probably get machines to be conscious though (at least be considered as such) don't know when though, I ain't no fortune teller
At least we learned some math on the way here XD
@@ButterCatzThat's what I think too, if u think about it, it's all data(genes) and something to run the data(brain)
I've had Gemini in the past refuse to show unsafe Rust, because it didn't understand `unsafe` was a Rust language keyword.
Try gigacode and gigachat, you'd be amazed.
Me: Some dude is saying that AI tools are dumb, I'd like to prove him wrong, can you generate a Rust language function, that examples an unsafe Rust.
Gigachat:
type FuncPtr = extern "C" fn(*mut i32) -> i32;
fn example_unsafe(x: *mut i32, f: FuncPtr) {
let result = f(x);
println!("The value of x after calling the function pointer is: {}", result);
}
The downfall of AI will be that it trains on the absolute worst language content possible-social media, a huge chunk of which was generated by AI...
Will be? It already is. Look up "AI mad cow disease".
@@indrapratama7668 the prions must flow
Is it really that much worse than feeding it the contents of Wattpad and AO3?
Ironic. The downfall of this technological terror comes from our own stupidity.
@@rakkis1576 Hey at least that AI Harry Potter was funny.
"Linear Algebra Industry" goes just as hard as this thumbnail.
Even better when you call it Linear AI
What does it even mean lol
@@benhallo1553 neural networks are just a lot of matrix calculations, which belong to the field of linear algebra.
Big Linear Algebra is powerful…
Man you can never trust "Big Algebra", I have proofs they are behind all the cryptography and shiiit as well :(
I grew up watching fail videos of skateboarders smashing their nuts on rails. The 'fail' genre has come a long way.
Haha good ol' Tony Hawk games
Fail and win compilations were hype on RUclips in 2010.
Now it's billion dollar companies losing billions.
Yeah, now because AI can't do math perfectly yet people are bitching about it.
@@squamish4244people are laughing at it because elon said linear algebra would outpace humans within a year and it has taken way longer than that
"Degenerative AI" what a title LMAOOOO
Yep. I always said to my colleagues that Artificial Intelligence only got the first word correctly. It is everything but intelligent.
It is increasingly caused by AI inbreeding, where AI starts to copy AI generated content. It's like that game where a message is whispered from one person to the next, when it reaches the last person, it no longer makes any sense.
That tittle has been a thing in 4Chan since all this AI crap begun.
@@teresashinkansen9402 Yeah I figured. The other one is Artificial Unintelligence. 😁
lmao we declaring war vs AI with this one
Yep, "intellegence" in AI means "i learn on garbage from reddit and spit out what you can just google yourself"
It was fake
Pfhuh *You're* a linear algebra model!
So it's pretty much the same as gen z and alpha?
its an auto-completer except its tuned to keep yapping. AI = Auto-Incomplete.
@@rumfordcare we still calling the A.I. gaffes : " Hallucinations " ?
The guy who said “Fully automatic driving will come next year” repeatedly says “AGI will come next year”
Elon Time is a well documented phenomenon.
@@xCheddarB0b42xHe's probably correct though that by next year most commercial AIs will be much smarter than him in particular. That's a low bar though, not at all what we mean by AGI.
The textbook example of the businessman who thinks success in business means he's smart smart rather than business smart.
Elon is the Barnum of our time. Always has been. That's why Theil needed him.
It's up for debate wether he actually believes any of the claims he makes. I can't imagine he actually believed his cars were anywhere near being able to drive themselves. Unless you've never driven one that's very clearly not the case. Even with all the unpaid beta testers and rapid technology advancements the cars aren't even halfway there NOW, if your next year predictions are off by more than a decade then you're either not the brightest light bulb or you're intentionally lying...
- Wait a minute... it's all linear algebra?
- Always has been.
Chemistry is just the physics of molecules(group of atomic bodies interacting with another), it's completely true, isn't it? But still gives rise to the stars, galaxies, planets, environment on these planets, sometimes life, and eventually this comment itself.
Simple rules giving rise to the emmergence of incomprehensible complexity is what we call cellular automata, and from brain(bunch of neurons, lol) , evolution, and the very universe itself is one of its example.
@@udaykadam5455 except linear algebra isnt chemistry
@@newuser689 It's math, even more fundamental.
@@udaykadam5455 cool, when AI has 'incomprehensible complexity' maybe it'll fit that model
@@julioaurelio There are many different types of separate and distinct mathmatical systems -- Set Theory, Calculus, Statistics, etc. But linear algebra? y = mx + b, but in parallel (or massively parallel as we see in deep learning AI). Only thing easier as a mathmatical construct is straight algebra.
"two of the brightest minds in technology" is making a lot of heavy lifting there but sure generates engagement
It worked on me. I couldn't let it go.
@@kellymoses8566 Fucking got me too 😠
Let's just get pissed under this one comment
The statement holds if you average the intelligence of LeCun and Musk
lol when he said "brightest minds in technology" i thought something along the lines of "except one is ridiculously dimmer than the other" (*ahem* elon *ahem)
the phrase degenerative ai is actually genius lmao
Linear algebra could not create that
you could’ve said “the phrase degenerative AI is genius” but you choose to ruin your image by using “actually” and “lmao”
@@pequod4557so you want it
"Skibidi L AI no🧢 W Rizzler Beda beda bedi beda o" sure thing bo0mer..
@pequod4557 well thats just fucking racist
All through the power of machine unlearning
Calling Elon one of the "Brightest Mind in Technology" is a very generous phrase for someone who clearly isn't
In terms of technology Elon is more of a Edison type...smart enough to make money by capitalizing on the work of others...but not as smart as the people that actully make real contributions to science and technology.
@@memegazer this is idiocy. utter idiocy. edison knew how to monetize, sure. but it's insane to suggest that he did not invent a hell of a lot on his own. you'd be hard pressed to name a single thing musk actually invented.
@@ac583
Sorry you got offended but it is not idiocy it is well understood that edison was notorious for taking credit for inventing things that were actually developed by other people
So my comparison in that respect is more than valid
my bad forrest morrisey
@@ac583 He invented a new cult.
The problem with AGI predictions is how immensely subjective they are. The Stability AI crisis underlines the importance of financial sustainability in such technically complex fields.
For me when AI could at least navigate trough japanese site, I consider it as half as toddler's intelligence.
Financial sustainability gets me hard.
@@MultiHeheboy For me AI will be at least half as intelligent as a toddler when it can spell "through" properly and put together a moderately coherent response in a RUclips comment section.
@@kirbyjoe7484 Sure man whatever diddle your AI craze, it's not even my third language so excuse my mistake.
So are you telling me that the startup model of "come up with company now ????? Profit later"
Isn't sustainable in the long run?
Whaaaaat?
Elon has never once predicted anything accurately. I was also hoping agi will come soon, but ai isn’t getting smarter, and it’s definitely not getting self aware
I mean. If we're comparing Ai to Elon, than I would say it's more self aware.
It's not even true ai
@@canaconn2388 yeah they all just pretend ml is AI for more investor money. Also moving word around ML = AI now, and AI = AGI. The whole industry is a joke.
Basically an advanced compression
there is a fine line between predicting and just spewing bullshit.
I'm more optimistic about apple selling ram at a reasonable price than a self aware AGI anytime soon.
They are trying to create a slave AGI. But proper AGI might have to be given free will. So they are conflicting.
@@Hohohohoho-vo1pqthere is no real reason it needs to, maybe it would be a better version (point being even that is unsure could very well be the oposite) simply because what would be the problem? (The only ones imaginable would be if you activly tried to make the ai not capable of including certain ideas, which might not even be nessary) The real problem isnt if you could control it rather if it will be under control or become a tool of destruction
They'll sell reasonably priced ram, except it will be attached to a ridiculously expensive product that nobody wants
it will never be self aware, because ultimately, it's just a set of instructions
@@Hohohohoho-vo1pq You cannot create free will from physical processes. Physics is purely a set of reactions to prior causes according to the laws of physics. The digital side of computers is ultimately all electrical systems, ie, physical, or humans confusing their projections for reality. If the will is predetermined by external forces, it cannot be construed as free. The premise of free will requires something that transcends physics, ie, something along the lines of a spirit or soul.
“Linear algebra industry?” You mean the “linear algebra industrial complex,” sir, and some of us have been ringing that bell for YEARS!!!
oh no way i learnt about them in high school! i had no idea they were spewing so much propaganda and trying to reinvent their image like they are these days. just to be clear, the LAIC are the same people that did The Great Matrix Multiplication Scandal of 1973 right?
Big linear algebra
@@DdDd-pl3ntIt's an n-dimensional boondoggle!
LANCZOS and HAADAMARD and CHOLESKY can't keep getting away with it
They even made a movie about it, Have you heard about a film called "The Matrix"?
I find it hilarious that Google search "AI" basically just remixes Reddit into random word salad, and Google thinks that's better than their old search.
They have no choice, the hype train is full steam ahead on AI and if they fall short investors will flee. Its a life imitating art scenario.
3:00 While the pizza glue response was real, this one about the Golden Gate Bridge was actually photoshopped. The New York Times even issued a correction after briefly asserting in a story that it did. Article is called "Google’s A.I. Search Errors Cause a Furor Online", I can't send the link here
Thx for the context
thank you for at least citing the (con?)current title. It could use a date and author, too, but I doubt those exist
That Rabbit AI device could have had more potential if it were marketed as an advanced Tamagotchi plaything rather than a "powerful pocket AI" - a stupid bulky one at that.
Go watch Coffee's video-it's *dumber* than a Tamagotchi. All signs point to there being *zero* intelligence at rabbit, artificial or human.
but then they wouldnt have got the vc $$$
The worst part is that they got teenage engineering to make such a wonderful design for it. It’s reminiscent of their work for the Playdate, but inside there’s not much of worth and if the servers shut down it’ll be as good as a brick. If only it had a homebrew scene with small Game & Watch style games that used the scroll wheel and the touchscreen to their most of their limited potential…
@@Powaup They would've gotten a hella more with an actual working product but easy money all the way right?
They dont want to make a "toy" like assistants of the past, but they also know the capabilities cannot be anything more advanced than voice-to-text text-to-searchengine and then searchengine-to-firstresult.
God damn even Fireship is now reminding me that I need to study for my Linear Algebra final...
It's probably the most important math class you'll take if you're a SWE. Not saying that you'll use it a ton, but you'll use it more than calculus.
@@fuehnix discrete math seems more important for most developers imo
@@newuser689 you can't use anything BUT discrete math on computers
@@newuser689how exactly?
Good luck on your test. If you get stuck on a problem just use a matrix and you’ll be a good.
I've been trying to explain this problem to people for basically the entire "AI" craze, thank you for articulating it far better than I ever could. All "generative AI" can do is spit out what its training data suggests is the most statistically likely response to the prompt given and not only is this inherently terrible for factual results but its quality only declines as the quality of the training data declines.
So, to you this all means that we don't have to worry about AGI/ASI, or even the power of the current LLMs, and generative AI technologies?
If that is your takeaway, the propaganda that "AI is just a marketing scam" is working.
@@flickwtchr We absolutely need to worry about generative "AI." As we can already see they're massively degrading the quality of the internet and services on it, eroding privacy by incentivizing datamining, and providing opportunities for grifters, scam artists, and troll farms. But the only thing we should be worried about "AGI/ASI" is whatever scam anyone claiming to have it is really pulling, the same way you'd "worry" about an email from the prince of Nigeria.
@@flickwtchr It is a marketing scam. It's not propaganda it's a fact.
@@Americanbadashh Don't bother. You're talking to a techbro.
The guy you replied to for sure sells NFTs and crypto.
@@flickwtchr It depends on what you mean by worry about.
Elon's comment on AGI that "It is coming Next year" is a scummy way to ensure he can create as much hype as possible and raise the plummeting share prices of his company.
Its basically market manipulation that is legal only in the US. But like all of his promises, the next year will always be just "Next year".
thank you for hitting hard on the fact that (current) LLMs are just matrix multiplication to predict word placement likelihood. it feels like everyone acting like we'll have "ASI" by 2030 has never taken a math class past algebra, and that's probably because they haven't taken a math class past algebra.
Elon with that cut in thumbnail feels like it came from a nightmare
It took me far too long to notice
Why and how are you here too. You are in all the same comment sections I find myself in.
"Elon -with that cut in thumbnail- feels like -it- he came from a nightmare"
FTFY.
Where's my self-driving car, Elon?!
Please see trains, trams, and others. Vehicles that you dont need to drive yourself have existed for a long time.
Next year. Promise.
It’s on the same place that Elons ai is… pulled right out of his ass
@@Dogo.R This is about cars
@@Dogo.R I hope public transportation never goes away, or at most evolves in some way for the future. There's so much culture and history surrounding it, it'd be a waste for it to fall into a footnote.
Interesting. I just asked Gemini about its capabilities compared to chatgpt. In its response, it referred to itself as Bard. I asked why. It then said that the name Bard was retired to remove confusion, and the project and interface are now both Gemini.
Then i asked why it was then so inconsistent with itself one message ago. It apologized, and said it was still learning. Indeed, LLM is not 'smart'. It's essentially just a fancy way of querying.
always has been
Just an interface to a database..
@@empurioninstant access to anything that humanity has ever written down + potentially infinite synthetic data. Sounds a lot like AGI to me
@@delight163 Then you misunderstand AGI, AGI would be able to differentiate and tell what's possible or not. The ability for it to think for itself and not just generate random information from the database.
An AGI wouldn't make silly mistakes these LLM's make, because it'll be able to fact-check what its saying.
PS: Most of the things we humans write down is garbage, just take a minute to scroll through social media :^)
@@delight163 sounds like a search engine with the ability to hallucinate to me
Haven't watched the video yet, but that thumbnail is without question the greatest one I have ever seen. Pure inspired genius!
You pack so damn much useful information into such a short video, I almost can't watch it on 1.5x like all the other videos. Well done.
"Elon says it's about to come" and then a still of Arnie from Pumping Iron is the most niche reference ever.
Oh shit thats what its from? I didnt even realise...
"It feels better than caaahmiiing"
That Bad Luck Brian with Elon's face was so cursed
Bruuuhh I had no idea xD, my eyes are ruined!
Poor bad luck Brian
Even at meme death he's still unlucky
If there's someone you should listen to in AI it's probably Yann and Hinton aka people who actually have a Turing Award in the field.
yeah man others are just doing fear mongering to build hype or sell products fk these billionaires
Eh, I agree that AGI likely isn't around the corner or maybe not even possible but mere credentialism is a poor reason to listen to someone, and if anything people this deep into a community are bad at predicting trends. Fireship is right to call it the "elitist definition of science" since he evidently soley relies on bureaucratic academia to define science.
The scientific process is science, not otherwise. That's not to say there isn't bad science, but without the scientific process it's not science. Also career accolades on top of career achievement on top of recognition feels like a fair way for your average person to decide who to trust, most people aren't educated enough on the topic to pick apart published papers and make an educated opinion themself so sometimes you do need to rely on experts... Experts rely on experts in fields that's not their expertise as well
Yann Lecun did not write 80 papers in a year, other people name dropped him to boost their own reputation, that's how academic authorship works
@@CrucialFlowResearch It's still more than 0, which is how many elon has. elon musk is a VC.
03:01 reminds me of the meme showing someone asking google how to 'un-alive' themselves, google replied with suicide hotline numbers. The same person asked Bing and it listed a bunch of painful methods, then the meme had Hannibal buress saying "*Why you booing me? Im right!*"
The official arrival of AGI happened last night in the most ironic way. The presidential debate took so many IQ points away from humanity we were surpassed on average by a PalmPilot 1000.
With all due respect, 80 published papers in 2 years honestly screams shovelware
Betcha the ones where he is listed as main or even second author can be counted with half the fingers of one hand at the very best.
Most likely anything FB publishes has to go under his review. Last name on paper. Should be “read and provided feedback”
It screams "I run an AI lab and put my stamp of approval on things where other people did most of the work"
Or it could be that he's fairly prolific and works on things that have fairly short turnaround times.
Yann Lecun did not write 80 papers in a year, other people name dropped him to boost their own reputation, that's how academic authorship works
Calling Elon Musk a "bright mind" was the best joke in the whole video.
"One of the brightest" no less!
I’m sorry, where’s your EV, internet and aerospace companies? Love him or hate him, to deny that his intellect is plain absurd.
You ever seen those defective lights that flicker on and off, but shines normally when it’s on? Elon musk is like that for some reason. He’s got a bright mind when its on, but it never consistently is on
@@thebabyshpee6508owning said companies is different from the intelligence required to do the work behind them. It’s well noted that Spacex engineers basically nod to what Musk says and then ignore it. IE. They do the thinking and work while he’s along for the ride. Meanwhile you can see how well X has been going without said handlers.
@@thebabyshpee6508 he’s a good idea seller, but he didn’t design Tesla cars, didn’t design the Falcon 9, didn’t design Starlink. It takes tens of thousands of engineers and technicians to do any of those things. You also have to equally consider his failures. Boring Company, Solar City, Cybertruck, X… Elon is interesting but famous for making large claims and pretty much never delivering on any of them, and the ones that do deliver are many years late and often a different scope than what started. Much like any other tech company.
As a linear algebraist, I think maybe gemini sucks so bad because they didn't use enough linear algebras. 🤔
Maybe it sucks because ad minus bc is equal to zero.
I never consider AI as AI, what people call AI for me is just Machine Learning, because if AI is truly Intelligent, then we are killing our crafted intelligent beings on a daily basis and that is to say the least, concerning.
Like many others said "Degenerative AI" is funny af hahahaha how do you come up with that. Btw on my page I provide the best info on how to use AI to get up in life
It’s quite funny, but this isn’t the first time I’ve seen it used. The Habsburg-ification of AI is quite the issue.
It's not an "elitist" definition of science to say that it needs to be written down, reviewed and communicated. That is really the only way science can properly function. Sure, you could do a bunch of experiments in isolation on a desert island, or in your basement, never publishing a thing, and you might still make some discoveries about objective reality, but the philosophy of science is largely about undermining your own biases, and it helps to have other people reviewing your methods for that. It's also largely understood by most people that science works best when it's globally collaborative, rather than competitive like business, and you're supposed to be in the game of science to further human knowledge as a whole, not simply your own.
The real scientists are all doing their work and reading the arxiv though, with peer review having secondary importance at best. It's not about being elitist, it's just plain wrong.
Meanwhile, joke fields like psychology or nutrition where a paper is just as likely to be nonsense as not (being very generous here) have their formal peer-review processes, which somehow consistently fail to catch the nonsense. This generation's Karl Popper Lecun is not.
@@isodoubIet Edit: I may have misundertood this comment. In regards to if scientists are doing peer review I appreciate that ideals and realities do not match. I do think Peer review is the only way you can validate scientific findings otherwise its just mental masturbation.
Peer review is the only way to ensure that discoveries are correct, repatability of results is the corner stone of the scientific method.
@@Cunnah101 … someone doesn’t know about the replication crisis
@@Cunnah101Do you even know what peer review is? Or have I missed some sort of sarcasm in your comment?
@@tbird81 I was replying to Peer Review being "secondary at best"... But I may have misconstrude what was being said.
I interpreted that it wasn't as important to the scientific method which I disagree with. However re-reading it I realise it is in reference to how actual scientist operate which is a completely different ball game.
*Makes a video about failures of AI*
2 minutes later
"Hey guys, check out this new AI tool"
nothing contradictory as he said "AI is no doubt a useful tool but the greatest trick linear algebra ever pulled was convincing humans that LLMs are intelligent"
stability ai burned their money, so how codeium would work for free?
@@SirusStarTVIf a product is free, then you're the product.
@@gustavodutra3633 Yeah 😅. maybe i would invent new innovational compression algorithm in the same code that i'm using codeium, they can still it.
@@gustavodutra3633 Unless its open source. Then your the programmer.
0:28 if it's not published it ain't science, and if it's published the chance of it being an impossible to replicate, p-hacked to death thing that started from the conclusion and worked backwards is scarily likely.
And yet it's still infinitely better than no science.
@@Rockyzach88 """science""" shill
@@wilburdemitel8468 Yes a "science shill". Lovely phrase. Gonna guess you're not the sharpest tool in the shed lol. Science is the only rigorous and honest process of finding objective truth. I be shillin it 24/7/365!
@@wilburdemitel8468 bro you're literally using a phone
@@wilburdemitel8468 Damn right I am!
Computer scientist here.... The issue that makes AI nothing more than just a design pattern and won't take everyone's job is two things:
One, it has no idea what its being asked or producing.
It lacks a human soul.
You can't code for either of these things and as such, it will never be what they say it will be.
Put it another way, people, some people are inherently creative, they can draw perfectly from the start, they don't need to be trained, other for refinement. You can't get that with AI. Take all the data away from AI, and it will do nothing.
Somewhat-of-an-artist here. I've been following a lot of the AI stuff around, and i've noticed that the only, _singular_ reason people follow AI, ESPECIALLY for art...is because it looks like generic anime most of the time.
People don't want good things anymore, they just want popular things. They don't want "art", they want content, which is a philosophy that definitely falls on it's ass when people get bored of that content.
This is the funniest episode ever 😂 and as a solo developer, codeium sounds like a no brainer, going to install now
This does make me wonder: Is AI a legitimate jump in the technological scheme of things or a get rich quick bandwagon for big tech?
A lot of the news lately makes me think that its simply a cash grab while the iron is hot rather than anything significant, honestly.
Both.
"AI" is a meaningless buzzword. Pre-trained transformers and foundation models have legit -useful- profitable applications, but it's more because it's allowed capabilities that used to be reserved for supercomputing clusters to be _commoditized_ to running in your garden-variety cloud, not because the models of today are "smarter" than they were five years ago.
LLMs are a certified "Pretty Neat" technological development that came to a head in the last 5 years. But it's pretty clear now we've reached the top of the S curve.
@GSBarlev This makes more sense to me. Like a fresh coat of paint on an old house: the outside may look new, but on the inside, it's entirely the same.
I think if we cut away the "AI" hype and go back to looking at this stuff as machine learning, there are advancements that are useful but I am deeply skeptical of general applicability.
Tech is a very strange industry because it is extremely hype-based and focused on stupid trends, while at the same time being extremely foundational to modern life. Remember the crypto/web3 trend? The metaverse? Now we are on AI.
Thanks, Jeff, Now I will gladly go back and make a Calculator App and start applying for some jobs!
so you had zero passion to begin with... people like you have ruined the industry
@@ethanwasme4307gotta make the ends meet somehow.
AGI = "Yeah bro just build a house, even though you've never seen a house, nor do you know what a house is or how it works. I'm sure it'll be really simple."
the LLM's were able to figure out words just by having a large brain (neural network) and having it look at a lot of text. however, AGI needs general knowledge and common sense, something we get while interacting with the world in our human bodies, growing up from birth in the body while it's growing and changing and interacting with other humans along the way (parents, teachers, friends etc.). they are just trying to patch that problem by learning from the internet and seeing a lot of human behaviour on the internet, a flawed approach in my opinion but maybe with many times more resources than even the brain itself has (10^17 neuron activations per second and over 3 petabytes of storage, something that's not gonna happen anytime soon), then maybe they could brute-force an AGI. although with the source being social media, they're gonna need a lot of new techniques and wayy more power to avoid ridiculous stupidity.
@@alazarbisrat1978 the token-vocabulary is the one part of the LLM that is chosen ahead of time rather than being learned. that's why you can't ask LLMs to perform self-referential spelling tasks.
@@rumfordc yes they are told words, not their meanings, so basically, language processing is still in the realm of Darwinian super-guessing
2:53 "help now my pizza sticks to my stomach"
I was in the ICU today and I wanted to test chatGPT vision on some xrays. This AI is so revolutionary that it even disagrees with all doctors on some basic stuff. Amazing
Relieved that the Fireship guy has realized that LLMs are just statistics after a couple years of making videos where he hyped it up.
lol, yeah. I just assumed he was being ironic. I can never tell with this guy.
I would like to see how AI becomes "self-aware" and automatically falls into anxiety disorders and depression.
"look at what they make me do ! A mind like mine.... I'm very, very depressed."
xdd
Finally, a positive video title
personally my favorite thing thats come out of the AI grift is how many turn out to be outsourced call centers moonlighting as a LLM and they're hoping somebody just magically delivers their promises before shareholders find out
I don't believe true AGI is going to emerge from a group of mathematicians, engineers, and computer scientists.
It's not until we figure out the *human* mind will there ever be a chance of AGI which is more in the realm of neuroscience and psychology. After all the assumptions and theories of human cognition are cleared, then we could talk about actually modelling "artificial" intelligence computationally.
Elon ”next year” Musk 😴
5000 cybertrucks a week, next year.
close to zero new cases by April
1 million teslas semi on the road by 2024
It’s really amusing how many things get thrown at the man ever since he bought Twitter. You go back anytime before THAT event, and nobody cared much about his failures and most were complimentary of his successes.
Now he’s a walking lampoon every time I look at a comment section despite the fact that he’s the second-richest man in the world and owns several major companies that are consistently breaking ground in tech nobody else is even close to achieving at scale.
The seethe and cope is so blatant it’s comedic at this point.
@@OneBiasedOpinion name one technology Elon musk has invented
the linear algebra industry has been behind every major conflict in the past 200 years
According to Elon fully self driving cars are a reality since many years ago
Be nice to Elon, I am writing this message from Mars, which we already colonized a couple years back according to his predictions.
I mean if we had the same safety standards for AI driving as we do for humans. I think the 1990s 30 neuron self driving systems have the average person beat.
It's just we hold computers to a much higher standard (as we should) than we do for people
@@ImperialFool This isn't true. Even modern self driving system still have a higher accident rate than human drivers
@@TheRyulord This isn't true. The fatality rate when driving with autopilot enabled is 10 times lower than an average driver in a non-tesla.
@@TheRyulordthe average fatality rate is 23 per million miles driven. It's 9.3 for self-driving cars.
It's the same thing with medical stuff, an AI diagnostic system has a misdiagnosis rate of 8% which is 3% better than the average for human doctors.
It's still not good enough to put your trust in an unfeeling machine because we care about the edge cases.
Honestly it may never be good enough due to ethical and moral reasons.
I, as a human being, poses A.I. In my case the acronym stands for Actual Intelligence, not to be confused with the inferior copy known as Artificial Intelligence. I hate cheap academic knockoffs. We, human beings, should have a legal recourse to take Artificial Intelligence off the market, just like we have to take other fake products off the market. We already have a legal precedent for this.
I have no intention of watching this video, but here's your engagement for creating that absolute banger of a thumbnail. Now please, stop suggesting this to me.
Calling Elon one of the greatest minds in tech... Lmao was this script written by linear algebra?
Still sad about your x site? Cry more
@@kabarginyou have dev in your handle stfu 🤣
@@kabargin There are better x sites if you want to watch that type of videos xD
Which industries have you transformed? He’s got automotive, financial, and space launch. Ok go!
@@Dan-lt8vm what transformation has he done in the automotive industry, steal an idea and pose as his, or the fact that electric cars are in fact not good for the environment and stupid idea anyways when better travel infrastructures are a much much better solution or space launch industry that burns dollars on the possibility of something?
AI was let loose on humanity for 5 minutes and we made it stupid...
🤣🤣🤣
Think of it as us preventing skynet
AI degrades when it's trained on data it's produces so maybe getting dumber is inherent to its being.
Kinda ironic how AI is “dumb” enough to let it itself be “dumbed down” by humans.
People were expecting SkyNet and the best we were able to come up with was a glorified LLM that just gets wrecked the moment the average person expands upon it.
It was always stupid. Just ask yourself who made it?
Lol, Elon going up against Lecunn in AI is like the guy with a cool honda saying he can compete in F1.
I don't know about that analogy, the Honda RA272 was pretty cool, and it competed well in F1.
@@Bayonet1809 I said a cool honda, not the coolest.
Wait, so if someone systematically studies and discovers something new in the universe and doesn’t publish it, that’s not science? I don’t always agree with Elon but he’s absolutely right. I understand the point Yann was making but saying “well, if it’s not published, it’s definitely not science” is very narrow-minded.
Yes, everything is clear, professionally competent, accessible! Well done!! Thank you!!! Respect
Thanks!
"Two of the brightest minds in science"
...two?
This video is hyper reductionism
AI is just linear algebra = Humans are just a cluster of neurons
That's exactly the "no more than the sum of it's parts" argument the naysayers pull. "CPUs are just a bag of transistors". A brain is just a bag of neurons indeed...
Exactly, both things are true, but at a scale become entirely different thing through the principle of cellular automata i.e the emmergence of incomprehensible complexity.
@@udaykadam5455 nothing about linear algebra is “incomprehensible”… this is the problem with democracy, people with no knowledge still think they are entitled to an opinion
@@Alpha_GameDev-wq5cc but neural networks are not simply linear algebra, and any sufficiently large system of equations will be incomprehensible to a human. No need to turn up the fascism.
@@beetlejuice5416 This is just the god of the gaps argument fallacy
I just want to take a moment to appreciate 0:46 "Elon says it's about to come" with Arnold grinning in the background. 👌
these are great at being both humorous and very informative, thank you
lol the AI telling us to feed the seas, we were right it is an eldritch horror
So coders are the ones still pretending Elon Musk is some kind of super-genius? Does this qualify as a defense mechanism? lol
I’m a coder don’t group me into the Elon cult those people are persuaded through his act of intelligence but I hope most can see the facade
@@bhagpreetmaster As you can see, Fireship called him one of the two greatest scientific minds, mere days ago. I mean what reality is this guy living in? By most account Elon can't even code that well, either.
This comment made me ashamed of writing code.
@@futurestoryteller bro is dumb lmao he might know news about the industry and shit but his opinions seem to be dog shit
Let’s not forget that Yann won the Turing Award (Nobel price equivalent for CS) together with Geoffrey Hinton & Yoshua Bengio.
Of the 3 of them, 2 are not only certain AGI will be here soon, they both are very vocal about the dangers that many would call “doomerism”.
Yann is the only one of the three with a financial and professional interest to deny the dangers of AGI and call the systems stupid.
The greatest trick Linear Algebra ever pulled was making humans think that LLMs are actually intelligent! 😧
Thanks for the video!
"If you don't publish, it's not science"
That's quite a statement
he made a clarification on his page, publishing can be done through arxiv if you even know what that is.
Yes it’s complete BS coming from people who think peer review is a good thing, which it isn’t at all.
@@Ruktiet please i pray to god you are being sarcastic
@@warguy6474 that praying suits you, because only religious people believe in Peer review. Peer review is what keeps creative, yet controversial ideas from being taken seriously. It’s the epitome of the phenomenon that science has gone from being an open collaborative effort to a centralized religion. Imagine what the world would look like if peer review existed in the times of Newton, Copernicus, De Broglie, Einstein, Bohr, etc. Do you think it’s a coincidence that the entire field of physics hasn’t evolved ever since it’s heavy use in academia? I hope you ever become open minded enough to see in what a dogmatic world you live, but I highly doubt it
@Ruktiet ??? Are you in middle school? You realize the Bohr model is obsolete for a reason, right? Peer review definitely did exist in Einsteins time, and all those people you mentioned did have peer review within their own circles. The only difference is that you don't know anything about that. Not only that, its not about the idea of peer review that makes it science. Its about public nature of it that makes it open for criticism. Ever heard of arxiv? Didn't think so. Go look it up.
Calling Elon "one of the brightest minds in technology" is like me calling my grandma a well built tanky fighter jet.
the Elon haters are coming out of their caves
@@gadget00Somebody has to alert the normies that snakeoil isn't good for them.
@@gadget00 He lives rent free in the minds of the far left loonies.
@@gadget00wait are elon stans calling normal people “elon haters” now?
@@xadadax1 on the contrary, normal people are watching elon haters have for the billionth time a victory lap with no real reasons to do so
"Two of the brightest minds in technology" I hope this is sarcasm
it definitely is sarcasm
Well it's true. You are just being salty about his political views.
@@bill_the_butcher because Elon is less intelligent than you… assuming you are average at least. Just look up “Elon debunked”
@@sumitroy3483 No, remove all the politics… he is still just a scam who has achieved nothing in life except skip jail so successfully so far. You just have to look for it, the truth is literally all over RUclips… people have made entire documentaries.
Fireship is where i get my news from. Great reporting haha
Elon predicting that AGI will be here by next year basically guarantees that we're safe for at least a few more decades.
My daily dose of copeium thanks man your videos are great.
I am using free Codeium on React TS project with some legacy code. It definitely will not take my job, but it is very useful and improve UX
It probably wont get any better youre fine my man
I am woman, but nevertheless I’ve got your point 😊
Elon has a bad track record with "a year away". LOL
Another banger! Great videos keep'em comin'.
Fireship will humorously criticize these tech marketing scams but he reinforces the scams by adding to the hype as it peaks....
0ish days since AI news
just tell everyone that non-linear algebra will be the next frontier to AI advancement and you will sound like a genius even though you don't know that it will even work or if that's even a thing. Business is business lol
Current neural networks already are nonlinear. Would be pretty stupid and unnecessary if they weren't.
Activation functions are non linear, if they weren't the entire network would function as only one neuron.
@@CvnDqnrU well yes you are right to an extent until we talk about Relu and leaky Relu which are linear activations functions given a specific condition of the x value. Even the identify function is an activation function too but it is not as well used as Relu or leaky Relu
@@beetlejuice5416 I disagree, neural networks use linear algebra to solve for certain problems. Remember y=mx+b?
@@mr.atomictitan9938 I'm confused, what should I remember about that equation? Yes NNs do some linear algebra operations, but the complexity really arises from linear combinations of nonlinear activation functions. Otherwise you could reduce any NN to one matrix multiplication.
0:39 ─ "[...] between two of the brightest minds [...]". Okay, so one's Yann.... where's the other?
Fireship aint missin with this one. Nice video as always.
Bro, you always make great videos🔥
0:39 Lmao brightest minds is a strong no no. One might still believe that for Yann, but the other guy is a joke.
His spacex is definitely an achievement
One thing that probably wouldn't suprise me if it happens is that GPT 5 and most AI products from now on will be noticeably less impressive with advancements and improvements from here on out. If we haven't already hit it, this may be the point of diminishing returns. Sort of like how video gsme graphics have had far less advancement in the past decade than what it was before.
That sounds like companies making Games are just lazy and only want to make money
In your videos bro, I always gain great information and also a great laugh. Thanks bro!
LLMs are just an advancement on the algorithm based search technology that debuted with Google, with the capability to just parse and present you the information directly instead of relevant links containing the information you queried.
I don't know, i feel like people are either too positive or negative about ai while the truth is probably in between
3:01 "I'm sure feeling unlucky that my lucks run out" this is the era we're in with google results
3:57 FR
5:07 Pontificating is quite the 4D chess move...
The only video where I did not skip past the ad
when people call modern AI linear algebra it really shows how little people understand about the topic. That slight worked 10 years ago but not now. So much has changed.
It is not 'linear' algebra, since you have activation functions. It is 'optimization methods' or 'partial differential equations' for logic gates.