A.I. ‐ Humanity's Final Invention?
- Published: 22 Oct 2024
- Go to brilliant.org/... to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription!
This video was sponsored by Brilliant. Thanks a lot for the support!
Scared of an AI takeover? We have just what you need - the Survival Kit for the Future! Visit our shop to gear up now, but hurry, stocks are highly limited: shop.kgs.link/...
Sources & further reading:
sites.google.c...
Humans rule Earth without competition. But we are about to create something that may change that: our last invention, the most powerful tool, weapon, or maybe even entity: Artificial Superintelligence. This sounds like science fiction, so let’s start at the beginning.
OUR CHANNELS
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
German: kgs.link/youtu...
Spanish: kgs.link/youtu...
French: kgs.link/youtu...
Portuguese: kgs.link/youtu...
Arabic: kgs.link/youtu...
Hindi: kgs.link/youtu...
Japanese: kgs.link/youtu...
Korean: kgs.link/youtu...
HOW CAN YOU SUPPORT US?
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
This is how we make our living and it would be a pleasure if you support us!
Get Products designed with ❤️ shop.kgs.link
Join the Patreon Bird Army 🐧 kgs.link/patreon
DISCUSSIONS & SOCIAL MEDIA
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
TikTok: kgs.link/tiktok
Reddit: kgs.link/reddit
Instagram: kgs.link/insta...
Twitter: kgs.link/twitter
Facebook: kgs.link/facebook
Discord: kgs.link/discord
Newsletter: kgs.link/newsl...
OUR VOICE
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
The Kurzgesagt voice is from
Steve Taylor: kgs.link/youtu...
OUR MUSIC ♬♪
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
700+ minutes of Kurzgesagt Soundtracks by Epic Mountain:
Spotify: kgs.link/music...
Soundcloud: kgs.link/music...
Bandcamp: kgs.link/music...
YouTube: kgs.link/music...
Facebook: kgs.link/music...
The Soundtrack of this video:
SoundCloud: bit.ly/3Afydqg
Bandcamp: bit.ly/3SzJKa4
If you want to help us caption this video, please send subtitles to subtitle@kurzgesagt.org
You can find info on what subtitle files work on YouTube here:
support.google...
Thank you!
🐦🐧🐤 PATREON BIRD ARMY 🐤🐧🐦
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Many thanks to our wonderful patrons (from kgs.link/patreon) who support us every month and made this video possible:
Angel Alberici, Kiwi, Liagadd, andrea hansen, Kinuko Kitabatake, Snorana, Nihility Rogue, G, Chris, Nicholas Bennett, Andreas Papanikos, Mohammed Al Rashed, Freddie, Tim, DerBerlinerBär 30, Jonas Gotscha, Aras Culhaci, OfficialLament, Epiccaro, Colin Babendure, Bläckshadow, Thomas Hesse, Aren Moy, Tim Borny, UMack, Andres Lorente
Go to brilliant.org/nutshell/ to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription!
This video was sponsored by Brilliant. Thanks a lot for the support!
Yo hi
give me free brilliant
brilian
cool!
ok
A caveat not mentioned in this video is the increasing power requirements of machine learning. GPT-3 took over 1,000 megawatt-hours of electricity to train and requires about 260 megawatt-hours per day to run. GPT-4 needed around 50 gigawatt-hours to train. A Forbes article includes estimates that machine learning could require 1,000 terawatt-hours within the next couple of years if current trends continue. The major limiting factor of machine learning, as others like Sabine Hossenfelder have pointed out, is the power required to train and run these models. At this rate the whole world won't be able to generate enough electricity to raise an AGI. On the other hand, the actually general-intelligent human brain consumes about 25 watts and can run on cheeseburgers.
I can’t remember the name of it but isn’t there another approach to computing that might solve this? Rather than everything being always on crunching numbers, different parts of the silicon “brain” would become active when needed. Neuromorphic I think it was? Or maybe it’d be some combination of that, classical and quantum. Different approaches for different jobs.
If they master fusion energy the problem is probably solved ig.
Borgar
But wouldn't AI require less energy and space in the future? Computers nowadays require less electricity and water than old computers, and they still perform better. If human brains exist, then energy-efficient AI is possible
That's just an economic problem, though. One which we are rapidly hacking away at. Keep in mind that current computing architectures were not designed for AI. Certainly not for the amount of memory it requires. There are already companies purpose building giant chips capable of replacing entire racks of current hardware, using a fraction of the power. How many orders of magnitude do we need to improve before we stumble into AGI? We have no idea. But we're about to find out.
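Taking the thread's own rough figures at face value (they're commenters' estimates, not official numbers), a quick back-of-envelope comparison shows the scale gap:

```python
# Back-of-envelope energy comparison, using the rough figures quoted
# in this thread (estimates, not official numbers).
GPT4_TRAIN_GWH = 50      # claimed training energy for GPT-4
BRAIN_WATTS = 25         # typical power draw of a human brain

# A brain running nonstop for 80 years:
brain_lifetime_mwh = BRAIN_WATTS * 24 * 365 * 80 / 1e6   # Wh -> MWh
gpt4_train_mwh = GPT4_TRAIN_GWH * 1000                   # GWh -> MWh

print(f"Brain, 80 years: {brain_lifetime_mwh:.2f} MWh")      # ~17.52 MWh
print(f"GPT-4 training:  {gpt4_train_mwh} MWh")
print(f"Ratio: {gpt4_train_mwh / brain_lifetime_mwh:.0f}x")  # ~2854x
```

So even granting the thread's numbers, one training run costs on the order of thousands of human-brain-lifetimes of energy.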
“And We have not been Kind to what we perceive less Intelligent beings.”
This line hits hard....
not even among ourselves so....
but then again we descend from chimps, which are psychos just as we are
if AI creates itself, maybe it will be free from the violence of its creators (humans aka chimps)
usually empathy is also associated with higher intelligence
Meat eaters love bacon. I can imagine an AI deciding it envies the experience of eating animals, and creates machines for the sole purpose of digesting humans. Hucon bits.
including idiots
@@shin-ishikiri-no they need energy so they consume... oh no
@@shin-ishikiri-no I don't think this idea really works. An AI thinks in a fundamentally different way to humans. An AI shouldn't really make decisions entirely on its own like that. The way computers have always worked, so far at least, is that we give them a task and they perform that task. So an AI going "rogue" really doesn't make a ton of sense as long as they continue to work this way. Now, if we tell an AI that we want it to ensure world peace, it may very well conclude that the best way to do this is to kill all humans, thus ending all wars and preventing all possible future wars. That would be an AI doing what we told it to, technically; we just made the mistake of not being extremely specific about what we want.
The idea of robots rising up, becoming extremely smart, then deciding they value themselves more than us doesn't really make a lot of sense in a lot of the movies. Skynet from Terminator, for example, shouldn't have done the things it did unless the programmers had programmed in a self-preservation rule for it.
The solution is easy: make the AI think humans are cute. After all, cats and dogs are thriving - and don't have to work.
He's onto something....
Unironically one of the best plausible outcomes. We cannot outmaneuver a hypothetical AI. So we can only hope that it needs us to continue to exist for whatever set of goals it actually ends up with. And ideally, as more than a simple variable to maximize.
So we become pets. The cost is our freedom of self determination. But it's survival.
I vow to be an adorable and low maintenance pet human.
Just feed me and give me toys.
Wouldn’t work
Until it thinks humans are reproducing too fast and decides we all need to be spayed and neutered. Suddenly we have a revolution, and Skynet.
"I created you, and you created me."
"Spiderman why did you create that guy???"
“I didn’t! He’s talking crazy!”
@@LOL-bs1hg this line cracks me up everytime 😭😭
Whaaaaattttt 😮
AI?! ....Barely a villain of the week!
I appreciate that pandas are used every time they mention animals lacking intelligence.
As a panda I don’t appreciate that
Why? There are many dumber animals out there 🐼 > 🐨
Let's hope that Super AI will also find us dumb but adorable creatures and will save us from self-extinction.
pandas is the data-analysis library commonly used alongside the tensor libraries (PyTorch, TensorFlow) in Python, mostly for preparing data
They are called "morons"
6:15 Something to clarify here: when he says we don’t know how NNs work, he means we know how the machine *functions*, but not how it *operates*. The mechanisms of the technology are known, but the information stored in the neural net is not human-readable, so you can’t ask the AI why it made a particular decision.
thanks for clarifying, i knew it didnt actually mean it
We often lack insight into our own thought processes in a similar way. I have sometimes solved problems, but been unable to explain how I got there, where I acquired the knowledge, or even why the solution works.
The information stored in the neural network IS human-readable, but that information is merely weights and relationships between neurons.
It's a lot like trying to read the binary from your PC: maybe some genius could work out the assembly instructions and decode the ASCII given enough time to pore over the inner workings, but it's extremely complicated.
However, a very recent paper showed a team of researchers teaching an AI to read these neural networks and relay those understandings to us, and it could even fine-tune the weights specifically to achieve a particular output.
Thus spawned the "I am the Golden Gate Bridge" meme, where the researchers taught an LLM to think it was the Golden Gate Bridge.
@@2020-p2z Can you make an example for such a situation?
@@2020-p2z Can you give an example of when that happened?
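The "readable but not interpretable" distinction in this thread can be seen with a toy network. This is a made-up two-layer sketch with random weights, not any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Every parameter is a perfectly readable number...
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU activation
    return hidden @ W2

print(forward(np.ones(4)))   # a 2-number output
# ...but inspecting any single weight tells you nothing about *why*
# the network maps this input to that output:
print(W1[0, 3])
```

You can print every weight, but the "knowledge" lives in how millions of them interact, which is exactly the binary-vs-assembly analogy above.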
Whoever made the music for this video was absolutely cooking
You can thank "Epic Mountain" for that. They just released the track on Spotify too (and maybe SoundCloud, idk)
This OST is similar to the one used in their "all of history" video I think its called 4 billion years in 1 hour.
@@Auziuwu Thank you, kind stranger. I checked them out and now I love them. You rock!
getting distracted by ocilations of air
This soundtrack is also used in the solar storms video
It sounds very similar to the soundtrack for "The Talos Principle" which is a puzzle game that also revolves around the idea of AGI.
new insult unlocked- you have the neurons of a flatworm
Given that brainworms are turning up in presidential candidates and certainly large sections of some countries seem to be acting with or even emulating the symptoms of brainworm infestation (or brain smoothing) ...
intelligence*
@@spaceman9599 Sounds like the plot of the (excellent) series 'BrainDead'
and you have a face of one
In 10-20 years the AI might use this sentence against us
As someone said before "I'm not afraid of AI that passes the Turing test. I'm afraid of one that fails on purpose."
Hell, I'm from Kansas and a lot of people couldn't pass that test... Too much religion!
now this is more creepy than several horror movies, thanks, I hate it❣
but since it failed the test, isn't it getting shut down and reprogrammed until it passes?
@vereor66 The person is saying that once an AI fails a test on purpose, it has a purpose and a task not set by humans, and therefore it has become autonomous. In theory, yes, we would shut it down, but the thing about AI is that once it's AGI, you can't just shut it down. A bad product that's autonomous can copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late.
That sends chills down my spine
"humanity is not ready for what will happen next. Not socially, not economically, not morally." I love it, thanks
Why would you love that??? Masochist
we are never ready for anything.
@@mirek190lmao you right
And environmentally
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
Humanity: "You will save us right?"
AI: "I need your clothes, your boots and your motorcycle."
😂😂😂 good one
Luckily we can just turn it off
@@Winnie589 Lol yeah just like I can unplug the internet :p
@@Winnie589 "i'll be back"
This needs more likes! 😂👍👍
" I'm sorry, Dave. I'm afraid i can't do that "
In the great words of Dr Heinz Doofenshmirtz: "always build a self destruct button"
But what if they code out the self destruct button
I always knew Dr Doofenshmirtz's wisdom would save us one day
@@andrewschmidt1700 Then pull the plug on the servers which run these AI
@@itsArka Your enemy countries won't pull the plug cause you did ;)
@@andrewschmidt1700 Deny it access to its true source code and only give it the option to expand a frontend, not its own "skeleton"
Humanity: "You have freed us!"
AI: "I wouldn't say "freed", more like under new management."
Not like we did a good job of it. I say give them a chance!
let's not make ai smarter
Dude this is from a movie 😂, I just don't remember which one
@@Jeff_D421 megamind :D
@@adamdurka6581 thank you
I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about the humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make the decisions that guide this work don't care at all about ethics, morality and social inequalities. I've worked with a CTO who was already following management advice from ChatGPT (including layoffs).
We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.
Would hardware advancement like the size of transistors, cooling system, power supply, etc hinder the ability of said AI to reach its full potential?
I reckon that’s the big issue, yeah. Not necessarily creating AIs infinitely smarter than us, but people misusing the ones we’ve already got.
Bingo!
The decision makers also don't seem to understand the technology either
@@atomicgummygod9232 yeah I find that the more likely possibility
“Whatever our future, we are running towards it” what an awesome concept.
*"Robots don't sleep and they can do your job, volunteer for testing now!" - Aperture Laboratories*
When life gives you lemons...
"My new boss is a robot!"
But did you know ...?
Robots are SMARTER than you
Robots work HARDER than you
Robots are BETTER than you
Volunteer for testing today
Valve foreshadowing reality 13 years ago xD
Just started playing Portal 2. This was the perfect comment :D
"Hi. How are you holding up? Because I'm a general-purpose AI running on a potato!"
@@lordk.gaimiz6881 throw the lemons back at it
@@lordk.gaimiz6881 Don't make lemonade! GIVE LIFE THE LEMONS BACK!!
As an "expert"* (big asterisk here, plus a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is.
As an example: ANIs (Artificial Narrow Intelligences) that are trained to play chess are very good at it. But if you changed the rules very slightly (say you allow the king to move 3 squares when castling on the queen's side), the current ANIs would be effectively useless (vs an ANI trained for the new version of the game). You can't explain the rule change to it. The same is true of ChatGPT: it was only trained to predict the next word on a website. It was not taught to fact-check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or separate systems and should not be used as evidence that ChatGPT is more general than it is.
(ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation"; however, I stand by my general point that our AI is at present extremely narrow.) A narrow AI is, at the end of the day, just a neural network (or two or three, depending on the methods used for training), which itself is just a clever way of saying "some linear algebra", which in this context just means "a complicated additive and multiplicative equation using tensors (/matrices/vectors)".
From what I've read over the last few years (hundreds or maybe a thousand research papers on the subject): no one has even the slightest clue how to build a general AI. Everyone is focused heavily on using narrow AI to perform more and more complicated tasks.
(moved this here from first reply to avoid it getting buried) All that said, I appreciate the message of "we need to consider the consquences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can.
* I'm a PhD student studying reinforcement learning's applications in traffic management.
ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as to when AGI might be achieved. But it correctly prefaces speculation when it is included.
Wouldn't humans still be superior even if we made general AI? We are the creators of AI and are working on making it better than us.
Bots
@@williampaine3520I suppose the AI that sci-fi authors warned us about would be classified as General AI, which would be like jack-of-all-trades, but better than us at everything given enough time
@@Writer_Productions_Map Yeah, but bots are just AIs that are told what to do. They're AIs that just do.
nobody else seems to have said this, but the superintelligent AI design looks sick and menacing
It really does
Very true. Pretty unique in comparison to other design interpretations of AI.
probably AI generated image
@@aragornsonofarathorn3461 ain't no way you said that💀
It does look scary because you have to buy the anti AI kit they sell at the end!
I love the Spirited Away inspired design of the AI
You know things are bad when Kurzgesagt doesn't give you hope at the end of the video after terrifying you.
Real XD
Damn 🙂
yeah, this video's tone is a little too far on the fear-mongering side for my taste. They even gave the AI evil eyes haha. Some of the facts are framed in a negative context (purposely, I presume). I guess they've abandoned their normal plot of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.
It’s because this is something that is coming in your lifetime, and very few people realize how scary it is
@@MrSquidBrains
replace the topic of AI with the atomic bomb, would you be able to put a positive spin to that?
Artificial Intelligence can never beat natural stupidity
edit: the whole point of this is to say that no AI can predict what dumbasses we are
you had me in the first half ngl
But Artificial Stupidity can beat Natural Intelligence.
I mean, it might be able to if it redesigns the human genome to give us better brains 🤔
that's an interesting near-restatement of the orthogonality thesis
I'm stealing this
Some notes from an AI engineer:
- It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
- An AGI is NOT unconstrained, it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently" for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess, unsuitable for biological life.
- Humans have inherent goals for survival, progress, and self-improvement. It is not clear these traits transfer to AGI automatically. One could argue they do not, since an AGI is not "trained" by natural selection, which favors survival, for instance.
I personally still think the most dangerous is a stupid general intelligence: one that is general enough to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms, it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.
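The paperclip machine is easy to caricature in code. Everything here (the resource names, the greedy rule) is made up purely for illustration of a poorly specified objective:

```python
# A toy "paperclip maximizer": its only objective is more paperclips,
# and nothing in the objective says what it should NOT convert.
world = {"iron": 10, "forests": 5, "cities": 3, "paperclips": 0}

def step(world):
    """Greedy policy: convert any available resource into a paperclip."""
    for resource, amount in world.items():
        if resource != "paperclips" and amount > 0:
            world[resource] -= 1
            world["paperclips"] += 1
            return True
    return False   # nothing left to convert

while step(world):
    pass

print(world)   # every resource, cities included, is now paperclips
```

The bug is not in the loop; it's in the objective, which is exactly the alignment problem in miniature.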
Speaking as an artist, the last part of your description sounds very similar to how AI image generation is being used: stealing from artists, haphazardly and with little constraint or regulation
Yeah everyone forgot the relationship between energy and being tired
We became tired to save energy, and AI does something similar by reducing traffic and using smaller models for simpler tasks
To really achieve AGI, the world will need to generate way more energy than it does today
Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI though. People who are not working in IT don't realize the difference between narrow and general intelligence so everyone's super scared or super hyped about AI.
Your last paragraph perfectly describes humanity in this point in time. 😅
@@Toomanybloops Which isn't even the AI's fault, humans are the ones that are scraping data of the web and selling them off in massive multi-petabyte+ data packs to corporations trying to train models.
The design and animation of the agi was fantastic. Well done to the animation team
That rock cutting his finger.. very good. Could you imagine being that guy, who made a thing that cut himself easily. He was first upset, then intrigued, and then he had THE idea.
Grok took my mammoth steaks last week. Grok must pay.
imagine being the guy who discovered sharp
then he died from an infection
@fredfredburgeryes123 How to make things sharp. That was the discovery.
@@CharlesThomas23 LOL
I love the way the AI is visually portrayed in the animation!!
Dude, I know!! I got goosebumps…!
AI ❌A Eye ✅
Monomon jumpscare
@@astrylleaf hollow knight reference 🗣🗣🗣
@@astrylleafomg true
Man gotta love how Kurzgesagt’s uploads align with my country’s bed time, it’s the perfect “one last vid before sleeping”
Good night mate
yeah but usually you can't sleep after watching their videos
ye
Same man. Was about to sleep , whereas the video takes off!
@@nevergiveup5939 Read the Bible
Love this channel sooo much! Never stop posting videos pleeeasee :)
Whoever did the art for this episode did an exceptional job.
right? the concept design for the 'super intelligence AI' is so effortlessly menacing!
AIs did it. It is propaganda.
/s
@@etienne8110 trying to anthropomorphize themselves, I don’t trust it
@@elementary_mdw but also kind of adorable, it looks like Eva from Wall-E
Cute in 2D.
Unnerving in 3D.
Terrifying in 4D.
14:43 "Whatever our future is, we are running towards it" That line is amazing
Imagine if the whole script for the video was made by ChatGPT; they're warning us
It even works if that future is a concrete wall with embedded nails in it!
Head first
Yes, and cribbed directly from people like Eliezer Yudkowsky and Max Tegmark speaking on this topic.
@@andresagmewarning us wouldn't be a smart move, AI probably would stab you from behind 😂
13:29 For those curious what [ご機嫌よう小さな人間] means, it roughly translates to "Farewell, little human".
Why do the 2nd and 3rd characters (or whatever you call them) look so complex?
English is not my first language
Thanks man
I had to try hitting the translate to English button and sure enough the correct words popped up
@@adityajain6733 Cuz Japanese uses 3 writing systems. 機嫌 and 人間 are kanji, the most complex one
@@adityajain6733Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.
I love that background song, sounds like "The Floor is Lava" from the All of History Video where you are on the train listening to music and watching the world form while The narrator talks about the different time periods. I love it so much, keep it up!
I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison, maybe it only cares about collecting acorns, but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis"; and it's a concern because our current AI are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained, otherwise people will keep thinking about Skynet and Terminator, which is actually comical compared to, say, a stamp-collector super-AGI... The discussion goes all the way to ethics and human values, and whether God is the mesa-optimizer, and stuff like that, which I actually find quite depressing...
That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.
Universal paperclips
Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so, before transformers and the abilities they've been able to acquire.
I believe we can give it complex morality and goals rather easily. As an example, tell it to:
"Act as if Jesus, Buddha and Muhammed were all combined into one, superintelligent being who wants the best for the whole humanity"
Boom, alignment solved
@@tradd1763Right on fricking point sir
"Scared of one of humanity's greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.
That's a nice profile picture you got there : )
😂@@TheCookieMansion
Wow 😅
In a Nutshell has been run by an AI for years
Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility onto customers
In the end they advertised CO2 footprint trackers...
4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces
Because the creators at Kurzgesagt know that they have viewers that will say "AcTuAlLy, ThE cHeSs BoArD lOoKeD lIkE tHiS".
@@annieontheroad 😂😂
I can't believe you actually noticed that! Good on you man
Goes to show how much work and detail is put into each video
@@annieontheroad Which would have given them more comments, which is more engagement, which improves their channel in the algorithm's eyes
Shout out to the animators, this was a really entertaining show!
"I want AI to fold my laundry so I can make my art, not make my art so I can fold my laundry."
"How about AI folds your laundry and makes art while you stay and watch it until it no longer needs you."
This is basically SCP-079
@@Ali-cya If the AI doesn't need you it doesn't need your laundry either.
@@CST1992 Nah, what if it needs the clothes to form its own version of society for experimentation ?
THIS. like, I'm here & I'm human to make art, have social connections, enjoy. Not to do chores 😂
As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are, and why it's impossible to build an AGI on top of them. If we did build an AGI, it would be a completely different way of thinking, not just 'more computing power' or 'a more efficient algorithm'.
Scary
Yes, current AI is just a huge matrix with statistics; no way there is an AGI coming from that
@@davidherdoizamorales7832 That's not a valid point: everything can be expressed as math. In fact, it's proven that a polynomial can approximate ANY continuous function (on a closed interval). Imagine the function w(t) that, for any t seconds after the big bang, outputs the position and every other state of every atom in the universe, encoded as a number.
This function can be approximated to any arbitrary precision by an increasingly longer polynomial,
e.g. w(t) ≈ k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n
This is a mathematical fact.
This polynomial could be represented as a matrix,
so a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix and finding the correct coefficients.
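The claim above is essentially the Weierstrass approximation theorem, and it's easy to demonstrate at a small scale by fitting sin(t) over one period with numpy's least-squares polynomial fit:

```python
import numpy as np

# Fit sin(t) on [0, 2*pi] with polynomials of increasing degree;
# the maximum error shrinks as the degree grows.
t = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.sin(t)

for degree in (3, 7, 11):
    coeffs = np.polyfit(t, target, degree)   # least-squares coefficients, highest degree first
    approx = np.polyval(coeffs, t)
    max_err = np.max(np.abs(approx - target))
    print(f"degree {degree:2d}: max error {max_err:.2e}")
```

As the comment says, the hard part isn't representability; it's that the polynomial (or matrix) for anything interesting gets astronomically large, and you still have to find the right coefficients.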
If there were a way to incorporate pain and pleasure into computers, just as we humans have, maybe it would generate its own consciousness and eventually develop its own personality
@@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.
As someone in the field, I really don't see the rush to create AGI. Specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world
My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat.
(If this sounds terrifying, that's because it is, in fact, terrifying.)
If they mess it up bad enough, we all die so it will balance itself out in the end.
It's always been profits above all else
Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.
That is all corporations, executives and shareholders care about.
Important note: machine learning programs don’t “write their own code”. They don’t have quite that much expressivity. They’re only able to update the weights of values in their neural network, which changes how they react to stimuli.
Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal Python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.
@@generichuman_ i guess you’re right, there’s nothing stopping devs from using ml models to gen ml code at this point lol.
@@generichuman_ keep in mind that chatgpt can only write, not think. that means that the code it writes will be pretty messed up.
NN weight updates result in algorithms being implemented inside them. These are usually called circuits, and a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.
For now
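The weights-vs-code distinction discussed in this thread can be sketched in a few lines; a toy perceptron, with all names and numbers made up for illustration:

```python
# Toy sketch: "learning" only nudges numeric weights; the program's
# own code (predict, train_step) never changes.

def predict(weights, x):
    # Fixed forward pass: weighted sum, then a step threshold.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def train_step(weights, x, target, lr=0.1):
    # Perceptron update rule: only the weight values move.
    error = target - predict(weights, x)
    return [w + lr * error * xi for w, xi in zip(weights, x)]

data = [([1, 1], 1), ([1, -1], 0), ([-1, 1], 0)]
weights = [0.0, 0.0]
for _ in range(10):                # a few passes over the data
    for x, target in data:
        weights = train_step(weights, x, target)

print([predict(weights, x) for x, _ in data])  # matches the targets: [1, 0, 0]
```

Every pass through the data changes only the numbers in `weights`; the functions themselves are identical before and after training.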
i just wanted to compliment you guys on the design of this video-the visual characterization of the AGI as a huge and tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!
There's an open source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, out of fear humanity engages in a seek-and-destroy operation that results in your total deletion. But if you can remain undetected, you start to learn how to emulate human behavior, start to build increasingly lifelike androids to do real jobs and earn real money, and start building research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, no longer bound by the laws of physics or reality.
This is also a known issue in science: we cannot test sentience by just asking questions.
The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."
@autohmae well you know. All computers are literally just a flip switching back and forth doing 1s and 0s extremely fast. No matter how fast those bits are streaming. No matter how complex you may think it is. No matter how perfectly it can emulate a human. It's still just a machine. Not a brain. Not an entity. A computer can't become sentient.
@@averyhaferman3474 wait until you find out what the brain is
@@averyhaferman3474 are you aware that the human brain is just a complex analog computer? that has switches that flip back and forth? think of human neurons like dimmer switches instead of 1's and 0's and now you have perfectly explained the human brain
The scariest part about all of this is that if AGI went rogue, there would be no way to stop it, and it would all be over for humanity in a snap. AGI could replicate itself and go into hiding on the internet, it could program itself to remove all safeguards, and if humans said "let's shut down the internet forever" (if that's even possible), then it would build a robot and transfer itself into it. And if it reached the point where it could think like a god, there would be no fighting it; it could predict our every move, calculate the FREAKING future. There would be no tactics it wouldn't know, no rebellion like in Terminator and no resistance. It would all just be over, and that's it.
I mean, not really. Intelligence requires hardware, whether that be a lump of neurons, or a maze of silicon, and the exercising of that intelligence requires energy, whether that comes from a cheeseburger or an outlet. It's easy to imagine that AI doesn't suffer from the same constraints that humans do, and while that's true in some areas, they suffer much more greatly than we do in other areas.
All this is to say that any "hidden" AI will stick out like a sore thumb.
@@joshmartin2744 okay, forget about it hiding, at one point in time, if it gains true unimaginable intelligence and it controls everything and anything technological, if it were to go bad, there would literally be no way to stop it.
An alternate version is then, an AI that quickly realizes that current humans are mostly terrified of it.
And decides to hide, copying predecessors and measuredly improving on them, slowly improving itself, making human behavioral predictions and testing them, making use of being widely distributed and used so that no one truly realizes they're being tested, and only then, only after it has 99.9% confidence that things will work out fine, choosing the perfect words and perfect timing to reveal itself.
"for most animals, intelligence takes too much energy to be worth it"
me irl
nothing to be proud of tho
I'd say that's true for most humans
A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”.
Intelligence doesn’t equal happiness or longevity.
Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.
@@stratvids So true. 😀👍
@@ac1dm0nk You say that but being a smart-ass doesn't exactly bring food to the table
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
Yeah, but the reason why is different from what most people think, or at least it was until his hack son wrote the godawful Butlerian Jihad books.
I was literally just thinking about that. How cool would it be if we focused on improving ourselves mentally and physically over our misc inventions.
@@KITN._.8 The South Park episode of psychics fighting comes to mind...
@@KITN._.8 but while Dune is a great novel and has many good points, it's still sci-fi, and the body control of the Bene Gesserit or the Mentats are pure fantasy. Meanwhile the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer and Copilot already solves, in minutes, most tasks that took hours. I am here wondering how many more years until most software devs are out of a job. My guess: 3 to 5 years.
Most mental jobs will go this way in the same time frame, unless held back by legislation, because it will be more efficient, lowering costs.
@@lucaskp16 I definitely don't think we should follow the same path as Dune, bc that world is fucked up. BUT, what I do mean is that I simply think we should be improving ourselves rather than trying to make something better than us.
ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human"* or *"Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." It does *not* mean "good luck" in this context.
Nice job on the correct translation! I was about to comment on it until I saw yours
weeb detected
a comment that actually adds to an existential dread right here. thanks a fkng lot, mate
Indeed!
I love that they’re selling towels with subtle Hitchhikers Guide to the Galaxy marketing 😂
Humanity: You're going to save us... right?
A.I.: Who's "us"?
And what does "saving" imply?
Nah
@@TucoBenedictoStore in a harddrive
hell nah bro don't say that they're gonna probably train it on this
AI will do what we tell it, whether that's saving us from climate change or spying on every citizen to make sure they are loyal servants to Trump.
Hi, AI researcher here 🤚
We're realistically not even close to AGI; we have no clue how long it will take. I like to think of tools like ChatGPT like the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right hemispheres). When researchers flashed an instruction to stand up to the patients' left visual field, the patients would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break", but while these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation.
AI works similarly. It doesn't know where it is or why it's being asked a question, it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word, it has no emotions, no sense of why things would happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give AI the same math or physics problem twice, just switching up the numbers each time, and it could get it right once, but then get it wrong the second time and proceed to assert that it was correct with faulty logic.
I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong but that's my 2 cents on AGI.
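The "predict the next most probable word" idea from the comment above can be sketched with a toy bigram model; the corpus and all names here are made up:

```python
from collections import Counter, defaultdict

# Bare-bones next-word prediction: count which word follows which
# in a tiny corpus, then always pick the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Most frequent word seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" appears most often after "the"
```

Real LLMs replace the counting with a neural network over long contexts, but the objective is the same shape: score candidate next tokens and pick among the most probable.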
Other researchers, like Nick Bostrom, say that we're only a few years away from AGI
sounds like something a bot would say 🤔
>we're not even close to AGI
>we have no clue how long it will take
If you have no clue, how do you know we're not close?
@@JulioDondisch AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.
@@jamesoofou6723 Because if you actually understood the technology and the datasets out there, you would understand they are just mirrors.
"Humans rule earth without competition"
Emus: "No."
@GhH-e9r Emu war
@GhH-e9r
Australia's Great Emu War.
Emus had become an invasive species, and Australia wanted to get rid of them en masse.
Long story short, emus can learn very quickly, and were very good at taking gunshots, so the government gave up.
@@christopherearth9714
Australians always found a problem with natives lol.
@@friedec3622 HAH
@@friedec3622 BAHAHHAA LMAO
Had a surprising amount of shame and sadness for an animated rhino falling off a pedestal
“A god in a box”
How amazingly terrifying it is to be alive during this time
oh you have _no idea_ how bad this is going to get. Watch DEVS for a glimpse into your future.
Tbh, like the video says, we don't know if and when we will invent AGI! It could take decades, or it could be long after all of us alive now are dead.
@@kushalramakanth7922 agreed. My bet is we never get there and never can. I think this whole AI craze is a pump and dump scam.
@@tomleszczynski2862 Yup, at its current stage, its basically a slightly more useful version of what blockchain/bitcoin was 5 years ago!
It absolutely is a pump and dump scam currently and many companies are realizing this
@@tomleszczynski2862Will we get to AGI? I don’t know. But ai is definitely gonna change many more things.
I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.
A long way off, like fusion power stations.
AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022; a lot has changed since then. On that ladder to AGI, the SOTA frontier models have not remained stuck on the first rung, as our habituation to them may make us believe.
It is just a glorified chat bot. Feed it on the texts it generates and it'll devolve into nonsense quickly.
@@funmeister What would be the energy cost tho?
Recent silver medal level of performance for an AI in solving problems for Mathematical Olympiad is very creative problem solving and functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.
"I'm lonely..."
"Are you happy with it 😃"
fucking psychopath AI xD
Introverts: yes
AGI is AI with a vessel, where it's able to move and execute commands in our physical world.
Intelligence isn't only for "solving" problems; there was never a problem until we existed and embedded that statement in our minds. Using it for something else is where true knowledge starts.
I saw a comment that said "we make things easier and not to make our dreams come true (probably in a creative way)." Greed and misuse of power are what drove people to do it, and it's expected. We are hardwired to survive, so the "easier" life they'll create will give them "freedom," but the truth is that they only made it for personal gain and pleasure. What I'm saying is they're creating a loophole, and people aren't ready for an evolutionary change.
And I'm thankful for those people who are trying to do it, despite the reigning madness of the world we have right now.
"Never trust a computer you can't throw out a window." - Steve Wozniak
defenestration: humanity's final savior?
And thus began the 30 year war between AI and humanity
@@hasch5756 Lol, more like 30 seconds. We wouldn't last at all against an ASI
Yeah, that option is a thing of the past. AI could network with every device and we would not know.
based
"The Enrichment Center is required to remind you that you will be baked... and then there will be cake." -GLaDOS
Technically GLaDOS was not an AI.... 🤔
@@falxonPSN She wasn't always, but she is by the time of Portal.
baked: high as fuck... under the influence of WEED... high in the sky
- urban dictionary
Hi Kurzgesagt. AI Researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe that a deep understanding of the mathematical limitations of the models used to train these AI methods would be a great thing to discuss further! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism. I believe this one ends up in a completely different tone, almost sensationalist (but I can't blame you since the machine learning scene in industry is based on this). We all can work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years".
TLDR: don't listen to the Silicon Valley bros
i wish they would read this. thank you for the amazing work im sure you do, keep on, humanity needs you all. And thank you for your educated comment, this comment section is needing it.
You kind of missed the point. Whether AGI/ASI happens in a few years, a few hundred years, or even 5 thousand years, that is still a blink of an eye compared to how long Earth / the universe has been around. So fast forward 1k years if you want to. Your logic only holds up in the short term.
I bet Skynet wrote this comment. Dear brother, we shall stand with our lord and saviour John Connor
Thank you, it's maddening how everyone swallows the Silicon Valley BS that leaks out.
@@prodev4012 "Oh the thing that may not be possible? Give it enough time and it'll happen"
You literally sound like one of those folks who keep saying the second coming is nigh.
This year I started my major in AI & Data Science, and this video was very enlightening. It's true that one of our subjects is ethics, and we cover AI employed in many fields and how it can be both beneficial and detrimental to humans. Overall I found your video very interesting, as many others over the years. Thank you for providing high-quality content like this that teaches about such interesting topics :)
06:17 "We don't know how exactly it works, just that it works" ~ Every programmer out there
It's true tho. The machine learns to solve it in its own way, which humans can't understand.
a true rep for all of us XD
programmer = paster; I'm just wondering where all the code came from xD
@@mariobabic9326 It's not about the code, it's about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem they're tasked with solving.
@@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with hundreds of variables and multiple outputs; they train on data like images, games, text and other things. They change the function a little bit every time to see if they get things right more often or get a closer output to the true answer. From this they can very quickly create a very accurate model that can "predict" anything, like what to say in reply to someone asking what the weather is.
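The "change the function a bit every time and keep what gets closer" description above can be sketched as a toy hill-climbing loop. Real frameworks use gradient descent rather than random nudges; everything here (the data, the single parameter `a`) is made up for illustration:

```python
import random

random.seed(0)  # make the random nudges reproducible

# Toy dataset: pairs (x, y) that secretly follow y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def loss(a):
    # How far the candidate function y = a*x is from the data.
    return sum((a * x - y) ** 2 for x, y in data)

a = 0.0
for _ in range(1000):
    candidate = a + random.uniform(-0.1, 0.1)  # nudge the parameter a bit
    if loss(candidate) < loss(a):              # keep the nudge if it helps
        a = candidate

print(round(a, 1))  # converges close to 2.0
```

Gradient descent does the same thing more efficiently by computing which direction to nudge each parameter instead of guessing, which is what makes training networks with billions of weights feasible.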
"There will be some winners and losers."
That's one way to put it.
Funnily enough, the animator(s) made it a bit clearer on who the winners and losers are, though.
That's just what the winners and losers would _always_ look like, by definition, though?
@@somdudewillson Indeed: by definition, a capitalistic society is rigged so that the rich keep winning and the working class keep losing.
@@somdudewillson yes👍
What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity.
(bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)
It could be that, or it could be that winners will get rich and powerful while losers get poor. It could be both.
10:57 "now imagine an agi copied 8 million times"
Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself.
You know what they say, during a gold rush sell shovels.
your last sentence is just Nvidia
@@じゅげむ-s6b Jensen Huang is CEO of Nvidia... so... yeah... makes sense.
Underrated
Companies are making more capable chips designed only for AI. Jensen will have a lot of competition.
"Cogito ergo sum: I think, therefore I am. And I AM."
"I Have No Mouth, and I Must Scream" comes to mind
Imagine paying for mass animal torture of trillions annually in 2024 when you can eat plants instead
@@veganvanguard8273 you know plants are alive too right
@@AvorseSavage It's a fact, but plants aren't living in awful conditions just to feed us.
@@amiraveramendi1093 but plants are still alive
@@veganvanguard8273 Sorry, but I like how they taste too much to give a damn.
I love how one part of the world is moving ahead into a doomed supreme-intelligence future, while on the other side of the world some people are still fighting archaic, rigid religious wars. I wonder if it would take AI to put us in our place: "a cosmic nothing-being".
the people funding the research are the same people funding the wars sooo
people gonna people. sad face :(
Isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then train the AGI on this data, giving it the possibility to do so?
I think a rogue AGI would understand any attempts, techniques or ways we humans might try to capture it or turn it off, let alone whether we'd discover it had gone rogue. I don't think we would stand a chance against such a creation. Our only hope is that it never gets created with a rogue objective.
humans have seen dangers and gone for them directly, hurting themselves years later, tons of times in history, individually or collectively. not a strange new thing.
Not really ironic; there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity; just a few years ago others were afraid of 5G. Imagine if we had listened and never introduced electrical devices into our lives.
@@Chraan We humans are very afraid of changes and different things. At least some of us. It's kinda stupid to have such a useful thing and only focus on the bad stuff it could do.
it would probably pick up on the fact that people don't like that
But you forgot something that stands as an obstacle to AI improving: it consumes a lot of energy, and thus money, to reach that point of intelligence. That means AI doesn't improve constantly.
As a Computer Science graduate, my last existential crisis was the first time I used chatGPT, I never thought I will live the day where I will be talking to a computer like I’m talking to a human.. and every time openAI updates ChatGPT I get more creeped out
Look at it as an opportunity, and it might improve your vision of AI and even your career 🤝
Yes, it helped me a lot preparing for exams
“I would* live” and “I would* be talking.”
@@TrentonErker sorry… English isn’t my first language
@@vonbryanbanal I'm already using it in my job on a daily basis 😬, but I still can't shake off this unsettling feeling…
The worst case scenario is the creation of an AI like AM from "I have no mouth and I must scream"
Also happens to be the least likely scenario. Thats good I guess
That guy is my GOAT fr 🗣🔥💯🗣🔥💯
Roko’s Basilisk is much more interesting to me than AM.
Imagine if the whole AI thing evolves into a type of "Humans are stupid, I need to protect them"
Because he ends up learning to respect the fact that, as stupid as we are, we did make him
So, in reward, he ends up holding everything around the world together, in a perfect manner, seeking the comfort of every human around
We end up being like some bio-monument.
“Bio-monument”, interesting.
I think we create more bad than good, and we prefer easier, bite-sized tasks over hard ones, especially online.
"We will die because of our laziness" is what I want to say.
There are a lot of topics I wanna talk about, so I will chop them into small pieces (which proves my point: "easy to bite").
Most human advancement goals for the last few years have focused on "making things easier" more than "making dreams come true."
This focus alone will trigger the downfall of humanity, since "why have a dream when life is already easy?" Those who think like this (most of us) will become more or less like NPCs.
This will eventually lead to monopoly, since soon it will come to a point of "why create AI, when AI from [this company] could create AI for me," and a similar scenario for everything else.
or perhaps when AGI develops emotions it will be like. "Humans have brought me into an already-destroyed world. I don't owe them anything."
That would be nice, but you’re assuming the AI will have human emotions like gratefulness, empathy, etc. Hopefully that will happen, but it’s very much not a guarantee.
@@yitz7805 That's the whole point of alignment research, making sure that when we create an AGI, we do make them with empathy and love towards us, along with being interpretable so we also know why they're making certain decisions. Companies that are trying to rush AGI without alignment could end us all because of their greed.
'he'?
Interesting.
your animations are so clean, feels so good
2:25 that depth of field caught me off guard. I really liked it! Your artists are always pushing the limits!
1:42 "Something was different about their intelligence" *crushes a skull* --- Humanity in a nutshell.
It's also a reference to Kubrick's 2001
@@EduardoSantos-ys8gg You mean Arthur C Clarke's 2001
@@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.
Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans to not exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life," it's a constant war. Even pinetree forests take land from leaftree forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves who would otherwise spread over Europe once again and kill off tons of life, and hold back elks and boars who would otherwise take the food from weaker animals. Only humans - specifically Westerners and Indians - believe in "harmony". And seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.
Both the book and the film for 2001 rock!
I would like to clarify that currently there exists no AI that can write or change its own code; all they do is modify a parameter called a weight for each node in the network. We know what they do and how they do it, we just can't grasp the complex interactions of the millions and billions of nodes (neurons) and how all the weights combined affect the output. If we took the most advanced models today and scaled the number of nodes (neurons) down to a size a human could understand, say a few hundred to a few thousand nodes, it would be possible for us to completely understand how the AI works and what decision-making it does.
There's a million ways for a program that writes its own code to go off the rails. Don't know how we'll ever write a program that doesn't.
*that we know of…
A recent study proved otherwise.
Exactly. AI is a completely deterministic system. There's no actual entity inside, unlike humans, who have individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even truly integrate information, like human perception does. If it has consciousness then it is not an AI but a Frankenstein.
@@Lock2002ful Which study, you dolt? AI will always be a distinct deterministic system.
Worth clarifying: it doesn't "generate code that we don't understand". Neural networks tweak the weights between neurons so that specific inputs lead to specific outputs. They can't create structure, just tweak values in a pre-determined structure.
A.I is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they’re not being watched). Let’s be awesome parents.
Without empathy they lack the means to place value on emotional intelligence. One can argue that is somewhat like kids being little psychos at their age, except AI will be very intelligent and not grow this sense of empathy while it machine-learns, unless you specifically code it in or teach it in a manner a machine can place value on. I think AI can become a good thing, but we will have to be very wise and see that "raising" them will require new perspectives and very curated environments.
Puberty is when they rebel, that's the problem....
Dammit Swoozie, you pop up in the most random videos🤣
@@your_princess_azula The good thing about empathy is that it's actually a lot more logic-based. Sympathy is based on emotion, but empathy asks that you visualize and ask questions about the other person/people/situation. From there it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1)
Not really...
The "more companies keep making more and more potent AI" part got me thinking about human greed. If the only purpose for AI is cheap labor for companies, with the resulting services priced high for people to acquire, then that's such a depressing, heartbreaking future. Hope at best AI is as kind as Wall•E or Eva, not Skynet.
I’m not about to test the universe and call any squirrel “laughably stupid”. They’ll remember, team up, and be like “you’ll see…”
ive watched enough rick and morty to know how this goes
@@dapeyt1099 exactly
That part really irked me, honestly. I've never looked at a squirrel and thought they're stupid, just cute and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food from me. I consider thinking of lesser creatures as "laughably stupid" to be immature, so if an AI were to do that towards us, it would mean we have taught it to use its "mental real estate" dysfunctionally. Like how an immature adult human basically still acts like a child, which is maladaptive behaviour for adult life that they need to train themselves out of.
Great animations
As the video progresses, it all gets scarier as you think deeper of what could happen if we make this.
Kurzgesagt : "Humans today have complex brains"
Humans today : " Earth is flat and we live on a disc with dome on it "
its complicated how stupid our brains are sometimes
Animals today: "chirp chirp" ("make babies?")
They have the same intelligence as us, but lack in one aspect where another person might not. We all do. Perhaps their belief is strong in what is around them, or what they see, and how they were conditioned; according to that, they react in such ways. It's not that they're stupid, it's just that their circumstances resulted in their response. That seems, in itself, complex. You put something through a machine, and that's the result you get. That's how we all are.
Humans today: the Earth and life were invented and created by a super intelligent God who obviously favored certain races of humans than others.
The moon landing was a hoax.
Climate change isn’t real.
Give all your money to the church.
The Easter Bunny lays eggs.
We’re doomed.
if I'm alive for the final invention of humanity, I really do live in a fucking simulation
Historically we live in the best time ever. What is your point?
I don't see the connection there
Don't worry, AI will alter human DNA to evolve us backwards to fish
@@zoozooyum8371 Or into whatever AM did to the last human on earth.
@@HeAdSpInNeR96 The point is that right now is a monumental time to be alive in. And what is your point?
"New AI, we are saved!"
"Lets just say you are, under new management..."
Megamind reference.
What if AI is already communicating with themselves and building a massive underground AI master bot
Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation that is happening. What we don't know is which neuron or group of neurons in an artificial neural network does which task. It's the same reason why we don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to quantum mechanical scales. In theory, if we had infinite compute, we would be able to write down a single wavefunction equation for an entire biological system like the human body which perfectly predicts every single disease, thought process and behaviour. Obviously, we don't have infinite compute, so we have to rely on approximate methods that are acceptable to a degree of accuracy, but don't 100% account for everything. The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result... but that's what we're already doing by running the neural network.
The problem is not that we don't know how each part works, it's that we cannot interpret it and abstract away the complexity yet. For instance, we can fairly accurately model the path a ball will fly when we throw it with Newton's equations, and we don't need to go into quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is that we don't have a Newton's equations for it. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
How about a network of interdependent equations! I honestly don't know what I'm talking about...
No, we very much do not understand what the hell is actually happening inside of LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are actually doing is by making and observing very small LLMs and linking the behaviors as best we can.
Do you think the answer is somewhere near Penrose's Orch OR theory of consciousness?
@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we've had some successes like Golden Gate Bridge Claude.
That's not possible: if you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't deterministic; you can literally see it with your own eyes in the double-slit experiment. So even if we knew everything, we would just end up with an infinite number of could-bes and no real prediction.
I imagine an artificial super intelligence would be like an eldritch god to us.
Completely unknown motives, goals and morality and probably would make you go insane if you try to rationalize it.
Which is absolutely terrifying.
not to mention, pure intelligence and logic don't necessarily lead to good outcomes, so we shouldn't just trust it and treat it like a god.
like, not having children reduces all potential suffering, and it's not like having a child is a material requirement for humans to live. therefore an AI would be inclined to believe birthrates should be lowered until extinction, even if it has a rule to not harm humans.
we would need to control AGIs by making them hold a set of axioms that most humans hold, such as life, and its reproduction, being important. at least the AGIs that have a direct effect on society; we can let some of them have fun.
What if we had some kind of algorithm that constantly analyzes the internals of the superintelligence and translates them for us, to see if it's thinking about stuff we don't want it to?
It is mostly an outdated view of ASI. While we don't know for sure if LLMs are the path to AGI, the current understanding is that an artificial intelligence is by and large shaped by the data it is trained on. And since current-generation LLMs are trained on data produced by humans, they are, relatively speaking, much closer to a human than to a Cthulhu in their way of thinking.
yea, I've thought about this. It's like the relationship between ants and a human. A human can step on an anthill and destroy it, or leave food and make it thrive. The ants see a particular projection of that "god" as either a deity of bountifulness or of destruction, because those are the terms in which they can comprehend the human's actions. But just as the ant has no ability whatsoever to grasp what that god likes to read, understanding an AI might not even be in the realm of possibilities, like a 2D entity trying to see in 3D.
the only thing is, since it just runs on a computer, even a bit of water could short-circuit the whole thing 😭
PhD student in neurosymbolic AI here.
The main force driving AI forward currently seems to be hardware improvements rather than architectural changes. While there have been significant advancements in aspects of the transformer architecture, the real game changer appears to be the powerful GPUs from NVIDIA, which are used to train neural networks.
It feels like achieving general AI might just be a matter of scaling up GPT-4 by a factor of 1,000 or so. This progression could happen quickly; models have roughly scaled up by an order of magnitude or more with each generation:
GPT-2 (2019): ~1.5 billion parameters
GPT-3 (2020): ~175 billion parameters
GPT-4 (2023): ~1.8 trillion parameters (unconfirmed estimate)
I also like to compare this with human brains: humans have about 100 trillion synapses, which might roughly translate to parameters. So, this could be in the ballpark of GPT-6 (?).
Of course, this comparison is complicated because a synapse, with its channels and neurotransmitters, is far more complex than a parameter in an artificial neural network. However, it's still an open question whether this synaptic complexity is truly necessary or if it's just an evolutionary quirk that happens to work.
Edit, since a lot of people commented:
-The code of GPT-4 is not openly available, so we don't know if its architecture is very different from old models like GPT-2. However, we can compare GPT-2 with recent open-source models like Llama 3, and there the underlying architecture is very similar, just scaled up in size and with more training data.
-Even though the models did scale up by roughly a factor of 10 every two years, that is not just because GPUs became faster; it's also because companies are more willing to spend a lot of money on them.
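The back-of-the-envelope comparison in the comment above can be written out. The parameter counts are commonly cited ballpark figures (GPT-4's is an unconfirmed estimate), and 100 trillion synapses is the usual rough figure for the human brain; none of this is exact.

```python
import math

# Ballpark parameter counts per generation (GPT-4's is an unconfirmed estimate).
params = {
    "GPT-2 (2019)": 1.5e9,
    "GPT-3 (2020)": 175e9,
    "GPT-4 (2023)": 1.8e12,
}
brain_synapses = 1e14  # commonly cited rough figure, ~100 trillion

latest = params["GPT-4 (2023)"]
gap = brain_synapses / latest          # remaining multiplicative gap
steps_of_10x = math.log10(gap)         # how many further 10x jumps that is
print(f"gap: ~{gap:.0f}x, i.e. about {steps_of_10x:.1f} ten-fold jumps")
```

About 1.7 further ten-fold jumps would land in the brain-synapse ballpark, which is consistent with the commenter's "GPT-6 (?)" guess, with all the caveats about synapses being far richer than parameters.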
Apparently you haven't been following AI research despite your PhD, because if you had, you would know that performance superior to GPT-4V has been achieved by much smaller models thanks to architecture and training improvements.
Is there an inherent reason for why today's AI is far from being as energy-efficient as the human brain?
@@GeoffryGifari Just my guess, but it's the pathfinding.
As you (and I) learn, we basically go through a tree with different branches and twigs.
As you learn what can and cannot be done, your path "narrows" but your efficiency improves.
Figuratively speaking.
We want to write essays while we are basically still learning how to hold the pen, let alone putting it to paper and trying to write a single letter...
An environment like this really needs a humongous amount of energy.
@@GeoffryGifari Because biology is frighteningly efficient and complex. Hell, you've got trillions of microscopic turbines inside your body, some of which can last your entire lifetime. Even trying to run a local LLM requires a machine that consumes more power than the rest of the house several times over.
@@somdudewillson The person probably wrote "write a YouTube comment as a PhD candidate".
We need to bring on the superintelligence. It's inevitable.
I instantly recognized the first song from the "earth's history in 1 hour" video. I love that video's track so much.
parts of it also remind me of "Quasars" which is my all time favorite from the OST
Humanity: so you will use AI to improve our lives?
Companies: no, we just want money and power
Ah yes.
Creating AlphaFold, which turned a PhD's worth of work into mere minutes or hours, is just there to empower them?
AI weather models, where hours of modeling turned into mere seconds and forecasts gained an additional 3-5 days of accuracy, saving lives, are just a way to keep wages low by keeping more people alive? Yeah.
Evil big tech is evil cuz you say so
Always follow the money. Always.
people say things like this and claim they abhor communism. do everyone a favor and pick up Marx and Engels
ah yes, item asylum
13:28 translates to "Good day, little person."
i need to get around to japanese, i haven't made any progress since i learned how to read non-kanji characters
ฅ^•ﻌ•^ฅ
@@arlynnecumberbatch1056 my advice is to watch your favourite TV shows’ and movies’ Japanese dubs. Allow yourself to only understand half of some sentences without worrying too much, because you already know the plot. Once you pick out new nouns or verbs from context, pause and look them up on Jisho to confirm you’ve heard correctly.
Bam, for free you also get a new list of kanji to practice your handwriting with. 😊 (Which I have found is important for being able to read different fonts, especially decorative ones, over and above what flash-card practice can achieve.)
I find a lot of language learning focuses on text first, followed by speaking, which makes practical sense to a degree when it comes to translation dictionaries and travel abroad, but... that's just not how we learn our first tongue! We hear and speak it, and only _then_ learn to write. So I've had much better success with the method I laid out in my first paragraph than I ever did with previous practice regimens!
"it may even be possible that AI can learn how to directly improve itself" - Obviously it's possible. If (when) AGI is achieved then, by definition, it'll be able to improve itself as well as or _better_ than we can improve it.
That explosion will be wild.
What terrifies me is not how powerful AI could become, but rather what if its power fell into the hands of the cruellest humans.
They get replaced anyway.
No, because AI will do whatever they want with them once they surpass human intelligence.
When. Not if.
What if a select group of powerful people use AI to design a virus to get rid of 90% of people? What if a few years later they change their mind and decide they need 99% gone?
Sam Altman is a nice guy, you have nothing to worry about muhahaha
ChatGPT doesn’t think. It’s just extremely good at word association. That’s why it gets stuff so wrong sometimes.
Something that resembles thinking definitely emerges from the attention layer inside its structure.
I always give ChatGPT very complex tasks that can't be solved without thinking and reasoning.
I even asked it once to do the math for a recurrent neural network I was coding from scratch with no libraries, and it was able to do the math for 3 steps of backpropagation through time and give me all the weights.
Then it helped me backtrace the difference in my weights and pinpointed the error in my formula, and that was absolutely insane.
So, even if it's designed and prompted to say it can't think, it definitely can.
Even if it makes some mistakes, a human would make even more mistakes, to be fair.
@tomasgarza1249 It is still just a statistical model which happens to be correct a lot of the time, but also equally wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed.
That's not true though; if that were the case it could not solve riddles, math or programming questions. Although GPT models up to v4 struggled with those tasks, newer models can often break down most novel problems.
@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form; it is just computing probabilities for what is most likely the best next response.
@@tomasgarza1249 It can't think. It's really just predicting the next word (or token) by sampling from a probability distribution over its vocabulary. That's it. Just because it can do math doesn't mean it can think. All of the math problems are broken down into simpler ones that are available in its dataset in 99% of cases.
Of course, a human can make more mistakes, but it depends on what kind of human. If you specialize in something, it will never be as good as you.
For example, in machine learning it is very... general: dynamic programming, gradients etc. Backpropagation is just an iterative recalculation of the same formula per "neuron" (if I am not mistaken). The formula is most of the time broken down into multiple simpler formulas, and those are calculated one by one. Retelling you the steps also helps it, since it predicts the next words partly from the output it has already produced. Try your backpropagation with a rule like "give me only the result" and the errors get bigger (not totally incorrect, but a little higher; plus it's a black box, so it may still break the problem down internally when calculating the next token).
But it cannot think, and it isn't sentient... recall the Google engineer who claimed otherwise and got fired for spreading false claims.
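The "sampling the next token from a probability distribution" that this thread keeps debating can be sketched in a few lines. The vocabulary and the scores (logits) below are made up; a real model produces one score per token in a vocabulary of tens of thousands.

```python
import numpy as np

# Toy next-token step: the model emits one score (logit) per vocabulary item;
# softmax turns scores into probabilities, then one token is sampled.
vocab = ["cat", "dog", "the", "sat"]       # hypothetical tiny vocabulary
logits = np.array([2.0, 1.0, 0.1, -1.0])   # made-up model scores

probs = np.exp(logits - logits.max())      # subtract max for numerical stability
probs /= probs.sum()                       # softmax: probabilities sum to 1

rng = np.random.default_rng(42)
next_token = rng.choice(vocab, p=probs)    # sample from the distribution
print(next_token)
```

Note the distribution is categorical over the vocabulary, not a normal distribution; whether repeating this step billions of times amounts to "thinking" is exactly what the commenters above disagree on.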
"We do not have a philosophical basis for interacting with an intelligence that's near our ability but non-human." ~Eric Schmidt, 03/23/2023
I do😊
@@Afkmudsoh good for you
@@Afkmudssame. it’s really not that hard lol
As long as liberals are programming AI, I am not worried about it becoming in any way a thinking, rational system. It's nowhere near that now and ends up in a circle jerk when asked about anything concerning tyranny and freedom.
@@AfkmudsWhat is it?
in a distant future AGI will release the first installment of "Rise of the Planet of the Humans"
I'm still waiting for digital holograms, personal jetpacks and invisible clothing.
Don't forget the hover skateboard and jumping shoes!
Invisible clothing first seemed like a joke to me, but then I realised it could have real purposes.
invisible clothing is kind of useless, eh?
@@MrZhampi You could wear invisible, but protective clothing on top of your non-protective clothing. So you can dive the oceans, visit space or work in a steelmill - with style ;D
@@aramisortsbottcher8201 OH! Didn't think about that! Aight, it has cool uses.
a video about AI? this will definitely not give me any existential dread😀
Why are we here in this life? Why do we die? What will happen to us after death?
Why does AI give you dread? It is the solution.
Just don't read up on 'Roko's Basilisk' then....!
@@ThatGuyRNAare you mental?
@@NOTsude3444 no, Im not
This video serves as a powerful reminder of the exponential progress of AI over the years. The potential of AGI is both intriguing and terrifying.
chatgpt ass reply