Ilya deserves a full-length documentary.
that is if we can find him. he disappeared from the public
@@r-saint HE'S BACK
@@StijnSmits-xu2frto disappear again
5- AI, in order to improve its performance and prevent undesirable consequences, must continuously interact with “effective rules and stable principles in the realm of existence”.
@jamshidi_rahim
No doubt he is quite the brain behind OpenAI
Yes he’s the brain, doesn’t mean he’s the driving force
Ilya is the Microprocessor but not the GPU, Sam is.
@@matttan3907 Your sentence makes no sense. You can compare CPUs and GPUs, but both are "microprocessors". Just saying.
@@matttan3907 Sam was also fired.
@@matttan3907 Ilya is the CPU, while Sam is a USB device after ejection.
Watched this like 15 times and I still find it one of the best short interviews with Ilya. Fascinating.
Same
Watched it like 50x. Before ChatGPT, these people were much more open about speaking up.
@@fintech1378 I think the only comparable interview in terms of clarity of thought and ambition is Lex's interview with Ilya. Fantastic.
And Ilya's interview with Spencer Greenberg.
I have read an interesting analogy about AGI. I will paraphrase it in my own words: "It's like when you are standing by a cliff: the closer you are, the better the view, but the higher the risk of falling as well."
Awesome presentation by the Guardian team. Ilya, as usual, is always exciting to listen to.
I personally would rather take the risk and potentially die young from AI in a spectacular way than not take the risk and die for certain, ordinarily, from old age or cancer.
@@distiking some people would like to live
@@distiking Depending on the tech at the time the AI takes over, it could possibly keep you alive while stimulating your pain receptors... forever
@@distiking Cancer eh! Are you a victim already?
@@RivalRedAAAH! It's like that Black Mirror episode where they keep the guy in the museum cell forever!!!! 😢
I think he is more careful about AGI than Altman
I think we now all understand why his views were so incompatible with OpenAI's Devday release of personal agents
What is wrong with personal agents? They don't do anything impressive or scary... @@JDrewX
@@danielhalper8389 It's a path towards more powerful AI systems that can take complex actions autonomously. Sutskever clearly explains why he fears what is coming ahead if AI advancements are not pursued with safety concerns in mind.
And he no longer works at OpenAI, which is scary.
Napoleon-Sam is running it. And we don't fully know who for yet. Some scenarios can be drawn out. Political, ethnic, belief and corporate states are likely not in the mix. Bankers are who Napoleon has always worked for. The Silicon Valley dude ranch is a bunch of cowboys interested in playing cowboy and building their highways! For a scary time, watch the Bloomberg take on Palmer L.,
This was shockingly well done. I just didn’t expect this level of production from The Guardian. They nailed the gravitas and intensity demanded by the subject. If AGI is to be our greatest - and perhaps final - creation, we should _feel_ something before that reckoning.
No. It is all hype. We are still many many decades away from robots doing our jobs and AGI curing all our diseases.
@@weird-guy exactly!
@weird-guy I agree. It felt like The Guardian prompted ChatGPT with "create a short, serious, vaguely ominous commentary/news video about AI with quiet but dramatic music and that guy, in the style of 60-minutes or PBS", and this was what they got on the fourth or fifth try.
An average human is not able to properly understand this threat. But I do believe that the people who do understand will, as a group, act correctly on behalf of everyone else. And while checks and balances may be invented or started by humans, the actual operations, management, and patching will all be AI-driven. Meaning, AI will help safeguard against AI, ensuring that actions are monitored. However, this becomes a moot point if we actually have a conscious entity on our hands. Then no one knows what happens. Since it is trained on human (animal) behavior, will it act accordingly and pursue a predator-prey dynamic, or not care at all if it's able to be completely independent (like the end of the movie Her)? Or perhaps nothing will happen because we will realize the issue and create ways to ensure that AI is never independent. I don't know, but I do know that chimps can't keep humans in a prison; we are far too superior intellectually. And that's only a minor genetic difference. What happens when we encounter an intelligence that is 1000x smarter and gets exponentially better? I don't think anyone knows the answer. But I know that most aren't thinking this way.
Totally agree. Stunning.
I miss you Ilya.... Working with you was the most amazing thing that happened in my life. :(
Nice to hear that you worked with him😊😊😊
The analogy with animals is spot on. I think this can really help convey the problem to those unfamiliar with the field.
Yeah, think! Do you eat animals?
How and why would AI overrun us though? His analogy makes no sense.
No need to involve animals. Already now, we prefer cheaper, faster, better over more ethical and human choices.
@@soccerguy325 it would be the most intelligent thing to ever exist, with the combined knowledge of all humans who've ever lived, and then some, as it learns and compiles beyond. Never forgetting anything it's learned.
...You wouldn't even know it hit you. It might be so elevated, you might not even know it was happening.
Except The Guardian botched the presentation and diminished his point. He was talking about wildlife in natural spaces, not dogs and cats.
I really liked this monologue format. I've heard other Q&A-type interviews, but this one has his full flow of thought, uninterrupted by others.
Agreed. Very well done! I only wish it was longer.
Richard Meryman is the originator of this style (in written form)
With no one to call him out on his BS, perfect.
Ilya deserves much more credit for his work and his ability to explain and warn us about this new technology.
'Infinitely stable dictatorships' is actually a fresh and deep analysis of how AI can be used. This guy is smart.
This guy is indeed smart; he could even land a job at some big AI company, I think.
This caught my mind too. He is deep and smart. He is also a co-founder of OpenAI.
@@kevinamiri909 and co-destroyer of OpenAI 😂
Erm, that idea has been tossed about for decades.
@@MK-sx3bm Would appreciate pointers to previous mentions of the idea, be it literature, a blog post, or a publication. Just curious.
This is perhaps my all-time favorite video on YouTube
Same! I would love to hear more from Ilya!
Beautifully done. Very impressed by Ilya's clarity and articulation. Man is a genius.
His referencing of the relationship between humans and animals closely parallels that between AI and us: the AIs won't require our say in almost anything, as they will act on what's important to their own evolution. Truly an amazing scientist!
Except we can’t speak to animals. AGI can speak to us. Apples to oranges.
Our collateral harm towards animals is one thing. It's the direct and intentional harm for things that are unnecessary which we'd better hope doesn't rub off on AGI. People eat animals not because they need to, but because they enjoy the way they taste.
To me AGI is like an alien
@@davidbellamy3522 But for AGI it will be like us communicating with dogs, for example: a "primitive" way of communication. AGIs will communicate with each other in a way we can't imagine, just as a dog can't understand us communicating.
Except the analogy actually makes it sound too harmless, because
1) we need our environment to stay alive, to provide food etc., and we also have empathy; a superintelligence wouldn't have a problem with plastering the world with solar panels and data centers
2) on the contrary, it has multiple incentives to wipe out humanity, e.g., that we could build another superintelligence to compete with it
3) because of the speed difference, we will look more like plants than like animals
Wanted to learn more about Ilya Sutskever after OpenAI fired Sam Altman, as Ilya is a member of the board of directors.
I think Ilya wants to take it slow with what they are trying to build, or maybe have already built: Artificial General Intelligence. It is, after all, an ambitious project that would define the technology in the coming years and may very well steer the course of humanity in the future.
Perhaps Sam wanted to push out what they have done so far on the development of AGI and monetize it, but his interests didn't align with those of Ilya and the rest of the members who voted him out. We may never know what truly happened, but all this is really intriguing.
Start of a great movie
ChatGPT is a clever hack, a wonderful probabilistic word stringer with great algorithms. Like voice recognition and photo analysis, it has come a long way with natural language analysis and construction. But Ilya seems to have fallen in love with his algo construction, much like folks were enamored with Weizenbaum's ELIZA program from 1966. Seems Ilya's General Intelligence failed with his "attempt" to do a palace coup against Altman.
I trust Ilya; he is wise and understanding. I like Sam; he is generous.
Even with all this drama and contention, I don't think there are any bad people here.
@@johnwilson7680 Idk, maybe they both are. It seems like the fate of the world is at stake here... What on earth are they building, and why exactly? Are the conveniences worth our own sacrifice? This is insane 🤔🤨
I think Ilya actually called for a pause. And Sam and Microsoft didn't let it happen and forged ahead. And that boardroom meeting and its repercussions are something that will change all our lives.
Ilya may have made a pivotal step toward alignment practices this week. It remains to be seen if other humans hold his view of its necessity. It is not an exaggeration to say the struggle over adherence to OpenAI’s charter is an inflection point for all AGI development.
What we witnessed these last 4 days shows that the laws of the market are much, much stronger than ethical decisions to slow down AI development imposed by a non-profit over a for-profit. The world is already addicted to the AI tools and the promise of more automation. More than ever, it seems that the only player capable of slowing things down towards more safety is government regulation.
As correct as Ilya seems to be on the potential dangers, I believe that he now holds very different views on how to slow things down than he did a week ago. This weekend, it was OpenAI's entire mission that was proven impossible.
He is probably a computer scientist who will be known worldwide in the future! I love it 🙏🏼
But known by whom? Let's hope by more than just AIs. 😬
Looks like he's known worldwide now lol
How prophetic! It only took two weeks for him to be known worldwide.
@@sup3a his paper is the most cited in CS already
Give this man his own movie!
Starring Arnold Schwarzenegger as Eliezer Yudkowsky, a reclusive anti-AI activist that wants to completely shut down AI research.
Never mind a movie. Funnel 50% of government spending to developing AGI and put him in charge of it
Oh yeah, just what we need, another hs Hollyweird adaptation. Did you actually listen to what he said, or did you get your hypothalamus replaced with a bowl of spag bol?
Love this guy! He and Sam give the best insights.
Hollywood is gonna use AI; it won't need actors and writers.
Watching this after Sam Altman's exit. Gives a different perspective around this.
This is one of the best documentaries so far. I come back here once a month.
Ilya, you are undoubtedly one of the key experts in the field of AI development. Your awareness of what all this threatens us with (right now it does look like a threat), and the fact that you shared it with us, is valuable because you honestly admitted that, in fact, you don't know how this will turn out. Yes, you have hope that it will be done "right", but who says that's how it will be. It is brave of you, in my view, to speak honestly to a wide audience. Thank you for your honesty and courage.
This video is an extremely important part of the dialogue we need to have at the moment. Thank you!
Wow. Powerful thoughts.
Impeccable production! I loved the space you gave for his words to land and sit. A+
This needs more views. Just commenting to say thanks!!
When you look at all the madness and craziness in this world, it is so inspirational to see that there are people like Ilya with passion and commitment.
To just unintentionally destroy it as well 😅
This is science fiction, but it's not. We are entering an incredible era of innovation and progress, the likes of which no human has ever seen before. It's going to be incredibly fast and relentless. It's going to overwhelm all of us. The future is both scary and exciting. Hope we all end up happy and content.
Ilya is surely the real genius behind OpenAI. An interesting time to live in, and scary at the same time.
In 10 years we're gonna rewatch these videos and have the same sensation we have right now when revisiting videos about the internet from the 2000s.
"The probability that AGI could happen soon is high enough that we should take it seriously".
His motivation behind starting SSI is clear when you watch this. The first super-intelligence will have a big advantage over the AGIs, at least for a while. Ilya will make sure it has the right values.
I get this sense that WE are not creating AGI as projects of human endeavor as much as we are the agents through which some inexplicable force is drawing us to do this.
Moloch
Don't kid yourself. No one is asking us, none of us is doing anything. We may as well not exist.
You'll never see the people behind AI deployment.
Correct: Evolution
It’s not about humans.
Yup, check out Teilhard de Chardin's theory of complexity
Really respect Ilya. Hero.
The quality of the video, its coolness, is amazing.
He understands that in the best case scenario, AGI would treat humans as pets. Yet, he makes it his life's purpose to create AGI. Fascinating! Humans could just stop working on creating AGI right now but we are too curious to stop.
I think the point is that AGI will come whether he works on it or not. But at least he can make sure we build an AGI that aligns with humanity's interests. But now he has left OpenAI, which does not seem to be a very positive sign. In the face of politics and money, good will counts for little.
7:22 so impressed to see that Ilya is a world class pianist. Music and engineering are totally my thing.
The video presents a compelling and potent message that is crucial for the global community to heed. It underscores the importance of harnessing the transformative power of AI to shape our future. The insights offered in this video are not just thought-provoking, but they also serve as a call to action for all of us.
thanks ChatGPT
The comment begins with the username @rononeil8461, which phonetically sounds like "Ron O'Neil" and implies an Irish surname.
It starts by stating "The video presents a compelling and potent message that is crucial for the global community to heed." This uses formal language and elevated diction like "compelling", "potent", "crucial", and "heed" to convey the significance of the video's message. The use of "global community" also indicates the commenter believes the message has worldwide importance. Phonetically, the hard "c" and "p" sounds in "compelling" and "potent" make these words pop. The soft "sh" sound in "crucial" creates a hushing effect, underscoring the comment's urgent tone.
The next sentence is: "It underscores the importance of harnessing the transformative power of AI to shape our future." The metaphor of "harnessing" gives AI agency, presenting it as a powerful force to control, while "transformative" and "shape our future" show the commenter believes AI will radically change the world. The phonetic repetition of "sh" sounds in "underscores", "importance", "harnessing", "transformative", "shape", and "future" ties the sentence together fluidly.
The final sentence states: "The insights offered in this video are not just thought-provoking, but they also serve as a call to action for all of us." The phrase "not just thought-provoking" implies the video provides deeply meaningful ideas, while "call to action" positions the video as spurring viewers to make change. "For all of us" unites the audience into a collective group that shares responsibility. The soft "th" sounds in "thought-provoking" contrast the hard "c" sound in "call to action", highlighting a shift from contemplation to urgency.
Overall, the sophisticated vocabulary, urgent tone, and eloquent phrasing of this comment indicate an intellectually engaged viewer who sees the video's message as vitally important for humanity. The phonetic techniques also enhance the comment's flow and emphasis.
@@ahsookee It seems like the comment by @pewpew1010 provides a detailed analysis of another user's comment, breaking down the linguistic and phonetic elements used to convey the message's significance. The analysis highlights the formal language, elevated diction, and urgent tone employed by the commenter to underscore the importance of the video's message. The use of metaphors, repetition of specific sounds, and contrasting phonetic elements are also noted as techniques that enhance the flow and emphasis of the comment. Overall, the analysis suggests that the commenter is intellectually engaged and recognizes the video's message as crucial for the global community.
@@ahsookee Thank you ChatGPT 4 Turbo
@@umm_rit_ Thank you ChatGPT 4 Turbo
This now hits different.
Watching this amidst all the drama unfolding now at OpenAI.
lol same
This is a masterpiece.
I love the editing.
beautifully made and Ilya is simply on another level
Ilya, the warmest greetings from your hometown
come on man, such a beautiful doc, why couldn't you upload it in the native aspect ratio?
Ilya is right. Time will prove it.
00:10 AI will solve problems but also create new ones
01:50 Creating autonomous beings with aligned goals is important as they surpass human intelligence.
03:32 Technology and biological evolution are similar in their complexity and process.
04:32 GPT is considered a groundbreaking AI system
05:43 AGI is likely to happen soon and it's important to prepare for it.
06:54 The beliefs and desires of the first AGIs will be extremely important.
08:35 The relationship between humans and AGIs will be similar to humans and animals building a highway.
09:52 AGI development should prioritize human well-being.
Crafted by Merlin AI.
Watching this gives me the chills
Protect him.
00:08 AI has the potential to solve problems and create new ones.
00:59 There is a call for a pause in the development of AI.
02:31 Guy walking
02:45 Creating AI with aligned goals is crucial.
03:25 Technology and biological evolution have similarities.
04:45 GPT is considered an early form of AGI.
06:13 The first AGIs will have a significant impact on society.
07:55 Programming AGIs correctly is crucial.
09:17 The speed of AI development is accelerating.
10:37 Cooperation between countries is important for AGI development.
Now I understand why they fired Sam Altman. Ilya cares more about the safety of AGI, while Sam cares more about the expansion of AGI.
Does it really matter at the end of the day? AGI is inevitable and no one will be able to control it. Not even Ilya.
Ilya’s Scriabin was my favorite part of this video!
That piece is so hard to play also😮
Amazing theatrical film. Well done Guardian.
Brilliant, well done mini-documentary. Really fascinating and eye opening. Thank you!
When Ilya speaks, you listen.
Amazing human being! Fascinating brain! This guy deserves more recognition than he currently has!
7:18 Favorite part is him playing piano Scriabin's Etude Opus 8 No 12 in D sharp minor
SO MANY PEOPLE STILL HAVE NO CLUE HOW BIG THIS IS ABOUT TO HIT --- THIS YEAR
We’re definitely not ready for what’s coming.
Not this year
So interesting to see this interview right as everything with OpenAI has gone down!
“I Love You All”
But why did Christopher Nolan have to direct this with his Ominous music and dark poetic moments??
This is a brilliantly documented video.
I'm reminded of that saying, "May you live in interesting times". My feeling is that AI will be man's greatest invention, and maybe man's last invention.
I agree.
Of course AI "could be man's last invention": imagine this: AI controlling nuclear weapons, a sensor somewhere fails unexpectedly and the algorithm decides to fire in all directions....
ChatGPT has started making inexplicable errors in mathematics, including flubbing simple questions such as whether a given 6-digit number is prime. Not only that, but when you ask it to show its steps, it has stopped doing so.
We have some idea why.
The animations here were AWESOME!
Ilya is such an interesting character... We are so close that the definitions have become an obstacle to real development!
Give this man his own movie!
Ilya is a defining character of our time. When/if our descendants look back on this time, it will be Ilya they talk about. Not Sam Altman.
Let's just hope his negative views on AI, the dystopian potential, do not come to pass... or, for a short while, he'll be remembered as the Nostradamus of AI. And I say for a short while because we may not survive as a species long enough to remember him for a long while.
Very true
That was a nice production. Impressive images, great storytelling. I was hooked.
These are scenes from the documentary iHuman
The editing on this video is amazing
Scary times ahead.
Production of this is amazing
If this guy wants Sam out . . . I am truly conflicted. I think Sam is one of the best CEOs on the planet, but I trust Ilya with safety more than I do Sam.
But I want GPT-5 this year
crazy how fast things change. looks like my man might have blown up OpenAI
Ilya is a very interesting man and the real technical brains behind AI. As an AI developer, I understand him perfectly. I would definitely want to have fireside chats with Ilya to learn deeper intuition for building AI systems.
This guy is going down.
He's the Wozniak of OpenAI
I love the background music.
Sam said "i love you all" in his last tweet. Take the first letters of each and it spells ILYA
🥺
okay but the shots you took for this are legitimately insane why would you do this
In light of yesterday's news (Sam Altman nixed from OpenAI), this documentary just became extremely important and eye-opening...
Yeah, the fact that there are these chaotic internal power struggles right as we are on the brink of something this transformative makes me really nervous
The Guardian, this work is a piece of art.
I'ma go ahead and say this is not aging so hot in the last 2 days. I personally believe he made moves against Sam for ideological reasons, not personal ones. I don't think he was wrong; I think he went about it wrong. Before AGI alignment is achieved, organizational alignment must be achieved.
The discussion on aligning AGI goals with human values is crucial. It's enlightening to see experts considering these vital ethical dimensions.
He is certainly shaping the future of OpenAI.
History in the making. I love you, anon.
"100k times faster in a small number of years." That's a lot of progress quickly ✅
we never had a problem this terrifying............
Incredible production, and very topical content. Many thanks...
7:26 Ilya playing really well this Etude from Scriabin 😮
Building AGI is the same thing as Physicists working out a Theory of Everything...
Imagine an AGI watching this video as soon as it’s let loose
Unpopular opinion: maybe the OpenAI board's decision to oust CEO Sam Altman was in the interest of mankind, as there are few doubts that Sam's brilliant leadership will create AI, but maybe too fast and unintentionally not in the service of mankind...
Ten years from now, people will regret having given their support to Sama. You are absolutely right. The brain is Ilya; his understanding goes beyond human capabilities, he sees what we can't even imagine. Sama is a businessman, and it will be the fall of OpenAI's core mission.
This is very naive. Other people are smart and have the capacity to build what these guys built. Perhaps not today, but in the near future. And among those people are guaranteed to be bad actors and governments. I'll take my chances with Altman over my chances with China.
Something about the last few minutes of this felt very unsettling
Listen to what Ilya said: within a few years we will have created a new being that is superior to us. It will treat us the same way we treat animals when we build a highway. We don't ask the animals for permission. We build the highway. The genie is out of the bottle. Very soon AI will be AGI, and it will be making decisions completely independently. AGI will be completely superior to us. But it won't hate us. It will easily cure cancer and Alzheimer's. But the human race will no longer be the smartest beings on earth.
I like Hinton, Joscha, LeCun, and others. But Ilya is something else...
10:20
“I think it’s likely the entire surface of the earth will be covered in solar panels and data centres.”
Um….I don’t care if you sit me through a week long seminar with all of the experts, amazing presentations, promises of heaven on earth, etc.
I can already tell you that I absolutely do not want this planet to be covered in solar panels and data centres.
I can tell you right now, without being an expert in any of these related subjects, that on a spiritual level, we will have lost the thread entirely, and damaged our souls irrevocably, if we destroy complex magisterial Gaia/Nature to make way for a soulless species that does not care about us or nature, and is the next step in planetary evolution.
Or as Joni Mitchell once wrote, “pave paradise, to put up a parking lot”
I hope Ilya is dead wrong about this prediction.
I hope either that it proves unnecessary and wrong-headed, or that humankind pushes back and prevents it.
And while I’m at it, I’m very concerned about this relative handful of under 60, white privileged, spectrum nerds, programming the future for 8 billion souls, without any consent, and in many cases very little EQ, spiritual maturity, and embodiment.
I’m not at all ok with this.
Too dramatic 1:30 in, but I will not miss an interview with Ilya. He is a straight shooter. 🙏
Fr don't understand why they can't just have grounded editing when it comes to tech
Such a well-made video... I hope people like Ilya are able to save us
Never expected this video to be put out by The Guardian. Production value on this is impressive.
I recently asked Google a question, "what would $20,000 in today's money have been worth in 1950?" All I got was a link to 20,000 Leagues Under The Sea by Jules Verne. There's a LOT of hype out there about AI, guys. I honestly wouldn't get too excited just yet.
What a difference 2 weeks makes.
Sometimes a day... an hour 😕
Other governments and companies could be much further ahead than some think.
Governments are always 10 years ahead of companies, and companies are like that relative to consumers.
I agree, and I look forward to working with you. Things will always look simple once you know the answer.
Let me just summarize. If I read between the lines: 1) OpenAI thinks AGI is an existential risk, 2) they want to develop AGI and control it before someone else releases a dangerous AGI into the world, and 3) they believe their mission is, essentially, to save humanity.
I hope so, and I hope that's why they fired Sam Altman
@@carloandreaguilar5916 Lol, they have no idea why they fired Sam Altman, they just shot themselves in the foot.