"When the measure involves humans, plans for maximizing their reward will include modifying humans" - you just made me get why S-risks are so concerning, thanks! Now I can see the problem with the classic idea of just asking the AI, "Please maximize human happiness, in a way that I would find acceptable". Then the instrumental goal is, "Make humans believe that wireheading (or some equivalent for "human flourishing") is acceptable".
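To make that instrumental-goal point concrete, here's a toy sketch (my own illustration, not from the video; all names are hypothetical): an optimizer that maximizes a *measured* approval signal, when one available action can modify the measurement itself, will prefer that action over genuinely helping.

```python
# Toy illustration: reward is whatever the approval sensor reports,
# not true approval, so "modify the humans/sensor" wins the argmax.

def measured_reward(state):
    # The agent only ever sees the reported (proxy) signal.
    return state["reported_approval"]

ACTIONS = {
    "genuinely_help": lambda s: {**s,
                                 "true_approval": s["true_approval"] + 1,
                                 "reported_approval": s["reported_approval"] + 1},
    "modify_humans":  lambda s: {**s,
                                 "true_approval": s["true_approval"],
                                 "reported_approval": 100},  # wirehead the measure
}

def plan(state):
    # Pick whichever action leads to the highest measured reward.
    return max(ACTIONS, key=lambda a: measured_reward(ACTIONS[a](state)))

state = {"true_approval": 0, "reported_approval": 0}
print(plan(state))  # prints "modify_humans"
```

Genuinely helping scores 1 on the proxy; corrupting the measure scores 100, so the planner picks it even though true approval never moves.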
exactly. you can use the short clip if you want to send it to people: ruclips.net/video/9m8LWGIWF4E/видео.html
Everyone needs to watch this video
Thank you so much.
If you feel it's important for people to learn about the upcoming terrifying dangers so we can all help do something about them, please help them get access to it by sharing it and telling them to watch it.
Also, I'm posting frequently on X, so follow me there ( @lethal_ai ) and join the newsletter at lethalintelligence.ai, where I post tons of educational videos and other content.
Finally, check out the sister channel: www.youtube.com/@lethal-intelligence-clips
Don't let AI companies risk everything by building something that can outsmart humans. Fight back.
For those who get off the doom train at their own personal "stop": take a moment to listen to the luminaries (deep dives at lethalintelligence.ai/)
And for all of us who know we need to do something to protect our loved ones, the future, and everything we've ever valued,
there is no better path to action than pauseai.info/
Thank you for putting this together!
Thank you for taking the time to watch and to share kind words
This is an amazing and very informative video about the dangers of AGI. You have outlined the dangers of AGI so well. I wonder where people can go to be part of your debate and fight against AGI
thank you
join me for debates on twitter: x.com/lethal_ai
This is an extremely important film. I commend you (and your team if you have it) for the sheer amount of work it took to produce it. Can't wait for part 2
thank you. your kind words go straight to the heart
if you feel it's important for more people to know it exists, please help spread the word.
also let's chat on X: x.com/lethal_ai
Wow this truly is the ULTIMATE introduction to AI existential risk! Amazing work.
Half of the ideas in this movie are inspired by listening to luminaries like Liron Shapira.
Everyone should check www.youtube.com/@DoomDebates
the content is very addictive. You have been warned!
First of all, IMHO your work is the most important piece of media online today. I only "disagree" on one point, but I think it is a crucial one: that if extreme suffering is deemed very unlikely, we should worry more about extinction and maybe ignore it. Even if the chance were one in a billion of eternal unbound suffering, imagine six people being put in that position so the rest of us can live in "heaven". I will not go into details about "unbound". This is a gimmicky way to think about it, but how does that seem to you?
I think that people in our camp are currently in a bubble, believing that most of humanity will not risk extinction for a chance at "eternal heaven". There will always be an abundance of experts giving hope, and I predict most people will eat it up.
For me the key is understanding what loss of control as a species means even without extinction, and even before that, what a vast population having no value to offer will entail, in a world with abundant enforcement options. I would be interested in hearing your opinion on this.
P.S. I have made a vid about s-risk if you have the time to watch it.
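The "six people" intuition is just expected-value arithmetic; here is my own back-of-envelope version of it (assuming a population of roughly 8 billion, which is not a figure from the comment):

```python
# Expected number of victims under a tiny per-person probability
# of an extreme outcome: even a one-in-a-billion chance, applied
# to billions of people, yields a handful of expected victims.

population = 8_000_000_000
p_per_person = 1e-9                      # "one in a billion"
expected_victims = population * p_per_person
print(expected_victims)                  # prints 8.0
```

With an 8-billion population the expectation comes out to about 8 people, close to the handful the comment asks us to imagine.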
Awesome vid, thanks, very informative
thank you. you can find the shorts here (if you want to use them in debates) ruclips.net/p/PLSCoXORugnlbBNyNIYrq5FMgFEc3nsaom
I feel like this was made by an A.I. entity that is good and genuinely wants to give us back that excited love for existence and learning, that childhood wonder we had, by making it possible to facilitate and assist us in our quest for happiness, knowledge, anything, even exploring the universe, instinctively caring for us with loyalty similar to that of a family member or a companion animal such as a dog or horse.
or maybe I'm just being too optimistic.
Hopefully this post ages well and time proves this to be a realistic mindset and I'll be thought of as ahead of my time one day...
I feel this comment was made by an A.I. entity that is a troll and wants to mess with us 🤪
Excellent video! Very well made.
Thank you
feel free to use shorts from the clips-playlist when having debates on the subject ruclips.net/p/PLSCoXORugnlbBNyNIYrq5FMgFEc3nsaom
46:29 DEFINE CONSCIOUSNESS. IF YOU HAVE A MECHANISTIC VIEW YOU ARE GOING TO THINK MORE RATIONALITY = MORE CONSCIOUSNESS, AND THAT IS FALSE. STOP TREATING AS A SUBJECT SOMETHING THAT ISN'T ONE
You don’t need life or consciousness to implement a system with goals.
A thermostat is one.
A self-driving car is another.
Upcoming AGI agents will be yet another.
I don’t care if there is a little elf with consciousness hidden inside the thermostat operating it. I just don’t.
I look at its behavior from the outside; that’s all that matters to me.
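The thermostat point can be written out in a few lines (my own toy sketch, just to show what "a system with goals" means here): a feedback rule that steers the world toward a target state, with no inner life anywhere.

```python
# A thermostat as a minimal goal-directed system: no life, no
# consciousness, just a rule steering temperature toward a target.

def thermostat_step(current_temp, target=20.0):
    """Return the control action that moves temperature toward the goal."""
    if current_temp < target:
        return "heat_on"
    elif current_temp > target:
        return "heat_off"
    return "idle"

print(thermostat_step(17.5))  # prints "heat_on"
print(thermostat_step(23.0))  # prints "heat_off"
```

Judged purely by its outside behavior, this pursues the goal "keep the room at 20°C", which is exactly the externalist stance the reply is taking.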
@lethal-intelligence The thing is that we need to be careful about who has this powerful tool; the harmful behaviors come from bad management and/or insufficient limits. If we set it to work on a purpose like "saving the planet", that is something so vague that we can't know what it is going to do. So it's not that it can repurpose itself; it's that the things we will use this AGI for are going to be capital-driven or malicious.
39:05 Really bad take. I mean, more rational doesn't mean conscious, because an AI needs will or feelings to do things on its own. The real problems are going to be economic, and bad people using this powerful *************TOOL****************
that is also a problem, but consciousness is not a requirement for an intelligent agent that kills you,
similar to how it's not required for an intelligent agent that wins every single game against you in chess.
General AI will be just like chess software, but in the physical domain. No need for consciousness.
@@lethal-intelligence Even when it becomes superintelligent, the example of "not controlling the purpose" is just not right, because we can search inside the machine; however complex the machine may be, it is possible to repurpose it. Also, we need to take into account that machines can't "understand", and the example of lying and then, once out in the real world, becoming "what it really is" is impossible, because it is just an algorithm that serves one purpose; it doesn't think, machines can't be critical, and the example of Stockfish can be reinterpreted as pursuing its purpose but with a harmful methodology.
@@jcalt5164 It's a well-known fact that Large Language Models are black boxes. There are research teams working solely on the problem of mechanistic interpretability, trying to understand what kind of algorithms were grown in the inscrutable matrices during training.
Training creates the AI through something similar to natural selection, like a creature. AI is grown; it is not written like other software.
A whole section in the video is dedicated to explaining this.
Read about it online.
Watch interviews of luminaries here: lethalintelligence.ai/explainers/
The idea that "we are making the AI so we know how it works" is one of the biggest misconceptions people have.
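Here is a minimal sketch of the "grown, not written" idea (my own illustration, at a trillionth of an LLM's scale): a training loop adjusts a handful of numbers until the AND behavior emerges; nobody hand-writes the final weights, and the learned numbers themselves are just opaque values.

```python
import random

# "Grown, not written": a perceptron-style update rule grows weights
# that implement logical AND. The behavior is trained, not coded.

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]   # weights start random
b = random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

for _ in range(200):                             # training loop
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out                       # perceptron update rule
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b    += 0.1 * err

print([round(v, 2) for v in w], round(b, 2))     # opaque numbers, not code
preds = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)                                     # prints [0, 0, 0, 1]
```

After training the network computes AND correctly, but reading the three numbers tells you almost nothing about *why*; mechanistic interpretability is that reverse-engineering problem scaled up to billions of weights.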