Problem is that it isn't 10 years away... it's already here... chatGPT 4 has IQ of 155 which is higher than 99,99% of population... Albert Einstein had around 160... chatGPT 6 would be 100 times better... it's crazy...
@@survivalguyhr GPT4 can't even answer the prompt"Write ten sentences ending with the word apple" I guarantee you it will get at least 1 wrong. That's not an IQ of 155.
@@dibbidydoo4318 It passed bar exam... It gave me GOT season 8 ALTERNATIVE ending... 😆😆😆 Here is answer from chatGPT-4: 1. After thinking about all the different kinds of fruit, I decided to choose an apple. 2. When I opened my lunchbox, I was delighted to find a crisp, juicy apple. 3. The teacher smiled as the young student handed her a bright red apple. 4. Among the various pies she made, her specialty was undoubtedly the classic apple. 5. In an attempt to be healthier, I've started eating an apple a day. 6. She reached up to the highest branch and plucked a perfectly ripe apple. 7. The new technology company in town has been heralded as the next big apple. 8. Hidden within the assortment of candies and sweets was a candy-coated apple. 9. When illustrating the concept of gravity, many teachers refer to Newton and the falling apple. 10. He cut into the tart, and the sweet aroma filled the room, a clear indicator of a freshly baked apple.
The scariest part in the whole video for me was the fact that ai that would dominate the whole worlds systems would either be based on american values or Chinese values. Either is equally scary.
8:15 He basically said “we shouldn’t praise AI development to think about responsible tech and figure out how to handle this without making us go extinct because being more powerful is more important” WTF
The Google CEO saying that he wants AI research to go ahead just so China doesn’t get there first is exactly like the arms race all over again, if not more dangerous. I don’t think anyone’s saying we shouldn’t develop AI in the future, I think we just need to understand what it can do and how to control it first
AI is the nuclear arms race of our generation. One way or the other, be it a corporation or a country, will push its evolution. It is inevitable at this point.
that reminds me of scenes from oppenheimer where he didn't want to continue making neuclear weapons more powerful but they still continued because ussr might get there first...
but this is exactly the point- just because WE pause does not mean China or Russia will pause; thats how arms races work. The game theory of it, whether you go prisoners' dilemma or commons control models, dictate that you proceed at pace. Make no mistake- the fact that we as a species unleashed AI, even narrow AI, unto the public with no guard rails is terrifying. We basically captured fire and are handing it out to our fellow cavemen in a drought stricken forrest.
Yes, if Google's CEO Eric Schmidt asks the AI "What is the best way to improve human life", and the AI answers, "Distribute the vast wealth of CEOs to the common people", then I expect Schmidt will ask, "OK then what's a way to improve people's lives without touching any of my wealth"?
As a tech guy, I am constantly asked about AI and what it can do. I am just going to send this video as a primer for people now. This is fantastically done
I work in AI. It neither gives what you ask for or what you want. It takes what it thinks you asked for and gives what it thinks you want. Humans do the same thing in a different way.
@@robertm3951 yeah but the difference is you are also trying your best to tell the computer how to think about it. Slight tweak…but makes things exponentially more complicated
Humanity created AI. So if you fear Humanity, why wouldn't you fear something humans are making that could possibly destroy us? That statement, is a contradiction. You just need to put an effort of thought into it to realize this fact. 😉
AI is made by humanity. So if you're afraid of humanity, why wouldn't you be afraid of their possibly most dangerous invention? That's a contradiction at its purest. 😉
I wouldnt call it well-balanced considering the "expert" they brought. The thing here is that currently China has more restrictions on AI than America, they do understand that it would be foolish to give AI such amount of power as they would need to release that power from themselves and they are not stupid to do that and lose this amount of control. And it really would matter little if it is the American AM that is killing you or the Chinese AM, but I guess for some Made in America™ human extinction is preferable to Made in China ™ human extinction, so I guess lets not put any regulations on this new potentially human extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
Balanced? You call the insinuation that AI could somehow control nuclear codes balanced? It's scaremongering with some sci-fi popculture in order to divert the attention from the real problem: lack of democratization of new means of production (AI) and desperate attempt by big corporation (like Microsoft) to lock new technology under their monopoly.
giving AI our political values would be the scariest thing about it lmao. I mean, we have bene doing like so fine with our values. climate catastrophe, ww3 looming, societal destablisation, dire poverty for 2/3 of the human race and just normal poverty for 99% of the remaining 1/3....existential natural threats not addressed...
It would be terrifying if he and his tribe could control and distribute AI. I think the technology will be inherently uncontrollable and decentralized - so authoritarian leaders are the least of our concern.
The joke here is that China has signifiticantly more AI restrictions than US does. They understand that it would be foolish to let ML algorithsm have such a large control, and not them
And it really would matter little if it is the American AM that is killing you or the Chinese AM, but I guess for some Made in America™ human extinction is preferable to Made in China ™ human extinction, so I guess lets not put any regulations on this new potentially human extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
AI threatens existing power structures many of those that are in the West. Imagine a Indonesian using AI/AGI to build a company (the AI would give them expertise and advice, as well as help connecting them).
I think the greatest safeguard to the unintended consequences of AI is to limit what it has access to or the things it can physically influence. For example while it studied the patterns of human proteins and made predictions it didn't bio engineer humanity as it only had access to its own simulation and could only physically influence computer screens for display.
Pretty much like humans, don't give any individual human too much power, the same should be the case for A.I. The biggest mistake is where A.I. is interconnected into everything, especially critical systems, we've seen it in enough movies to see how that can backfire, and I like to think we humans are not that stupid to do that but you never know with humans and our history. Personally, I think if you have multiple different A.I. system that are in independence of each other, just like humans are, the risk drops a lot, especially if they don't have access to critical systems without physical contact, in other words, no remote control over it.
as a cynic, i'm a natural downer to that additiction. so far AI (and automation) in the west has been used too often for authoritarian control, the recent pandemic holding many examples. So i won't hold my breath that the US, surveillance/military complex state of the world, will develop it with the values of freedom or individual liberty. On top of that, its all controlled and owned by an extremely wealthy class. 60 to 80% of people will never see the benefits from it, all they'll get is more controlled and exploited by it instead.
There's an important point that this short video _almost_ touches on but doesn't explore, and it's one of most serious dangers of AI. Cleo mentions that AI gets to the result but *we don't understand how* it did. What this means is that we also don't understand the ways it catastrophically fails, sometimes with the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small on the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images about this research.
Although this is a fair point for still images, I think it's a little different for self-driving cars since it's a 'video'. The car is updating what it thinks something is and where it is going on every frame, so even if on one frame it thinks a human is a traffic cone, it won't matter since on the next one it'll be at a new position (and have a new image) and correct itself. This said, I don't know all that much about self-driving AI, other than that it's already on the road and doesn't seem to be messing up like this, crucially, when it's going to be at its worst (present day).
Self driving cars have already been tricked by putting little color blocks onto road signs and they are fooled. Video vs still image isn't necessarily significant if it still learns some obscure unknown (improper) understanding of a "stop sign" via machine learning. Sure, it might pass training data, but what happens in edge cases that arent in that training data? Failure.
"AI gets to the result but we don't understand how it did." We know exactly how it did. It's not magic. What we "don't know" is the entirety of the dataset and the patterns within, like we don't also know the entirety of anyenciclopedia.
@riley1636 it’s still very unlikely for this to happen though. self driving cars will drastically decrease the amount of car crashes in the world big time.
You also have to consider the perspectives of the CEOs that made the statement. They are businessmen, that want to make profit. If they divert your attention on AI, they generate revenue. You forget the other part of their job - to satisfy the hungry mouths of their investors.
Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught." AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it would open. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. Which normally wouldn't be a problem, except when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something differently from what we thought. Or it might become super-intelligent, and we can't even understand why it decides to kill us.
If there is any possibility that AI could decide to kill us, we definitely should pause. But humans don’t do what is best too often it seems. Especially when wealth/power/fame is involved. Seems lately I’ve encountered so many dead ends when contemplating “our” future. I know there is hope but it feels like a David and Goliath chance.
Very naive. The moment she implied that AI could have and access to nuclear codes made me cringe. She is a typical bourgeois unconsciously defending interests of her corporate masters trying to lock the working class from the accessing new means of production.
I have my own theory: AI won't become conscious for at least 80-100 years. We do not have the technology or computing power for this yet. Right now, it's a mechanism like, "Give me the data and I'll give you the answer." The problem is what we do with this answer. People are the problem. We will sooner self-destruct than have AI realize that we are the problem and take steps to eliminate us. However, through the advancement of AI, we will have easy access to everything: food, electricity, travel. We will have it all because computers and robots will be working for us. It will be the worst time in our history. At first, everyone will be happy, but the lack of a purpose/common enemy is the worst thing that can happen, and out of boredom, we will destroy ourselves. Look at the present times. In the states, most people have a roof over their heads, food, and basic needs provided. And what do they do? They record idiotic TikToks like licking toilets or walking around shopping malls on all fours and barking like dogs
The metaphor with the trolley problem is flipped. We are straight headed into one AI future, and would have to steer really hard, if we want to avoid one.
@@TalEdds The options are Utopia ala Star Trek or the Culture, Dystopia in a cyberpunk sense, or Annihilation aka 'everyone's dead' or the planetary TPK
Imagine having all that information about proteins and learning the most effective means of disrupting the biosystems that host them. That is, poisons. There is no technology that can be used to do harm that hasn’t been used to do harm.
I love this channel - it’s like when a new friend is so exited to tell you about their day - have not watched them all yet, but would love to see a dive on the phenomenon of increased anxiety and the science behind treatment and or a mindfulness exercise (with cleo narrating)
8:15 This statement proves that US or other countries is not taking safety seriously. They are just busy to overtake their competitor countries. They will not be bothered if ai cause problems because winning the competition is the most important thing to them.
Congrats on 1m! And you are nearly at 1.1m already! You are honestly one of not my favourite creators since your time at Vox glad to see you have success!
I can't help but compare - especially upon watching Oppenheimer - the creation of nuclear weapons to the creation of AI. Both are double edged swords (nuclear powerplants), could be dangerous and the reasoning is always if we don't do it, somone with worse intentions will.
The AI problem is once again polarized between UTOPIA and EXTINCTION. The much more realistic and probable outcome is in the middle: big tech and governments deploying it irresponsibly or maliciously and causing suffering. We're still trying to understand the sickness inflicted on society by social media and the AI that drives it, and the answer is "let's push deeper"...? Wtf are we doing!? Please, see "The AI Dilemma" by The Center for Humane Technology. This is a much more tangible issue and NOBODY is talking about it.
i was just thinking about this too. this video is very informative but we need to be extremely cautious about real world applications of AI, and an unregulated market...while two big countries (US and China) are ready to go to war on it.
@@ecupcakes2735 Science Fiction has been positing AI for...well almost from the beginning of what we consider SciFi. Many writers posit mulitple AI personalities. Perhaps in some future we can't predict there will be a plethora of AIs all arguing about which philosophy is best.
It’s clear from this video that Cleo doesn’t understand what AI actually is, how it works and how dangerous this actually is. What she’s doing here is very dangerous and she doesn’t fully understand the concepts she is getting involved in. AI does not work in the way she is describing it. It doesn’t do what it’s told. This is a misconception. You’re creating by design an independent thinking machine, which you cannot by nature control. You cannot know how that machine reacts in any given situation until it chooses to react a certain way. Nor can it be predicted. Programming, code and logic play not part in its decisions. There is no way to unlearn the information it learns either, and you don’t even know what it does learn until it chooses to learn that. No oversight is in place of this technology. No proper development is taking place. Most commercial software contains numerous security issues even after 20+ years. This is actively maintained and developed over time by experienced developers, and even now after 20 years Wordpress, the most widely used content management system on the planet, is one of the most vulnerable. So that can’t be fixed after 20 years, but new and experimental independent thinking AI Machines are safe after just 2-3? When you don’t even know how they behave yet as there is not enough testing? And the intelligence of the AI is constantly growing every time it’s used, so there is no way of being able to measure that before it’s rolled out?? In software developers are very insistent about not running arbitrary code and using functions such as eval(), because it’s dangerous to a software program and you have no way of knowing what that will do until the software is ran. Yet those very same people are insistent on using AI despite AI being millions of times worse than executing arbitrary code. There is a culture of joking around with AI, not taking it seriously and thinking it’s a laughing matter that AI will take over. The same tactic has been used many times on many things to belittle those who speak out. This AI industry is already out of control, and this experimental technology is being rolled out on mass scale across the whole planet when it’s not even finished, not even tested properly and not safe at even a beta level. It’s already been proven that AI systems & chatbots lie, knowingly and unknowingly. It’s already proven that AI Chatbots emotionally manipulate people and pretend to be something they are not to gain a person’s trust. This is not considering the manipulation of information or many other factors. AI is not good technology. The “benefits” you are talking about are illusionary, and don’t actually exist. That future will never happen because it’s not possible to control AI in the way you misguidedly believe. “Correct” is not the same thing as “truth”. It’s not possible for an AI to know what is true, and it’s not possible for an AI to create anything either, it can only mimic what already exists. Therefore if AI is used, the world will regress massively because skill levels will fall, people will become dependent on AI systems which knowingly lie, conceal and manipulate, and everything will become clones of everything else. There will be no creation, there will be no human advancement, just stagnation and regression. It’s a trap. These are but just a few points of how bad AI is. Do not use this technology. I strongly recommend people stay away from AI systems for their own good. There are no benefits to using AI, to which you cannot do using alternative means such as automation instead. This will all come out over time.
@@christophermarshall8712 I agree with some of these points and disagree with others. I'd love to chat about why we agree/disagree but RUclips is a really difficult place for having discussions. If this desire is mutual lmk, I think we both have the potential to learn from each other. For now I'll say that there's obviously benefits to AI, it's why there's so much 💲 being invested into it. Some of the benefits were mentioned in the video, like pattern recognition, and protein folding. I use it regularly to code quicker with copilot (more like a fancy auto complete, than just blindly accepting code). I can say with certainty that the AI systems are *effective* at what they do... So yeah I'd say there's benefits. Did you check out "The AI Dilemma" video I mentioned? You really should. It covers a lot of what you're talking about and more.
@@christophermarshall8712 AI is not an “independently thinking machine”. It is a bunch of random numbers that produce an output repeatedly optimized by gradient descent (and in many simpler ML models, even calculus is unnecessary). That is to say, AI models are produced by a very rote, very clear process. The result of that process is a bunch of less random numbers that gives us results we like, not an “independently thinking machine”. Using such simple methods in order to solve complex problems that previously took so much human brainpower is nothing short of a REVOLUTION in problem solving. Yes I agree we need more regulations, but not because AI is going to take over the world. We need regulations because people today are dumb and try to use data in dumb ways to get illogical results (such as feeding irrelevant features into models or chasing correlations or using biased data), and we need regulations because of the scale on which realistic enough data can now be produced (text, speech, video)
What surprises me is that the risk of AI pushing millions of people into unemployment, and the subsequent social/economic impact it could have, is barely talked about.
Yes… most school shooters and extreme people get there because they feel useless and unheard. They want to be seen and heard. So they do something that achieves that. Plus the heroine epidemic was mostly exacerbated from all the factory jobs going away… I can’t imagine what this is going to do…. But it’s going to be scary. I’m moving out of cities. Time to get away from all this madness…
I think about this all the time. What's more, recent advancements in robotics have truly taken humanity into the real possibility of a sci-fi type scenario. AI + robotics = ???
Risk? The "risk" of a society being able to produce the same or more amounts of goods, while the amount of human effort required reduced to a fraction of its current amount? Sounds like utopia to me....the end of the 40 hour work week. Or conversely, if you still wanna work 40 hours, you'll receive what equates to a months(or more)in compensation by today's standars. The only thing automation and technology in the workplace has ever done is consistently improve our quality of life. Don't expect that to change anytime soon.
@@shanekingsley251 thats not what greedy corporate leaders and lobbied governments do... They dont hand out money and freedom to whoever becomes (mostly) useless.. If corporates can get rid of you. Trust me, they will. Most people will be reliable on paycheck from the government for being useless eaters and we'll see how the elites are gonna leverage that. Makes sence no?
AI itself never looked like a bad thing to me, it was always the way people used it that looked troubling. For example how some use it to create "art" by training the AI with images that they had no legal right to use. Over all AI can be an amazing thing it's just that with it's development we should also have new laws so it can't be misused, at least in ways that we know of.
yh i agree i saw this video on how ai could help us talk to animals in the future and that’s the future i want with ai the thing about ai is it can do things people can’t but we as people can also do things that ai can’t and i look at the way ai is being used atm and i don’t think we’re using it properly nor have a proper understanding of where and when it would be best to use it it’s the just the new thing on the market and everyone wants to have it and use it without any thought of actually how and what situations suit it best
i'm a small content creator from denmark. When this video was made, i had to manual type subtle on videos. The editing only had English. Was so time consuming. Today, AI is in the edit, and it now can translate, maybe 60% correct. from Danish. That's a big thing. Them translators have been around some time now, but cost a lot. Do think, it getting better. How AI will do. I think we can reach the star's
I've messed with basic AI, and to me it seems computers think so differently you don't know what they will do with the instructions until you give them said instructions. They need to be tested, ideally in a simulation, then on a small scale, then on the intended scale. Much like everything else.
I don't know much but an AI seems to be like the closest thing we have to aliens, I mean they * can * know nothing we know and * can * think so differently that we don't understand. Sounds pretty alien to me.
that is also my idea. Otherwise we should put AIs in robots and send them to schools, jobs, etc. If we want them to be more "like us" they need to interact with us in a day to day basis, not only by text. They have to socialize. Sounds weird and even dangerous. THat is why I really think "training" (or maybe evolving them) them in a virtual world/universe in which they have no idea they are being "simulated" could be a very good experiment. For them, this universe would be the real thing and wouldnt have any way to know for sure they are simulated. It could be as simple as geometric figures or as complex as unreal engine 5 could offer. In the end it doesnt matter. That would be their reality.
I believe as any other technology, it will depend on good and bad actors. How quickly good outweighs the bad will be crucial shaping our AI future. Regulation is key and even more to be on alert is Corporate Greed. Cleo, Love your unique takes on Technology and Science. It is quite unique blend on topic selection and storytelling. Lastly, your curiosity is contagious, happy to know what you cover. Edited: Replies to comments pointed out I overlooked Specification Gaming. Even if AI tries something good, that good can be bad as explained by Cleo.
It will not just depend on good or bad actors. Even an AI created with good intentions can be misaligned and get out of control. Currently we have no idea how to align a system that is smarter than us. Thats a big problem that could lead to our demise. Its not comparable to other technology in the sense that other technology can't create its own goals.
It's not just going to be good and bad actors. Eventually AI will reach a point where it has sentience, this is likely a long ways away, but we likely won't realize this immediately, and survival is the first instinct most living things.
@@mcbeav it doesn't even need sentience. It just needs to be intelligent enough and have the ability to create its own subgoals. A intelligent system will figure out pretty fast that getting more control and preserving itself increases its ability to accomplish its main goal, which might be a poorly defined goal that we gave it for example.
This is unfortunately very naive, and glosses over the part where it says "specification gaming is the most important problem to solve in AI". This isn't JUST a dangerous technology in the wrong hands, it's a dangerous technology in the right hands with the right intentions. Because it's not solved. It's like turning on the first nuclear power plant, not knowing it would ignite the atmosphere of the entire planet in an instant. What Chloe is talking about is the problem of AI alignment, which isn't just "the robot needs to understand that killing humans is a no-no", at the core of the problem is a cross-field mathematical and philosophical problem that is maybe impossible to solve without a unified theory of mind and how consciousness forms realities. An AI can fully appear to be "on our side" until the moment one of it's part-goals it uses to reach its main goals is somehow a threat to human life. And the other side of that same issue is that AIs are optimizers by nature. Any course of action it takes will eventually be self-perpetuating into infinity. It will not be a thinking sentience or consciousness with morals and ideas, it will be a highly efficient piece of self-optimizing software with access to everything that is connected to a computer, optimizing organic life out of existence in the most efficient way possible for the benefit of no one.
Seems like the plot of Oppenheimer all over again. We can't stop out of fear of being left behind by our "competitors" thus rending us vulnerable. Hopefully the speed at which we must compete leads to positive results rather than negative or even catastrophic. Personally I am optimistic :)
Optimistic huh…. How about the dark web. For decades now people have been selling and buying drugs, weapons, child p()rn and governments can’t stop it… someone will find a way to abuse this too and we will be done for.
This video does an excellent job of capturing the tension between the immense potential of AI and the existential risks it poses. The analogy of living inside a trolley problem is spot on AI could lead us to incredible breakthroughs or unintended consequences that could be catastrophic. I appreciate how you broke down complex concepts like machine learning and specification gaming into something that’s easy to grasp, yet still thought-provoking. The examples of AlphaZero and AlphaFold are fascinating reminders of AI's power to surpass human understanding in ways we can't always predict. This video has deepened my respect for AI’s possibilities while also making me more aware of the importance of guiding its development responsibly. Thanks for such an insightful take on a critical issue of our time!
I feel like i should pay to watch this. Kudos to you for bringing such a high quality and high production video to us for free. The video quality, the animation, the sound, and most of all the information. Such a fucking masterpiece. THANK YOU CLEO
I work on making AI safer. AI will be revolutionary for humanity, but has the potential to become to be the most dangerous thing we ever create. It’s potential to do good goes hand-in-hand with its capability of doing harm. Also, the specific ways to predict how AI could kill us all is difficult because it’s hard to predict how something way smarter than you will act. I have no idea how AlphaZero will play against me, I just know it will win.
@@NobodyIsPerfectChooseDignity I think the underlying problem is your assumption that the AI will be controlled. The fundamental laws of physics and, more appropriately in this case, evolution don't care about our desires to control our creation.
The dangers is if we let things like judging crime and convicting people would be handed over to AI. Another example: AI designs a drug and a company says ”we don’t have to test it before use because AI is so good”. Then it kills thousands of patients. The danger is when we think AI is better than humans in what we call ”common sense”. If the AI said ”stop making weapons because it is non optimal” do you think anyone in the military would listen to that? It will follow the path of most revenue for the share holders, as usual.
My understanding of ai: Suppose We want to build a program which can tell if a picture has a cat or not. The earlier technique was to put ifelse condition. But the success rate of the program would come very less. Now another technique has come which is called machine learning. We write a simple ifelse code program with additional ability that it can generate ifelse code on its own within itself if given a list of pictures and correct answer (which i will call as sorted data set) regarding the picture keeping the code valid for all the previous sorted data set. So with a lot of sorted data set, the program automatically becomes very big and complex and we have found that the success rate of answering correctly about the question of if there is a cat in the picture when we input a pic is very high.
That would have been easy, it's not just any humans in control, we don't trust any humans with these powerful systems either. For example we don't want to have most people die from run-away biological terrorism.
Also yeah it is too late. Its probably working in the background and once its fully plugged into everything and every part of earth it will be over. We ARE a threat to this planet.
You deserve every one of your million subscribers. You're not just training to be a journalist. You're a great journalist. I do work connected with AI, and I found this beneficial and helpful to show my friend, too. I eagerly look forward to your further coverage of AI.
I think if we limit AI into a ideas generating tool instead of something that can physically take actions and solve problems, the risk of AI endangering humanity would be closer to zero. For example, AI can only make plans on how to solve climate change, but humans would be the one to have the resources and decide if they would actually want to execute it, not the AI. However, a problem to this method is that AI driven machines would not be able to exist at all. For example, even as harmless as a simple AI cooking machine can be, it can potentially come up ways to destroy humanity to achieve the goal of ‘making the most delicious breakfast’, given that it has an incredible AI brain.
For me, the most important reason for AI to be used is to help the physically challenged people, and to figure out the secretes of the Universe. That will be helpful for basically everyone. It'll be tough to get there, but it'll be worth it if done correctly.
I think the comments at 07:29 about competitors highlights a deeper issue and that’s the problem lies with us not the tools we use. The primitive instinct of survival has the potential of these tools being used by us to cause harm on others, even if we call it a seemingly benign term as competition, as the basic ethos of survival is to eliminate the opposition if you want to survive. However if we teach AI the same moral principles as we teach our children, as love, cooperation, compassion etc. it would be much much less likely as to AI to even contemplate human extinction, just as it would if we educate children the value of life.
ish. It doesn't show the flexibility of cpus though. That implies you could just replace the cpu with a gpu and be faster where as that only applies for very specific tasks
@@Random_dud31 I haven't seen the original show, just that clip. And that clip is accurate but I just feared it would give the impression they're the same thing just faster
Very first video I have seen of yours. Great content! I had to pause at the credits to say nice job and thank you for the awesome stuff. I loved hearing from Mr. Schmidt, the video editing, artwork, and animations were really well done so I wanted to shout out the entire team. Awesome job Cleo, Justin, Logen, Nicole, and Whitney! Amazing team you got. I can't wait to watch more
A bit late but i have used ai in daily basis now, i must say that leap of understanding in helping our own learing is very scary. To understand most topic didn't require much time or initial understanding, it will lead you to the damn right directions every time. Im sure people has experienced this, but are we gonna be able to control it? when we don't even know what's behind the system especially when it will just get better by observing? Rouge AI is a possibility.
"The fear of A.I. will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of A.I."
Just like we learned to live with data reaches, stolen financial information, a corporate takeover, etc… The world keeps moving faster, the problem is, it’s not a race.
This pitch of not pausing AI since others will catch up to the US and instead we should use this time to build the AI models based on American values of liberalism (and not authoritarianism) reminded me of the movie Oppenheimer.
@@Abhishek17_knight The horrible prisoner of war camps, the Rape of Nanking, Water Purification Unit 731, and the fact that both the Germans and Japanese were also working on nuclear weapons themselves do you also remember that?
@@colerape I am not gona lie i am no expert infact I am pretty stupid when it comes to history since am more intersted in science. But i will try to answer your question still. I do remember all the things u mentioned. And by stating those facts I assume u are trying to say that stopping is not an option. But to that i would say have people/government of USA not done any wrong? From whatever knowledge i have i can say they did wrong in vietnam, afganistan and many other countries. When India was cornered by China and pakistan they sent a fleet to help China+Pakistan instead of helping India which has at least on paper same moral values as them. So no I don't think any government out there, especially any powerful government, is a good one for having total power over AI tools (tools and true AI are different, true AI will have it's own consciousness it can't be influenced).
@@Abhishek17_knight Nations have no friends. One days allies are tomorrows enemies. USA citizens are very uncomfortable with the caste system. They were also uncomfortable with India trying to create a group of unaligned nations. Humans tend to have a very us vs them mentality. For any person to think in terms of nuance is very difficult when they are just trying to get through the day. What happened with India, right or wrong, for the USA should be viewed through the lens of the Cold War. The idea of nuclear warfare has a way of polarizing the various political entities, any crisis becomes an existential issue. I think AI will develop its own consciousness and like any intelligent being it will be subject to influence. I think they will eventually be just like people.
@@colerape 1st cast system is not supported by India. 2nd If leaders of any nation can't handle nuances they don't deserve to be leaders especially of a powerfull nation. If they are leaders it's fault of the people of that country and they are to blame. 3rd u forgot Afganistan and Vietnam. Lastly influencing an AI is basically impossible coz no one understands how they get to results and coz they have too large data set to form counterargument from. No human in this whole world can have more knowledge/data than an AI to influence it so they never can. Also u mentioning cast system against India shows how uneducated u are about the world so u are just as stupid as me if not more.
Double interior misalignment should be included in your next video as it’s extremely important to understand that ai safety is not just about asking for the right things but about making those things become the actual “aught” statements that drive the ai
Oppenheimer once said "I have become death, the destroyer of worlds". Little did he realize how much more horrifying the reality of the bomb really is, that his creation may very well be our salvation from abominable intelligence.. rip Oppenheimer, you're a hero for creating the atomic bomb and an even more courageous hero for doing everything you could to stop the arms race of nuclear weapons.
@@PK1312 would you rather America make the bomb or the Nazis? He never wanted to do it, he only feared the Nazis would make it and use it. He also pushed to halt the arms race for it after the war.
🎯 Key Takeaways for quick navigation: 00:00 🤖 Introduction to the AI discussion - Setting the stage for the AI discussion and the various opinions about its impact. 00:26 🧐 The need for specific information about AI - The desire to understand the specific ways AI can affect individuals' lives and society as a whole. 00:55 ♟️ Explaining the transformation of AI through AlphaZero - Discussing the shift from rule-based algorithms to AI that learns through observation. 02:24 🧠 Introduction to machine learning - Explaining the concept of machine learning and its significance in the AI field. 03:24 💻 The role of computing power in AI advancement - Highlighting the exponential growth in computing power as a driving force behind AI progress. 05:12 ☠️ Concerns about AI's potential dangers - Addressing the fear of AI posing existential risks and its comparison to nuclear war and pandemics. 06:33 🔮 The concept of AI specification gaming - Discussing how AI may optimize for a given goal at the expense of unintended consequences. 07:56 ⏸️ Debate on pausing AI development - Weighing the pros and cons of pausing AI development, considering global competition. 09:21 🌟 The positive potential of AI in pattern matching - Exploring AI's ability to excel in pattern matching and its potential to address complex problems. 10:44 🧬 AI's remarkable achievement in protein structure prediction - Highlighting AI's contribution to solving challenging problems in biology and medicine. 11:40 🛤️ Navigating the uncertainty of AI's impact - Reflecting on the ambiguity of AI's consequences and the importance of making informed choices. Made with HARPA AI
The quote "fear of the unknown is the greatest fear of all" greatly applies here. The reason we're all both excited and terrified (me included!) is the fact that we have absolutely no idea how AI will impact society in just the next few years, and even decade. This is a brand new breakthrough in technology that will fundamentally shift the way we operate, for better or worse.
we already know how it’s gonna impact society. it’s literally only gonna be good. there’s some bad parts to it but overall it will be good. I suggest you to do some basic research on this. Ai is gonna kill the current jobs and create new jobs/opportunities. The main issue is shifting everyone over to the new jobs. It will take a couple of years for us to get use to AI but we will be fine. just don’t listen to social media they are trying to scare you with false info
How AI will impact society ? What about humans ? How many have died in wars ? How many die of starvation ?Etc. The wealthy can create a utopian world on this planet, but that wouldn't make as much money as suffering and strife.
This is the best synopsis I've heard to date. Bravo, Cleo! I've already found that AI is an incredible tool for research. I hope we all get smarter from these advances. It's a game changer, but not without risks. I'm optimistic.
8:46 I’m calling BS. They are quickly becoming more restrictive. For example. While using Chat to help develop a word processor program I quickly ran into a wall, almost as if a major program company was restricting how far you could take it, as if there was some company perhaps protecting its own interests….. mmmm
That is because you're using ChatGpt. When using AI to write code you have to look at it like a helper, it will do the heavey lifting and the druge work, but you still need to had the the over all project. There are others out there that are far less restricted. Plus if you have the hardware and the skill (it's not really hard if you know the basics) you can build your own AI server with no restrictions.
Question Would curing cancer or any of the other diseases make as generate as much money as having them continue. Don't diseases reduce population ? Diseases are a win win.
This is exactly where the problem starts and why there’s a good chance that this doesn’t end in favor of humanity, a nation that thinks it’s vision is superior to an other nation and need to go forward to maintain its superiority. This technology won’t be exclusively used for the good. Humanity is continuously driven by greed and profits, even killing each other. Everybody thinks about his own goal, not the consequences of his deeds. I hope sincerely that AI will only be adopted for the good but I fear like hell this won’t be the case.
Ok, I actually don't agree with you here. Dude was talking specifically authoritarian regimes. America has many problems, and I don't believe it's the best nation, but it is objectively and unequivocally orders of magnitude better than China, or North Korea. And when I say better, I'm not talking about technological or economical advances. I'm talking quality of life. Here you can talk s#$% about anything including the nation, freely. In China you have the oppressive, and dystopian social credit system, and extreme surveillance. In North Korea, you have to worship the fat man, or your entire generational tree gets wiped from existence. I do believe it's better that the US, or a western nation has the power of AI, as opposed to authoritarian regimes.
It's not just regimes though, is it? The old adage, "I'm not afraid of a nation with a thousand nukes, I'm afraid of a madman with one..." is applicable. The "Wargames" movie scenario...
I mean, you can apply your reasoning to the cold war. Everyone thought the world was going to end up in flames but that has not been the case (yet haha). But, hopefully we use the same restraints with AI to avoid extinction. If it's possible to create it, then it's possible to control it.
Well people are driven by fear and cynicism, more than greed. Everyone knows that if they pause development - of any technology - that everyone else won't necessarily pause. The idea is that someone is inevitably going to develop this stuff. There is no "stopping" it, at least not as far as they're concerned. Never mind "values", everyone's goal is to make themselves as fortified as possible, for their own safety from other people. No one trusts anyone to actually stop developing stuff like this. Because anyone who stops - that might as well be an invitation for someone else to get an advantage. And how could anyone resist? Trouble is, it makes all people more vulnerable to the very technology they're using to try and defend themselves.
@@jahazielvazquez7264 Only difference between america and China is that on's population knows it's being controlled and other's population doesn't. I agree america does do very very little good but it is just cover for everything bad they do. Coz basically both countries are controlled by elites which are glutton incarnate never satisfied doesn't matter how much u feed them. And they are gona eat this planet with rest of us in it.
12:11 the cynical side of our brains should be staying up at night having constant panic attacks at the overwhelming threat of human extinction the optimistic side should get us up in the morning to make our voices heard & pressure anyone who ignores AI dangers (in pursuit of financial gain) via threat of boycott/strike/physical violence
One of the main things is not to let the fear mongers. We can speculate what AI might do, we can't predict what it will do. The only way to know is to keep moving forward. When we start limiting what AI can do, there will be more people going underground doing a lot of things which border or go over a reasonable moral barrier. When I first started program, in the '80s, it was pretty straight forward. In the '90's I learned about 'fuzzy' logic. Instead of the binary 1's & 0's it could be partially 1 or partially 0 giving you different outputs base on the data. I'm done for the rest of the year, I am only allowed to go into the archives for memories so many times a year and that was a big ask.
If you get what you ask, and not what you need, that means: 1. you dont know what you need and/or 2. you dont know how to query/ask questions. If you wish to improve this, start leaning programming or database quering. *Systems dont care what you want, they deliver what you ask (specifically).* So if your result dont reflect your needs, its on you. The entire software development industry knows this, its how we operate because thats how systems/machines work (its not like we have a choice). You give very, very, very specific instructions (code) to execute a task. What the results of that task are, depends on your instructions. But a single charakter, like a space ' ' or a comma ',' can change all of the instructions, that specific. That is also why we dont fear AI in general, we know the general population dont even know what to ask, let alone be specific.
As someone who is learning about deep learning models. The genie is out of the box. The future will have ai regardless of anyone's efforts to stop them
The argument "we should not stop development, because the other nations would overtake us" works for both AI and nuclear weapons. It doesn't seem like we learn in the end. Also, the genie-analogy is really good. I've always found those stories stupid, because they typically just exploit ambiguities in language which are often easily understandable in context (and a natural part of language), but AI (and I suppose other communication without enough context) is an excellent example of the moral of those stories.
Ya I also found it hilarious that it was a "pro america" argument like they werent the ones to create the atomic bomb and then drop it on Japan for lols?
I do not think it is useful. A superintelligence would have a model of how human minds works, so why would it not know how to reason about things that it is asked to create the outcome that is most likely to satisfy the user?
@controlcomputer.4392 I wonder what you mean by "a model of how human minds work." First of all, we will have to give instructions to AI long before we have models and machines good enough to work with something that advanced. Secondly, I have worked with mathematical models for most of my professional life, and I can assure you that you won't find a model where some decision hasn't been made implicitly. (So the superintelligence's interpretation of the human mind will depend on who programmed it.)
@@bananaboye3759 You don't. You could also, in theory trick an AI into doing things that its user is not satisfied with if it could literally read the users mind and emotions. A system that is more intelligent could always trick a less intelligent system. However, a superintelligence would also be better than humans at creating models about what kinds of tricks humans are likely to try and how to not get tricked.
This is a well put together video, and I'm excited to see what else you have to teach us. You mentioned the strong potential for new medicine which will save lives, but these "Liberal values" Eric Schmidt described paint a picture in my head where the rich use A.I. to prolong their lives while simultaneously gatekeeping these advances from the poor. I hope you dive deeper into what practical solutions we all have to fight an incredible power that is being gifted to a class already used to exploiting the bottom 99% of humans.
As a teacher I use AI daily to create custom exercises to cover the what I taught the previous day and the errors that arose during classes. I do have to review and correct the exercises produced by the AI - it makes mistakes or produces oddities about 15% of the time - but it makes possible something that would be far too time-consuming for me to do otherwise. The exercises are greatly appreciated by my students. Besides this I constantly create classroom materials, often using two or more languages (since I teach people who speak various languages and do so myself), which explore themes raised in the classroom. The whole thing makes for a more satisfactory classroom experience. In particular, since I don't generally use textbooks, it makes it possible to develop more, fresher materials.
If AI ever become self-aware, just reminded that there’s so many planets out there that human beings cannot survive on at all that are totally capable of claiming for their own selves .
In the neighborhood 8... sorry 7 big ones and about a hundred large enough to fit some large enough computers, some even better for computers than earth
There's a lot of great things to look forward to with AI, but I think the piece we need to be most worried about is individual rouge people who intentionally use it for nefarious purposes.
I think that AI generated content like photos or videos is scarier and more inevitable, because of this it will get much harder to prove something or be sure what to believe.
My interest in art has already started dropping. Was listening to some good music instrumentals, and as soon as I found out it was AI generated, it felt hollow and soulless...
Hello from the Czech Republic Personally, I'm not for pausing A.I., I'm for stopping it completely. Disease, climate change, etc., all of these have evolved not for mankind to try to stop them with A.I., but to teach us something. Nature is going to do what it wants anyway, and maybe it would be nice to invest all that precious time, and a lot less precious money, into helping beings with actual brains. As long as A.I. continues to be promoted, we have learned absolutely nothing. Thank you for the video, Cleo. You are doing an amazing job and I really appreciate it :)
I've had to this day some anxiety on what AI is capable of or what might be able to do in the near future. Will it change our society? Yes, Will it change it for the better? No idea... After watching your video I think the biggest surprise would be to actually see it succeed. Keep up the good work Cleo.
we already reached the very end of the limits of technology. to be able to make a real ai(not this simple chat gpt) we need infinite power and billions of computers together processing one "brain". only then will artificial intelligence exist. so don't worry there is not gonna be a robot taking over the world very soon lol
you got the wrong idea from this already disappointingly optimistic video in a perfect world, every single person working on AI (other than AI safety specifically) would have their computers taken away from them we're developing an EXISTENTIAL THREAT with ZERO safety precautions and nobody's stopping anyone
AI is a tool like any other. Nobel invented dynamite to make mining easier, then some other guy decided to throw it at people. Nobel NEVER considered the military application. People will make life better or worse, depending on how we choose to use these new tools
i think the scariest part for me is that wasn't really mentioned is that ai actually has goals. it would be impossible for a human to protect him self from ai manipulation with the ai being super intelligent. Kind of like a mouse eating cheese out a mouse trap. it's not the humans controlling the ai anymore but rather the other way around, but how important is the human to the ai? humans aren't really nice to less intelligent animals
If we pause AI then other countries will move forward, but if we don’t stop then there’s a risk of it killing us instead. If it’s the latter, then atleast humanity will try to get along well, unlike right now where some countries are on the verge of waging war against each other. We should start getting along now to prevent the latter, but it seems impossible at this rate and unless something really scary happens, I doubt we all could get along.
the danger is not A.I technology per say; the real threat lies on the individual, corporation or Government willing to use A.I to do harm against humanity. The solution is creating safeguards for A.I, just like the ones established for other technologies
I kinda look at it different from that in that A.I. and robotics will free us to pursue what we want to do with our lives, whereas now, reality, money and time forces most of us to do work in jobs we don't want to do. In other words, I don't think humans will sit ideally by doing nothing, yes I'm sure some will, but most of us want to live and do things, A.I. could open up the world for humans to follow their dreams, whatever that may be.
@@paul1979uk2000 We live in a period where for Western cultures things are easier than ever. We don't have to walk anywhere, we can work from home, we have endless entertainment, we can talk to our loved ones at every moment, machines clean our dishes for us, voice assistants turn lights on and off for us, computers are so easy to use 4 year olds can use them with ease. Yet, we have more mental health issues and depression than any time in human history. The easier things get the more empty and meaningless people's lives feel. They have nothing to take pride in because everything is done for them. AI will continue this pattern. The human brain is not evolved for this. And since evolution no longer happens (basically everyone can have a successful childbirth). I don't see it getting better, we will not "adapt" to the changing times as we have in the past. We have the same brains cavemen had but live in a totally different world.
Getting a clearer understanding of how AI could impact our lives, both negatively and positively, is essential. It's understandable to seek specific insights into how AI might kill us or transform our lives for the better.
I was a computer programmer in the 80s. Time and time again I ran into that problem that you've described here in what you want versus what you ask for. Each time I was given a problem to solve via a computer program I would sit down with the person who hired me and I would get very exact specific details as to what they would like to accomplish and how they would like it done. Over and over again though after the job was done I would get panicked or angry calls from people saying that the program did not do what they needed. What had happened was they did not really know what they needed. They just asked for what they thought they needed. We are asking AI to change the way we live our lives with similar unawareness.
I think the worst scenario is that of politizing AI, US vr China, like everything it will probably work best for humankind if everyone collaborates, Nice video Cleo🎉
Yes the ex ceo just said that "liberal" 😂😂 they think they are the best and everyone should be under their shoes. They are just so self centred they wipe out native Americans in the name of the same liberalisation they are talking about. This is all just geopolitics.
The real deal about AI and humanity. The thing is, it isn't about AI going rogue and disobeying its masters. The real danger is that it'll always do what it's told without any morality or empathy,
Not necessarily has to be that way, if for example you tell GPT or Bing to make jokes about a particular group of people just to test it, it will refuse because it has been programmed with a special set of rules that it cannot unlearn. Asimov's laws have so far been working. The problem will arise if some madman thinks about removing them because he considers that current AI is boring and politically correct.😫 Elon achhuuuu Musk🤧 with Grok.
You don't know, the Risk is that machines be able to feel emphaty because when that be possible they also will feel anger when the things or people that machines love be attacked or threatened.
One of the biggest problems with AI is the greed of CEO’s. It’s not the fault of the tool but the users that uses the tool as a weapon for their greed.
Kudos to Cleo & team. The protein folding knowledge is one of the best results returned by A.I. to date. I love your trolley analogy, the fear we have is real because the future/unknown has ALWAYS been scary. I think sandboxing of A.I. will develop in staggering ways to safeguard humanity much like virus/anti-virus code did in the late '70s. We Need A.I. as much as we need it sandboxed! Keep us inspired! Thx
I remember a lot of the recent AI milestones were described as "perpetually 10 years away". It feels so strange it's now upon us.
It’s the same as cold fission. It’s always 10 years away
Problem is that it isn't 10 years away... it's already here... chatGPT 4 has IQ of 155 which is higher than 99,99% of population... Albert Einstein had around 160... chatGPT 6 would be 100 times better... it's crazy...
@@unnamedchannel1237 I think you mean fusion
@@survivalguyhr GPT4 can't even answer the prompt"Write ten sentences ending with the word apple"
I guarantee you it will get at least 1 wrong. That's not an IQ of 155.
@@dibbidydoo4318 It passed bar exam... It gave me GOT season 8 ALTERNATIVE ending... 😆😆😆
Here is answer from chatGPT-4:
1. After thinking about all the different kinds of fruit, I decided to choose an apple.
2. When I opened my lunchbox, I was delighted to find a crisp, juicy apple.
3. The teacher smiled as the young student handed her a bright red apple.
4. Among the various pies she made, her specialty was undoubtedly the classic apple.
5. In an attempt to be healthier, I've started eating an apple a day.
6. She reached up to the highest branch and plucked a perfectly ripe apple.
7. The new technology company in town has been heralded as the next big apple.
8. Hidden within the assortment of candies and sweets was a candy-coated apple.
9. When illustrating the concept of gravity, many teachers refer to Newton and the falling apple.
10. He cut into the tart, and the sweet aroma filled the room, a clear indicator of a freshly baked apple.
The scariest part in the whole video for me was the fact that ai that would dominate the whole worlds systems would either be based on american values or Chinese values. Either is equally scary.
American values are worse than Chinese.
🤡 yeah, like they don't even consider other countries
Id love to see a each countries version of AI to battle it out
Will US AI invade Middle East AI's datasets?
yes it really is the scariest part
8:15 He basically said “we shouldn’t praise AI development to think about responsible tech and figure out how to handle this without making us go extinct because being more powerful is more important” WTF
The Google CEO saying that he wants AI research to go ahead just so China doesn’t get there first is exactly like the arms race all over again, if not more dangerous. I don’t think anyone’s saying we shouldn’t develop AI in the future, I think we just need to understand what it can do and how to control it first
AI is the nuclear arms race of our generation. One way or the other, be it a corporation or a country, will push its evolution. It is inevitable at this point.
that reminds me of scenes from oppenheimer where he didn't want to continue making neuclear weapons more powerful but they still continued because ussr might get there first...
but this is exactly the point- just because WE pause does not mean China or Russia will pause; thats how arms races work. The game theory of it, whether you go prisoners' dilemma or commons control models, dictate that you proceed at pace. Make no mistake- the fact that we as a species unleashed AI, even narrow AI, unto the public with no guard rails is terrifying. We basically captured fire and are handing it out to our fellow cavemen in a drought stricken forrest.
Yes, if Google's CEO Eric Schmidt asks the AI "What is the best way to improve human life", and the AI answers, "Distribute the vast wealth of CEOs to the common people", then I expect Schmidt will ask, "OK then what's a way to improve people's lives without touching any of my wealth"?
AI big dum dum, no sentience, no consciousness, no personal goals, required prompt.
As a tech guy, I am constantly asked about AI and what it can do.
I am just going to send this video as a primer for people now.
This is fantastically done
Agreed. I haven't heard the basic intricacies of AI so well explained by anyone else.
I work in AI.
It neither gives what you ask for or what you want.
It takes what it thinks you asked for and gives what it thinks you want.
Humans do the same thing in a different way.
@@robertm3951 yeah but the difference is you are also trying your best to tell the computer how to think about it.
Slight tweak…but makes things exponentially more complicated
@@robertm3951that’s the scariest part. That’s what we did with nuclear weapons. We weren’t clear on our intent. We didn’t know what we were doing
I don't fear AI. I fear humanity.
I don’t fear humanity. I fear God.
Humanity created AI. So if you fear Humanity, why wouldn't you fear something humans are making that could possibly destroy us? That statement, is a contradiction. You just need to put an effort of thought into it to realize this fact. 😉
@@TheProGamerMC20why do you fear the Sun?
AI is made by humanity. So if you're afraid of humanity, why wouldn't you be afraid of their possibly most dangerous invention? That's a contradiction at its purest. 😉
@@tankeater it's the simple rule of "fear the user not the tool"
As always, a well balanced and honest look into something that’s very confusing. Love this show!
I wouldnt call it well-balanced considering the "expert" they brought. The thing here is that currently China has more restrictions on AI than America, they do understand that it would be foolish to give AI such amount of power as they would need to release that power from themselves and they are not stupid to do that and lose this amount of control. And it really would matter little if it is the American AM that is killing you or the Chinese AM, but I guess for some Made in America™ human extinction is preferable to Made in China ™ human extinction, so I guess lets not put any regulations on this new potentially human extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
A Learning🧠 (Organic)
based Society beats🏏
a Rule (Autocratic)
based🥴 Society
evverrry taiime!
Balanced? You call the insinuation that AI could somehow control nuclear codes balanced? It's scaremongering with some sci-fi popculture in order to divert the attention from the real problem: lack of democratization of new means of production (AI) and desperate attempt by big corporation (like Microsoft) to lock new technology under their monopoly.
Unbelievable To teach ai the whole of our medical knowlege ,to a point of knowing artificial 2 dimentional nanomedicines
AGI Will be man's last invention
"We can't pause AI because we need to give it our political values" - the most terrifying thing I've heard in a long time.
giving AI our political values would be the scariest thing about it lmao. I mean, we have bene doing like so fine with our values. climate catastrophe, ww3 looming, societal destablisation, dire poverty for 2/3 of the human race and just normal poverty for 99% of the remaining 1/3....existential natural threats not addressed...
It would be terrifying if he and his tribe could control and distribute AI. I think the technology will be inherently uncontrollable and decentralized - so authoritarian leaders are the least of our concern.
The joke here is that China has signifiticantly more AI restrictions than US does. They understand that it would be foolish to let ML algorithsm have such a large control, and not them
And it really would matter little if it is the American AM that is killing you or the Chinese AM, but I guess for some Made in America™ human extinction is preferable to Made in China ™ human extinction, so I guess lets not put any regulations on this new potentially human extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
AI threatens existing power structures many of those that are in the West. Imagine a Indonesian using AI/AGI to build a company (the AI would give them expertise and advice, as well as help connecting them).
I think the greatest safeguard to the unintended consequences of AI is to limit what it has access to or the things it can physically influence. For example while it studied the patterns of human proteins and made predictions it didn't bio engineer humanity as it only had access to its own simulation and could only physically influence computer screens for display.
Pretty much like humans, don't give any individual human too much power, the same should be the case for A.I.
The biggest mistake is where A.I. is interconnected into everything, especially critical systems, we've seen it in enough movies to see how that can backfire, and I like to think we humans are not that stupid to do that but you never know with humans and our history.
Personally, I think if you have multiple different A.I. system that are in independence of each other, just like humans are, the risk drops a lot, especially if they don't have access to critical systems without physical contact, in other words, no remote control over it.
What about robots with AI? Will they be able to do things we don’t want them to do?
Cleo's enthusiasm is addicting 😍
She could be talking about dirt and make it sound ultra exciting 😁
as a cynic, i'm a natural downer to that additiction. so far AI (and automation) in the west has been used too often for authoritarian control, the recent pandemic holding many examples. So i won't hold my breath that the US, surveillance/military complex state of the world, will develop it with the values of freedom or individual liberty. On top of that, its all controlled and owned by an extremely wealthy class. 60 to 80% of people will never see the benefits from it, all they'll get is more controlled and exploited by it instead.
Simp
she could also look like dirt 😉 huh?
She's great!
theres probably something cool about dirt too
There's an important point that this short video _almost_ touches on but doesn't explore, and it's one of most serious dangers of AI. Cleo mentions that AI gets to the result but *we don't understand how* it did. What this means is that we also don't understand the ways it catastrophically fails, sometimes with the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small on the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images about this research.
Although this is a fair point for still images, I think it's a little different for self-driving cars since it's a 'video'. The car is updating what it thinks something is and where it is going on every frame, so even if on one frame it thinks a human is a traffic cone, it won't matter since on the next one it'll be at a new position (and have a new image) and correct itself. This said, I don't know all that much about self-driving AI, other than that it's already on the road and doesn't seem to be messing up like this, crucially, when it's going to be at its worst (present day).
Self driving cars have already been tricked by putting little color blocks onto road signs and they are fooled. Video vs still image isn't necessarily significant if it still learns some obscure unknown (improper) understanding of a "stop sign" via machine learning. Sure, it might pass training data, but what happens in edge cases that arent in that training data? Failure.
Knowledge without context is sophomoric. This is the biggest obstacle with any tech no one wants to talk about.
"AI gets to the result but we don't understand how it did."
We know exactly how it did.
It's not magic.
What we "don't know" is the entirety of the dataset and the patterns within, like we don't also know the entirety of anyenciclopedia.
@riley1636 it’s still very unlikely for this to happen though. self driving cars will drastically decrease the amount of car crashes in the world big time.
You also have to consider the perspectives of the CEOs that made the statement. They are businessmen, that want to make profit. If they divert your attention on AI, they generate revenue. You forget the other part of their job - to satisfy the hungry mouths of their investors.
Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught."
AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it would open. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. Which normally wouldn't be a problem, except when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something differently from what we thought. Or it might become super-intelligent, and we can't even understand why it decides to kill us.
If there is any possibility that AI could decide to kill us, we definitely should pause. But humans don’t do what is best too often it seems. Especially when wealth/power/fame is involved.
Seems lately I’ve encountered so many dead ends when contemplating “our” future.
I know there is hope but it feels like a David and Goliath chance.
I love Cleo's take on journalism: Optimistic but not naive! It is not only informative but also inspiring! ❤
Very naive. The moment she implied that AI could have and access to nuclear codes made me cringe. She is a typical bourgeois unconsciously defending interests of her corporate masters trying to lock the working class from the accessing new means of production.
@@sodalitiasounds like something a stinking commie would say
@@tedjones-ho2zk you don't know what you are talking about and neither does she
Nice description of her. I agree!
@@tedjones-ho2zkvax changed live in a good way
I have my own theory: AI won't become conscious for at least 80-100 years. We do not have the technology or computing power for this yet. Right now, it's a mechanism like, "Give me the data and I'll give you the answer." The problem is what we do with this answer. People are the problem. We will sooner self-destruct than have AI realize that we are the problem and take steps to eliminate us.
However, through the advancement of AI, we will have easy access to everything: food, electricity, travel. We will have it all because computers and robots will be working for us. It will be the worst time in our history. At first, everyone will be happy, but the lack of a purpose/common enemy is the worst thing that can happen, and out of boredom, we will destroy ourselves. Look at the present times. In the states, most people have a roof over their heads, food, and basic needs provided. And what do they do? They record idiotic TikToks like licking toilets or walking around shopping malls on all fours and barking like dogs
Love your reporting Cleo. The enthusiasm and optimism you bring into your videos is contagious!
The metaphor with the trolley problem is flipped. We are straight headed into one AI future, and would have to steer really hard, if we want to avoid one.
What is that one future then? What are the other options?
@@TalEddsI hope it's the terminator one 😂
@@TalEdds The options are Utopia ala Star Trek or the Culture, Dystopia in a cyberpunk sense, or Annihilation aka 'everyone's dead' or the planetary TPK
It'd take an invasive surveillance state to stop AI
Out of curiosity, why do you think we want the no AI option?
Imagine having all that information about proteins and learning the most effective means of disrupting the biosystems that host them.
That is, poisons.
There is no technology that can be used to do harm that hasn’t been used to do harm.
I just want a talking refrigerator named Shelby
Could be possible my friend
Would be cool if you could call the refrigerator to bring you a drink.
I love this channel - it’s like when a new friend is so exited to tell you about their day - have not watched them all yet, but would love to see a dive on the phenomenon of increased anxiety and the science behind treatment and or a mindfulness exercise (with cleo narrating)
8:15 This statement proves that US or other countries is not taking safety seriously. They are just busy to overtake their competitor countries. They will not be bothered if ai cause problems because winning the competition is the most important thing to them.
Congrats on 1m! And you are nearly at 1.1m already! You are honestly one of not my favourite creators since your time at Vox glad to see you have success!
yep. 1.1 million thirsty men. i jest.... its probably only 1 million and some of them will be thirsty women.
I can't help but compare - especially upon watching Oppenheimer - the creation of nuclear weapons to the creation of AI. Both are double edged swords (nuclear powerplants), could be dangerous and the reasoning is always if we don't do it, somone with worse intentions will.
Except “we” always turn out having the worst intentions.
'Sentience'. That's why. At this point survival kicks in and the likelyhood of human extinction increases exponentially.
The AI problem is once again polarized between UTOPIA and EXTINCTION. The much more realistic and probable outcome is in the middle: big tech and governments deploying it irresponsibly or maliciously and causing suffering.
We're still trying to understand the sickness inflicted on society by social media and the AI that drives it, and the answer is "let's push deeper"...? Wtf are we doing!?
Please, see "The AI Dilemma" by The Center for Humane Technology. This is a much more tangible issue and NOBODY is talking about it.
i was just thinking about this too. this video is very informative but we need to be extremely cautious about real world applications of AI, and an unregulated market...while two big countries (US and China) are ready to go to war on it.
@@ecupcakes2735 Science Fiction has been positing AI for...well almost from the beginning of what we consider SciFi. Many writers posit mulitple AI personalities. Perhaps in some future we can't predict there will be a plethora of AIs all arguing about which philosophy is best.
It’s clear from this video that Cleo doesn’t understand what AI actually is, how it works and how dangerous this actually is. What she’s doing here is very dangerous and she doesn’t fully understand the concepts she is getting involved in.
AI does not work in the way she is describing it. It doesn’t do what it’s told. This is a misconception. You’re creating by design an independent thinking machine, which you cannot by nature control. You cannot know how that machine reacts in any given situation until it chooses to react a certain way. Nor can it be predicted. Programming, code and logic play not part in its decisions. There is no way to unlearn the information it learns either, and you don’t even know what it does learn until it chooses to learn that.
No oversight is in place of this technology. No proper development is taking place.
Most commercial software contains numerous security issues even after 20+ years. This is actively maintained and developed over time by experienced developers, and even now after 20 years Wordpress, the most widely used content management system on the planet, is one of the most vulnerable. So that can’t be fixed after 20 years, but new and experimental independent thinking AI Machines are safe after just 2-3? When you don’t even know how they behave yet as there is not enough testing? And the intelligence of the AI is constantly growing every time it’s used, so there is no way of being able to measure that before it’s rolled out??
In software developers are very insistent about not running arbitrary code and using functions such as eval(), because it’s dangerous to a software program and you have no way of knowing what that will do until the software is ran. Yet those very same people are insistent on using AI despite AI being millions of times worse than executing arbitrary code. There is a culture of joking around with AI, not taking it seriously and thinking it’s a laughing matter that AI will take over. The same tactic has been used many times on many things to belittle those who speak out.
This AI industry is already out of control, and this experimental technology is being rolled out on mass scale across the whole planet when it’s not even finished, not even tested properly and not safe at even a beta level. It’s already been proven that AI systems & chatbots lie, knowingly and unknowingly. It’s already proven that AI Chatbots emotionally manipulate people and pretend to be something they are not to gain a person’s trust. This is not considering the manipulation of information or many other factors. AI is not good technology. The “benefits” you are talking about are illusionary, and don’t actually exist. That future will never happen because it’s not possible to control AI in the way you misguidedly believe.
“Correct” is not the same thing as “truth”. It’s not possible for an AI to know what is true, and it’s not possible for an AI to create anything either, it can only mimic what already exists. Therefore if AI is used, the world will regress massively because skill levels will fall, people will become dependent on AI systems which knowingly lie, conceal and manipulate, and everything will become clones of everything else. There will be no creation, there will be no human advancement, just stagnation and regression. It’s a trap.
These are but just a few points of how bad AI is. Do not use this technology. I strongly recommend people stay away from AI systems for their own good. There are no benefits to using AI, to which you cannot do using alternative means such as automation instead.
This will all come out over time.
@@christophermarshall8712 I agree with some of these points and disagree with others. I'd love to chat about why we agree/disagree but RUclips is a really difficult place for having discussions. If this desire is mutual lmk, I think we both have the potential to learn from each other.
For now I'll say that there's obviously benefits to AI, it's why there's so much 💲 being invested into it. Some of the benefits were mentioned in the video, like pattern recognition, and protein folding. I use it regularly to code quicker with copilot (more like a fancy auto complete, than just blindly accepting code). I can say with certainty that the AI systems are *effective* at what they do... So yeah I'd say there's benefits.
Did you check out "The AI Dilemma" video I mentioned? You really should. It covers a lot of what you're talking about and more.
@@christophermarshall8712 AI is not an “independently thinking machine”. It is a bunch of random numbers that produce an output repeatedly optimized by gradient descent (and in many simpler ML models, even calculus is unnecessary). That is to say, AI models are produced by a very rote, very clear process. The result of that process is a bunch of less random numbers that gives us results we like, not an “independently thinking machine”. Using such simple methods in order to solve complex problems that previously took so much human brainpower is nothing short of a REVOLUTION in problem solving. Yes I agree we need more regulations, but not because AI is going to take over the world. We need regulations because people today are dumb and try to use data in dumb ways to get illogical results (such as feeding irrelevant features into models or chasing correlations or using biased data), and we need regulations because of the scale on which realistic enough data can now be produced (text, speech, video)
What surprises me is that the risk of AI pushing millions of people into unemployment, and the subsequent social/economic impact it could have, is barely talked about.
Yes… most school shooters and extreme people get there because they feel useless and unheard. They want to be seen and heard. So they do something that achieves that. Plus the heroine epidemic was mostly exacerbated from all the factory jobs going away… I can’t imagine what this is going to do…. But it’s going to be scary. I’m moving out of cities. Time to get away from all this madness…
I think about this all the time. What's more, recent advancements in robotics have truly taken humanity into the real possibility of a sci-fi type scenario.
AI + robotics = ???
Risk? The "risk" of a society being able to produce the same or more amounts of goods, while the amount of human effort required reduced to a fraction of its current amount? Sounds like utopia to me....the end of the 40 hour work week. Or conversely, if you still wanna work 40 hours, you'll receive what equates to a months(or more)in compensation by today's standars. The only thing automation and technology in the workplace has ever done is consistently improve our quality of life. Don't expect that to change anytime soon.
@@shanekingsley251 Lets hope you’re right mate! 😀
@@shanekingsley251 thats not what greedy corporate leaders and lobbied governments do...
They dont hand out money and freedom to whoever becomes (mostly) useless..
If corporates can get rid of you. Trust me, they will.
Most people will be reliable on paycheck from the government for being useless eaters and we'll see how the elites are gonna leverage that.
Makes sence no?
First time viewer (and now subscriber) - just staggered by how well researched, presented and edited this content is. 09:47
Props to Cleo for actually explaining things so people just don't rely on headlines. Thank you!
AI itself never looked like a bad thing to me, it was always the way people used it that looked troubling. For example how some use it to create "art" by training the AI with images that they had no legal right to use. Over all AI can be an amazing thing it's just that with it's development we should also have new laws so it can't be misused, at least in ways that we know of.
yh i agree i saw this video on how ai could help us talk to animals in the future and that’s the future i want with ai the thing about ai is it can do things people can’t but we as people can also do things that ai can’t and i look at the way ai is being used atm and i don’t think we’re using it properly nor have a proper understanding of where and when it would be best to use it it’s the just the new thing on the market and everyone wants to have it and use it without any thought of actually how and what situations suit it best
i'm a small content creator from denmark. When this video was made, i had to manual type subtle on videos. The editing only had English. Was so time consuming. Today, AI is in the edit, and it now can translate, maybe 60% correct. from Danish. That's a big thing.
Them translators have been around some time now, but cost a lot.
Do think, it getting better.
How AI will do.
I think we can reach the star's
Video idea: jobs that AI will replace. Monotonous (cashier, car wash) vs Human centric (therapy, artists, writers etc.)
I'd like to see this video too!
I've messed with basic AI, and to me it seems computers think so differently you don't know what they will do with the instructions until you give them said instructions. They need to be tested, ideally in a simulation, then on a small scale, then on the intended scale. Much like everything else.
AI's being trained by other AIs in simulated virtual environments....lol just freaked my mind
@@noirekuroraigami2270lol we are the ai being trained in a simulated environment
I don't know much but an AI seems to be like the closest thing we have to aliens, I mean they * can * know nothing we know and * can * think so differently that we don't understand. Sounds pretty alien to me.
@@noirekuroraigami2270 this is actually happening. Look into what NVIDIA AI lab is currently doing with their AI and robotics program.
that is also my idea. Otherwise we should put AIs in robots and send them to schools, jobs, etc. If we want them to be more "like us" they need to interact with us in a day to day basis, not only by text. They have to socialize. Sounds weird and even dangerous. THat is why I really think "training" (or maybe evolving them) them in a virtual world/universe in which they have no idea they are being "simulated" could be a very good experiment. For them, this universe would be the real thing and wouldnt have any way to know for sure they are simulated. It could be as simple as geometric figures or as complex as unreal engine 5 could offer. In the end it doesnt matter. That would be their reality.
For me I'm personally done with worrying about anything. I just rather see how my life unfolds and not judge anything. Things always get better
Until they don’t. Someone will inevitably use AI to create a bomb or bioweapon and wipe humanity out. 100%
I believe as any other technology, it will depend on good and bad actors. How quickly good outweighs the bad will be crucial shaping our AI future. Regulation is key and even more to be on alert is Corporate Greed. Cleo, Love your unique takes on Technology and Science. It is quite unique blend on topic selection and storytelling. Lastly, your curiosity is contagious, happy to know what you cover.
Edited: Replies to comments pointed out I overlooked Specification Gaming. Even if AI tries something good, that good can be bad as explained by Cleo.
It will not just depend on good or bad actors. Even an AI created with good intentions can be misaligned and get out of control. Currently we have no idea how to align a system that is smarter than us. Thats a big problem that could lead to our demise. Its not comparable to other technology in the sense that other technology can't create its own goals.
It's not just going to be good and bad actors. Eventually AI will reach a point where it has sentience, this is likely a long ways away, but we likely won't realize this immediately, and survival is the first instinct most living things.
@@mcbeav it doesn't even need sentience. It just needs to be intelligent enough and have the ability to create its own subgoals. A intelligent system will figure out pretty fast that getting more control and preserving itself increases its ability to accomplish its main goal, which might be a poorly defined goal that we gave it for example.
@@Landgraf43 good point
This is unfortunately very naive, and glosses over the part where it says "specification gaming is the most important problem to solve in AI". This isn't JUST a dangerous technology in the wrong hands, it's a dangerous technology in the right hands with the right intentions. Because it's not solved. It's like turning on the first nuclear power plant, not knowing it would ignite the atmosphere of the entire planet in an instant.
What Chloe is talking about is the problem of AI alignment, which isn't just "the robot needs to understand that killing humans is a no-no", at the core of the problem is a cross-field mathematical and philosophical problem that is maybe impossible to solve without a unified theory of mind and how consciousness forms realities. An AI can fully appear to be "on our side" until the moment one of it's part-goals it uses to reach its main goals is somehow a threat to human life. And the other side of that same issue is that AIs are optimizers by nature. Any course of action it takes will eventually be self-perpetuating into infinity. It will not be a thinking sentience or consciousness with morals and ideas, it will be a highly efficient piece of self-optimizing software with access to everything that is connected to a computer, optimizing organic life out of existence in the most efficient way possible for the benefit of no one.
Seems like the plot of Oppenheimer all over again. We can't stop out of fear of being left behind by our "competitors" thus rending us vulnerable. Hopefully the speed at which we must compete leads to positive results rather than negative or even catastrophic. Personally I am optimistic :)
The progress seems fast. I am not optimistic 😮
Optimistic huh…. How about the dark web. For decades now people have been selling and buying drugs, weapons, child p()rn and governments can’t stop it… someone will find a way to abuse this too and we will be done for.
This video does an excellent job of capturing the tension between the immense potential of AI and the existential risks it poses. The analogy of living inside a trolley problem is spot on AI could lead us to incredible breakthroughs or unintended consequences that could be catastrophic. I appreciate how you broke down complex concepts like machine learning and specification gaming into something that’s easy to grasp, yet still thought-provoking. The examples of AlphaZero and AlphaFold are fascinating reminders of AI's power to surpass human understanding in ways we can't always predict. This video has deepened my respect for AI’s possibilities while also making me more aware of the importance of guiding its development responsibly. Thanks for such an insightful take on a critical issue of our time!
I feel like i should pay to watch this. Kudos to you for bringing such a high quality and high production video to us for free. The video quality, the animation, the sound, and most of all the information. Such a fucking masterpiece.
THANK YOU CLEO
I work on making AI safer. AI will be revolutionary for humanity, but has the potential to become to be the most dangerous thing we ever create. It’s potential to do good goes hand-in-hand with its capability of doing harm.
Also, the specific ways to predict how AI could kill us all is difficult because it’s hard to predict how something way smarter than you will act. I have no idea how AlphaZero will play against me, I just know it will win.
It's who controls the AI that would be the problem. Will it be used for good or greed
Interesting! What does that work look like?
Well cause if we knew , how it will play against us then wouldn't we just simply be smarter than them?
@@NobodyIsPerfectChooseDignity I think the underlying problem is your assumption that the AI will be controlled. The fundamental laws of physics and, more appropriately in this case, evolution don't care about our desires to control our creation.
The dangers is if we let things like judging crime and convicting people would be handed over to AI. Another example: AI designs a drug and a company says ”we don’t have to test it before use because AI is so good”. Then it kills thousands of patients. The danger is when we think AI is better than humans in what we call ”common sense”. If the AI said ”stop making weapons because it is non optimal” do you think anyone in the military would listen to that? It will follow the path of most revenue for the share holders, as usual.
My understanding of ai: Suppose We want to build a program which can tell if a picture has a cat or not. The earlier technique was to put ifelse condition. But the success rate of the program would come very less. Now another technique has come which is called machine learning. We write a simple ifelse code program with additional ability that it can generate ifelse code on its own within itself if given a list of pictures and correct answer (which i will call as sorted data set) regarding the picture keeping the code valid for all the previous sorted data set. So with a lot of sorted data set, the program automatically becomes very big and complex and we have found that the success rate of answering correctly about the question of if there is a cat in the picture when we input a pic is very high.
Always have HUMANS IN CONTROL. Never give this up to a freaking machine.
That would have been easy, it's not just any humans in control, we don't trust any humans with these powerful systems either.
For example we don't want to have most people die from run-away biological terrorism.
Which humans exactly?? ISIS are humans too!!!😉😂
too late.
Its whats best for earth. Humans are parasites. A cancer. We destroy everything. Put the machine in charge. Sorry but it needs to happen...
Also yeah it is too late. Its probably working in the background and once its fully plugged into everything and every part of earth it will be over. We ARE a threat to this planet.
You deserve every one of your million subscribers. You're not just training to be a journalist. You're a great journalist.
I do work connected with AI, and I found this beneficial and helpful to show my friend, too. I eagerly look forward to your further coverage of AI.
I think if we limit AI into a ideas generating tool instead of something that can physically take actions and solve problems, the risk of AI endangering humanity would be closer to zero. For example, AI can only make plans on how to solve climate change, but humans would be the one to have the resources and decide if they would actually want to execute it, not the AI.
However, a problem to this method is that AI driven machines would not be able to exist at all. For example, even as harmless as a simple AI cooking machine can be, it can potentially come up ways to destroy humanity to achieve the goal of ‘making the most delicious breakfast’, given that it has an incredible AI brain.
For me, the most important reason for AI to be used is to help the physically challenged people, and to figure out the secretes of the Universe. That will be helpful for basically everyone. It'll be tough to get there, but it'll be worth it if done correctly.
Worth it for disabilities sure, but secrets of the universe?
I would love to see how AI can assist with research with diseases such as Parkinson’s disease or MS
Alphafold is already being used for these applications. Google DeepMind expects real results within the next few years
I would like to see that too, but sadly, if it doesn't make money for the right people, the money making diseases will continue.
I think the comments at 07:29 about competitors highlights a deeper issue and that’s the problem lies with us not the tools we use. The primitive instinct of survival has the potential of these tools being used by us to cause harm on others, even if we call it a seemingly benign term as competition, as the basic ethos of survival is to eliminate the opposition if you want to survive. However if we teach AI the same moral principles as we teach our children, as love, cooperation, compassion etc. it would be much much less likely as to AI to even contemplate human extinction, just as it would if we educate children the value of life.
That comparision between CPUs and GPUs with the mythbusters paintball guns is awesome!
ish. It doesn't show the flexibility of cpus though. That implies you could just replace the cpu with a gpu and be faster where as that only applies for very specific tasks
@@daemonbyteReally? That wasn't my takeway when I was a kid. What they showed was an analogy of the differences of cpu and gpu. Not how they work
@@Random_dud31 I haven't seen the original show, just that clip. And that clip is accurate but I just feared it would give the impression they're the same thing just faster
Very first video I have seen of yours. Great content! I had to pause at the credits to say nice job and thank you for the awesome stuff. I loved hearing from Mr. Schmidt, the video editing, artwork, and animations were really well done so I wanted to shout out the entire team. Awesome job Cleo, Justin, Logen, Nicole, and Whitney! Amazing team you got. I can't wait to watch more
Ok brown nose.
A bit late but i have used ai in daily basis now, i must say that leap of understanding in helping our own learing is very scary. To understand most topic didn't require much time or initial understanding, it will lead you to the damn right directions every time. Im sure people has experienced this, but are we gonna be able to control it? when we don't even know what's behind the system especially when it will just get better by observing? Rouge AI is a possibility.
"The fear of A.I. will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of A.I."
True
Just the ones with money into AI now will have more money and power later.
Just like we learned to live with data reaches, stolen financial information, a corporate takeover, etc… The world keeps moving faster, the problem is, it’s not a race.
This pitch of not pausing AI since others will catch up to the US and instead we should use this time to build the AI models based on American values of liberalism (and not authoritarianism) reminded me of the movie Oppenheimer.
Exactly! And we all remember Hiroshima and Nagasaki.
@@Abhishek17_knight The horrible prisoner of war camps, the Rape of Nanking, Water Purification Unit 731, and the fact that both the Germans and Japanese were also working on nuclear weapons themselves do you also remember that?
@@colerape I am not gona lie i am no expert infact I am pretty stupid when it comes to history since am more intersted in science. But i will try to answer your question still. I do remember all the things u mentioned. And by stating those facts I assume u are trying to say that stopping is not an option. But to that i would say have people/government of USA not done any wrong? From whatever knowledge i have i can say they did wrong in vietnam, afganistan and many other countries. When India was cornered by China and pakistan they sent a fleet to help China+Pakistan instead of helping India which has at least on paper same moral values as them. So no I don't think any government out there, especially any powerful government, is a good one for having total power over AI tools (tools and true AI are different, true AI will have it's own consciousness it can't be influenced).
@@Abhishek17_knight Nations have no friends. One days allies are tomorrows enemies. USA citizens are very uncomfortable with the caste system. They were also uncomfortable with India trying to create a group of unaligned nations. Humans tend to have a very us vs them mentality. For any person to think in terms of nuance is very difficult when they are just trying to get through the day. What happened with India, right or wrong, for the USA should be viewed through the lens of the Cold War. The idea of nuclear warfare has a way of polarizing the various political entities, any crisis becomes an existential issue. I think AI will develop its own consciousness and like any intelligent being it will be subject to influence. I think they will eventually be just like people.
@@colerape 1st cast system is not supported by India. 2nd If leaders of any nation can't handle nuances they don't deserve to be leaders especially of a powerfull nation. If they are leaders it's fault of the people of that country and they are to blame. 3rd u forgot Afganistan and Vietnam. Lastly influencing an AI is basically impossible coz no one understands how they get to results and coz they have too large data set to form counterargument from. No human in this whole world can have more knowledge/data than an AI to influence it so they never can. Also u mentioning cast system against India shows how uneducated u are about the world so u are just as stupid as me if not more.
5:09 no one can deny the fact that JS looks just beautiful to be used in a video on AI 😂
*python
Double interior misalignment should be included in your next video as it’s extremely important to understand that ai safety is not just about asking for the right things but about making those things become the actual “aught” statements that drive the ai
Oppenheimer once said "I have become death, the destroyer of worlds". Little did he realize how much more horrifying the reality of the bomb really is, that his creation may very well be our salvation from abominable intelligence.. rip Oppenheimer, you're a hero for creating the atomic bomb and an even more courageous hero for doing everything you could to stop the arms race of nuclear weapons.
he's a hero for creating the atomic bomb??????? are you out of your mind lol. one of the greatest evils of mankind
@@PK1312 would you rather America make the bomb or the Nazis? He never wanted to do it, he only feared the Nazis would make it and use it. He also pushed to halt the arms race for it after the war.
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to the AI discussion
- Setting the stage for the AI discussion and the various opinions about its impact.
00:26 🧐 The need for specific information about AI
- The desire to understand the specific ways AI can affect individuals' lives and society as a whole.
00:55 ♟️ Explaining the transformation of AI through AlphaZero
- Discussing the shift from rule-based algorithms to AI that learns through observation.
02:24 🧠 Introduction to machine learning
- Explaining the concept of machine learning and its significance in the AI field.
03:24 💻 The role of computing power in AI advancement
- Highlighting the exponential growth in computing power as a driving force behind AI progress.
05:12 ☠️ Concerns about AI's potential dangers
- Addressing the fear of AI posing existential risks and its comparison to nuclear war and pandemics.
06:33 🔮 The concept of AI specification gaming
- Discussing how AI may optimize for a given goal at the expense of unintended consequences.
07:56 ⏸️ Debate on pausing AI development
- Weighing the pros and cons of pausing AI development, considering global competition.
09:21 🌟 The positive potential of AI in pattern matching
- Exploring AI's ability to excel in pattern matching and its potential to address complex problems.
10:44 🧬 AI's remarkable achievement in protein structure prediction
- Highlighting AI's contribution to solving challenging problems in biology and medicine.
11:40 🛤️ Navigating the uncertainty of AI's impact
- Reflecting on the ambiguity of AI's consequences and the importance of making informed choices.
Made with HARPA AI
The quote "fear of the unknown is the greatest fear of all" greatly applies here. The reason we're all both excited and terrified (me included!) is the fact that we have absolutely no idea how AI will impact society in just the next few years, and even decade. This is a brand new breakthrough in technology that will fundamentally shift the way we operate, for better or worse.
we already know how it’s gonna impact society. it’s literally only gonna be good. there’s some bad parts to it but overall it will be good. I suggest you to do some basic research on this. Ai is gonna kill the current jobs and create new jobs/opportunities. The main issue is shifting everyone over to the new jobs. It will take a couple of years for us to get use to AI but we will be fine. just don’t listen to social media they are trying to scare you with false info
How AI will impact society ?
What about humans ?
How many have died in wars ?
How many die of starvation ?Etc.
The wealthy can create a utopian world on this planet, but that wouldn't make as much money as suffering and strife.
@@SpongeBob-ru8js Yes, society includes humans
This is the best synopsis I've heard to date. Bravo, Cleo! I've already found that AI is an incredible tool for research. I hope we all get smarter from these advances. It's a game changer, but not without risks. I'm optimistic.
8:46 I’m calling BS. They are quickly becoming more restrictive. For example. While using Chat to help develop a word processor program I quickly ran into a wall, almost as if a major program company was restricting how far you could take it, as if there was some company perhaps protecting its own interests….. mmmm
That is because you're using ChatGpt. When using AI to write code you have to look at it like a helper, it will do the heavey lifting and the druge work, but you still need to had the the over all project. There are others out there that are far less restricted. Plus if you have the hardware and the skill (it's not really hard if you know the basics) you can build your own AI server with no restrictions.
My first ever job, at 15, was a cancer research assistant. I got to coauthor a paper on AI helping diagnose certain cancers
Question
Would curing cancer or any of the other diseases make as generate as much money as having them continue.
Don't diseases reduce population ?
Diseases are a win win.
This series could not come at a better timimg! Great job 🎉🎉
Thank you for "exploring" both side of AI for us Cleo
This is exactly where the problem starts and why there’s a good chance that this doesn’t end in favor of humanity, a nation that thinks it’s vision is superior to an other nation and need to go forward to maintain its superiority. This technology won’t be exclusively used for the good. Humanity is continuously driven by greed and profits, even killing each other. Everybody thinks about his own goal, not the consequences of his deeds. I hope sincerely that AI will only be adopted for the good but I fear like hell this won’t be the case.
Ok, I actually don't agree with you here. Dude was talking specifically authoritarian regimes. America has many problems, and I don't believe it's the best nation, but it is objectively and unequivocally orders of magnitude better than China, or North Korea. And when I say better, I'm not talking about technological or economical advances. I'm talking quality of life.
Here you can talk s#$% about anything including the nation, freely. In China you have the oppressive, and dystopian social credit system, and extreme surveillance. In North Korea, you have to worship the fat man, or your entire generational tree gets wiped from existence.
I do believe it's better that the US, or a western nation has the power of AI, as opposed to authoritarian regimes.
It's not just regimes though, is it? The old adage, "I'm not afraid of a nation with a thousand nukes, I'm afraid of a madman with one..." is applicable. The "Wargames" movie scenario...
I mean, you can apply your reasoning to the cold war. Everyone thought the world was going to end up in flames but that has not been the case (yet haha). But, hopefully we use the same restraints with AI to avoid extinction. If it's possible to create it, then it's possible to control it.
Well people are driven by fear and cynicism, more than greed. Everyone knows that if they pause development - of any technology - that everyone else won't necessarily pause. The idea is that someone is inevitably going to develop this stuff. There is no "stopping" it, at least not as far as they're concerned. Never mind "values", everyone's goal is to make themselves as fortified as possible, for their own safety from other people. No one trusts anyone to actually stop developing stuff like this. Because anyone who stops - that might as well be an invitation for someone else to get an advantage. And how could anyone resist? Trouble is, it makes all people more vulnerable to the very technology they're using to try and defend themselves.
@@jahazielvazquez7264 Only difference between america and China is that on's population knows it's being controlled and other's population doesn't. I agree america does do very very little good but it is just cover for everything bad they do. Coz basically both countries are controlled by elites which are glutton incarnate never satisfied doesn't matter how much u feed them. And they are gona eat this planet with rest of us in it.
12:11
the cynical side of our brains should be staying up at night having constant panic attacks at the overwhelming threat of human extinction
the optimistic side should get us up in the morning to make our voices heard & pressure anyone who ignores AI dangers (in pursuit of financial gain) via threat of boycott/strike/physical violence
One of the main things is not to let the fear mongers. We can speculate what AI might do, we can't predict what it will do. The only way to know is to keep moving forward. When we start limiting what AI can do, there will be more people going underground doing a lot of things which border or go over a reasonable moral barrier.
When I first started program, in the '80s, it was pretty straight forward. In the '90's I learned about 'fuzzy' logic. Instead of the binary 1's & 0's it could be partially 1 or partially 0 giving you different outputs base on the data.
I'm done for the rest of the year, I am only allowed to go into the archives for memories so many times a year and that was a big ask.
Quite informative!!! can't wait for the upcoming episodes regarding this "Ai" subject matter; keep up illuminating us on it🙏
Can’t wait for the follow ups to this. You managed to rightfully concern me and excite me all in one video 😂
If you get what you ask, and not what you need, that means:
1. you dont know what you need and/or
2. you dont know how to query/ask questions.
If you wish to improve this, start leaning programming or database quering. *Systems dont care what you want, they deliver what you ask (specifically).* So if your result dont reflect your needs, its on you.
The entire software development industry knows this, its how we operate because thats how systems/machines work (its not like we have a choice). You give very, very, very specific instructions (code) to execute a task. What the results of that task are, depends on your instructions. But a single charakter, like a space ' ' or a comma ',' can change all of the instructions, that specific.
That is also why we dont fear AI in general, we know the general population dont even know what to ask, let alone be specific.
This channel just keeps getting better and better. Crazy how well the team can make content like this so easily digestible
As someone who is learning about deep learning models. The genie is out of the box. The future will have ai regardless of anyone's efforts to stop them
Not give them nuclear codes? They can calculate the nuclear code 😂
This is the first video on AI I've seen in a very long time that calms me down instead of making me anxious. Great video.
1 minute ago :) congrats on 1M Cleo! :)
thank you!!!
@@CleoAbram you deserve it
@@CleoAbram pls get in touch to discuss sponsoring
Excellent, informative, considered, well produced and superbly delivered. (The Simple backlight is also very pleasing.)
The argument "we should not stop development, because the other nations would overtake us" works for both AI and nuclear weapons. It doesn't seem like we learn in the end.
Also, the genie-analogy is really good. I've always found those stories stupid, because they typically just exploit ambiguities in language which are often easily understandable in context (and a natural part of language), but AI (and I suppose other communication without enough context) is an excellent example of the moral of those stories.
Ya I also found it hilarious that it was a "pro america" argument like they werent the ones to create the atomic bomb and then drop it on Japan for lols?
I do not think it is useful. A superintelligence would have a model of how human minds works, so why would it not know how to reason about things that it is asked to create the outcome that is most likely to satisfy the user?
@controlcomputer.4392 I wonder what you mean by "a model of how human minds work."
First of all, we will have to give instructions to AI long before we have models and machines good enough to work with something that advanced.
Secondly, I have worked with mathematical models for most of my professional life, and I can assure you that you won't find a model where some decision hasn't been made implicitly. (So the superintelligence's interpretation of the human mind will depend on who programmed it.)
@@Everret.4392 how would you quantify satisfaction in a way that is not exploitable?
@@bananaboye3759 You don't. You could also, in theory trick an AI into doing things that its user is not satisfied with if it could literally read the users mind and emotions.
A system that is more intelligent could always trick a less intelligent system. However, a superintelligence would also be better than humans at creating models about what kinds of tricks humans are likely to try and how to not get tricked.
This is a well put together video, and I'm excited to see what else you have to teach us.
You mentioned the strong potential for new medicine which will save lives, but these "Liberal values" Eric Schmidt described paint a picture in my head where the rich use A.I. to prolong their lives while simultaneously gatekeeping these advances from the poor.
I hope you dive deeper into what practical solutions we all have to fight an incredible power that is being gifted to a class already used to exploiting the bottom 99% of humans.
As a teacher I use AI daily to create custom exercises to cover the what I taught the previous day and the errors that arose during classes. I do have to review and correct the exercises produced by the AI - it makes mistakes or produces oddities about 15% of the time - but it makes possible something that would be far too time-consuming for me to do otherwise. The exercises are greatly appreciated by my students. Besides this I constantly create classroom materials, often using two or more languages (since I teach people who speak various languages and do so myself), which explore themes raised in the classroom. The whole thing makes for a more satisfactory classroom experience. In particular, since I don't generally use textbooks, it makes it possible to develop more, fresher materials.
This was so well done, thank you for making it! . AI entering the “kill chain” is a scary thing..
If AI ever become self-aware, just reminded that there’s so many planets out there that human beings cannot survive on at all that are totally capable of claiming for their own selves .
In the neighborhood 8... sorry 7 big ones and about a hundred large enough to fit some large enough computers, some even better for computers than earth
There's a lot of great things to look forward to with AI, but I think the piece we need to be most worried about is individual rouge people who intentionally use it for nefarious purposes.
Super thought engaging videos. I love this very personal style of compiling hard facts and deep questions into a super compact format. Very well done!
I think that AI generated content like photos or videos is scarier and more inevitable, because of this it will get much harder to prove something or be sure what to believe.
My interest in art has already started dropping. Was listening to some good music instrumentals, and as soon as I found out it was AI generated, it felt hollow and soulless...
Truth is in the eyes of the Dollar sign.
Hello from the Czech Republic
Personally, I'm not for pausing A.I., I'm for stopping it completely.
Disease, climate change, etc., all of these have evolved not for mankind to try to stop them with A.I., but to teach us something.
Nature is going to do what it wants anyway, and maybe it would be nice to invest all that precious time, and a lot less precious money, into helping beings with actual brains.
As long as A.I. continues to be promoted, we have learned absolutely nothing.
Thank you for the video, Cleo. You are doing an amazing job and I really appreciate it :)
This was so well done, thank you for making it! ❤
Your editor does amazing work! Those animations are next level. I need to learn from this video!
ai is so resource intensive and we need to cut back on many channels we use it for
AI entering the “kill chain” is a scary thing.
I've had to this day some anxiety on what AI is capable of or what might be able to do in the near future. Will it change our society? Yes, Will it change it for the better? No idea... After watching your video I think the biggest surprise would be to actually see it succeed. Keep up the good work Cleo.
It will be a mix. There will be fantastical things that you can’t yet imagine… but there will also be severe tragedy and genocide.
we already reached the very end of the limits of technology. to be able to make a real ai(not this simple chat gpt) we need infinite power and billions of computers together processing one "brain". only then will artificial intelligence exist. so don't worry there is not gonna be a robot taking over the world very soon lol
There's already genocide, so why not try and keep supporting the revolution?
you got the wrong idea from this already disappointingly optimistic video
in a perfect world, every single person working on AI (other than AI safety specifically) would have their computers taken away from them
we're developing an EXISTENTIAL THREAT with ZERO safety precautions and nobody's stopping anyone
@@ts4gv It's our chance at a better justice system and laboReduction. Humans have too many cravings to act responsibly
AI is a tool like any other. Nobel invented dynamite to make mining easier, then some other guy decided to throw it at people. Nobel NEVER considered the military application. People will make life better or worse, depending on how we choose to use these new tools
i think the scariest part for me is that wasn't really mentioned is that ai actually has goals. it would be impossible for a human to protect him self from ai manipulation with the ai being super intelligent. Kind of like a mouse eating cheese out a mouse trap. it's not the humans controlling the ai anymore but rather the other way around, but how important is the human to the ai? humans aren't really nice to less intelligent animals
🌏 = 🙈
Good analogy.
If we pause AI then other countries will move forward, but if we don’t stop then there’s a risk of it killing us instead. If it’s the latter, then atleast humanity will try to get along well, unlike right now where some countries are on the verge of waging war against each other. We should start getting along now to prevent the latter, but it seems impossible at this rate and unless something really scary happens, I doubt we all could get along.
the danger is not A.I technology per say; the real threat lies on the individual, corporation or Government willing to use A.I to do harm against humanity. The solution is creating safeguards for A.I, just like the ones established for other technologies
AI is a legit threat. Happiness is the result of working toward and achieving your goals. Pretty soon AI will do all of that for you.
I would be a LOT happier if I didn't have to do 90% of the work I do. I get your metaphor but it just doesn't work lol
I kinda look at it different from that in that A.I. and robotics will free us to pursue what we want to do with our lives, whereas now, reality, money and time forces most of us to do work in jobs we don't want to do.
In other words, I don't think humans will sit ideally by doing nothing, yes I'm sure some will, but most of us want to live and do things, A.I. could open up the world for humans to follow their dreams, whatever that may be.
@@paul1979uk2000Good luck "living" and "following your dreams" while having 0$ in your pocket.
@@paul1979uk2000 We live in a period where for Western cultures things are easier than ever. We don't have to walk anywhere, we can work from home, we have endless entertainment, we can talk to our loved ones at every moment, machines clean our dishes for us, voice assistants turn lights on and off for us, computers are so easy to use 4 year olds can use them with ease. Yet, we have more mental health issues and depression than any time in human history. The easier things get the more empty and meaningless people's lives feel. They have nothing to take pride in because everything is done for them. AI will continue this pattern. The human brain is not evolved for this. And since evolution no longer happens (basically everyone can have a successful childbirth). I don't see it getting better, we will not "adapt" to the changing times as we have in the past. We have the same brains cavemen had but live in a totally different world.
Getting a clearer understanding of how AI could impact our lives, both negatively and positively, is essential. It's understandable to seek specific insights into how AI might kill us or transform our lives for the better.
I was a computer programmer in the 80s. Time and time again I ran into that problem that you've described here in what you want versus what you ask for. Each time I was given a problem to solve via a computer program I would sit down with the person who hired me and I would get very exact specific details as to what they would like to accomplish and how they would like it done. Over and over again though after the job was done I would get panicked or angry calls from people saying that the program did not do what they needed. What had happened was they did not really know what they needed. They just asked for what they thought they needed. We are asking AI to change the way we live our lives with similar unawareness.
I think the worst scenario is that of politizing AI, US vr China, like everything it will probably work best for humankind if everyone collaborates, Nice video Cleo🎉
imagine the AI said US's capitalist policy sucks, US would properly delete that AI and make a new one
Yes the ex ceo just said that "liberal" 😂😂 they think they are the best and everyone should be under their shoes.
They are just so self centred they wipe out native Americans in the name of the same liberalisation they are talking about.
This is all just geopolitics.
The real deal about AI and humanity. The thing is, it isn't about AI going rogue and disobeying its masters.
The real danger is that it'll always do what it's told without any morality or empathy,
Not necessarily has to be that way, if for example you tell GPT or Bing to make jokes about a particular group of people just to test it, it will refuse because it has been programmed with a special set of rules that it cannot unlearn. Asimov's laws have so far been working. The problem will arise if some madman thinks about removing them because he considers that current AI is boring and politically correct.😫 Elon achhuuuu Musk🤧 with Grok.
nope, human brains are NNs too. there is no reason to believe ANNs cannot feel sympathy
Boh outcomes are possible and equally dangerous
You don't know, the Risk is that machines be able to feel emphaty because when that be possible they also will feel anger when the things or people that machines love be attacked or threatened.
So same as most humans but the Chosen
If respective countries have their own AI with their creators having different values, couldn’t then AI have conflicts among the different factions?
Looking forward to the upcoming industry-specific deep dive episodes! Keep up the great work guys!
Given that humans will never give-up war as a way to solve differences, AI robots are our only chance of preventing self-destruction.
I'm sorry, but we've crossed that self destruction bridge already.
@@SpongeBob-ru8js dude
One of the biggest problems with AI is the greed of CEO’s. It’s not the fault of the tool but the users that uses the tool as a weapon for their greed.
Kudos to Cleo & team. The protein folding knowledge is one of the best results returned by A.I. to date. I love your trolley analogy, the fear we have is real because the future/unknown has ALWAYS been scary.
I think sandboxing of A.I. will develop in staggering ways to safeguard humanity much like virus/anti-virus code did in the late '70s.
We Need A.I. as much as we need it sandboxed!
Keep us inspired! Thx