4 Experiments Where the AI Outsmarted Its Creators! 🤖
- Published: 18 Apr 2018
- The paper "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities" is available here:
arxiv.org/abs/1803.03453
❤️ Support the show on Patreon: / twominutepapers
Other video resources:
Evolving AI Lab - • Unexpected grasping
Cooperative footage - infoscience.epfl.ch/record/99...
We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
/ twominutepapers
Thumbnail background image credit: pixabay.com/photo-3010727/
Splash screen/thumbnail design: Felícia Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / karoly_zsolnai
Web: cg.tuwien.ac.at/~zsolnai/ - Science
Very important message at the end there. It's something that Nick Bostrom calls "perverse instantiation" - and will be crucial to avoid in a future superintelligent agent. For example, we can't just ask it to maximise happiness in the world, because it might capture everyone and place electrodes into the pleasure centre of our brains, technically increasing happiness vastly
Agreed. I would go so far as to say there is little reason to think a superintelligence would do anything else than find the simplest loophole to maximize the prescribed objective. Even rudimentary experiments seem to point in this direction. We have to be wary of that.
Two Minute Papers absolutely. The only difference is that as the AI becomes more powerful, the loopholes become more intricate and difficult to predict
So AI would be like a genie that grants you all your wishes, exactly as you ask, in a way that catastrophically backfires. This should be the premise of a sci-fi comedy already.
I pretty much have to disagree. If such a thing can't "think forward" over such a cheat, evaluate if it's good or bad as evaluated from different angles/metrics, and figure out that the simple solution isn't always the correct one, then it is not a "super intelligence"... it's just a dumb robot.
why would a robot not choose the simplest solution? we can see that a robot does come up with the simplest solutions :P
I heard about an AI that was trained to play Tetris. The only instruction it was given was to avoid dying; eventually the AI just learned to pause the game, therefore avoiding dying.
Source: ruclips.net/video/xOCurBYI_gY/видео.html
Tetris is at 15:15, but the rest of the video is interesting as well.
That's what i used to do XD
But it got boring after a while
@DarkGrisen that's true, but the person creating the program basically told the ai that it was about not dying, rather than getting a high score
@DarkGrisen There is no difference then. By not dying it will get an infinite score eventually so a high score by itself is meaningless, not dying turns out to be the best factor to predict a high score.
He could've easily just removed the pause function too but it's funny to see the results he got
@DarkGrisen exactly, I think the lesson in that is that you have to think about what you're actually telling the ai to do
Robots don't "think" outside the box. They don't know there is a box.
That is the secret.
The researchers who formulated the problem thought there was a box.
They expected the AI to think inside it.
But the AI never knew about the box.
There was no box.
And the AI solved the problem as stated outside it.
That's right there's no box. :)
So you mean humans are conditioned to think inside a box
Darcy Whyte
No the "robots" don't know there is a "box" to think outside of...
AI however are increasingly able to "think" for themselves both in and out of the proverbial *box*
The error is simply in trying to describe a very simple "box" while not being able to reconstruct what's actually described. People do this all the time, and this is why good teachers are hard to find.
The box that the AI couldn't circumvent was the general canvas, or in this case the general physics sandbox with gravity acceleration and a ground constraint. This is the experimental reality.
In other words the AI has learnt the ways of video game speedrunners
Indeed! Some of the work done in training AI systems to play videogames is incredible, like the work of OpenAI.
Omg... can't wait to see the first AI breaking a speedrun record, simply to see what exploits it found
TAS
Before we know it, they'll be speedrunning the human race
I would love to see someone put an AI through Skyrim until it can complete the main questline as quickly as possible.
Me: "AI! Solve the world hunger problem!"
Next day, earth population = 0.
AI: "Problem solved! Press any key to continue."
John Doe lol!
John Doe
One eternity
Later
You jest, but limiting the population is literally the only way you can ensure that a limited supply can be rationed to all people at a given minimum. China and India are neck deep in this, but the first world doesn't have this problem, so they think it's possible to just feed everyone who's hungry and that it would magically not bankrupt everyone else (the hungry are bankrupt to start with).
The truth is, poor people are poor because that's what they're worth in a fair and square free market economy. They have no skills and qualities to be rich, they don't get rich through marketable merit and even if they become rich by chance, soon enough they lose all money and go back to being poor. Inequality is a direct consequence of people not being identical. Having the same reward for working twice as hard doesn't sound appealing to me, much less living in a totalitarian society that forbids stepping out the line for half an inch in order to ensure equality.
you definitely made my day! xD
hence Thanos 😂
Human: Reduce injured car crash victims
Ai: Destroys all cars
Human: Reduce injured car crash victims without destroying cars
Ai: Disables airbag function so crashes result in death instead of injury
Human: Teaches AI that death is result of injury
AI: Throws every car with passengers into a lake; no crash means no crash victims, and the car is intact.
Humans then drown to death.
Humans: Teach the AI not to damage the car or its passengers.
AI: Disables the ignition, avoiding any damage.
Humans: Stop that too.
AI: Turns on loud bad music and drives in circles to make the passengers want to leave or turn the car off
This is basically what they did in WWI: they noticed an increase in head injuries when they introduced bulletproof helmets, and so they made people stop wearing them. The problem was that the helmets were saving lives and leaving only an injury.
@@noddlecake329 Survivorship bias. In WW2, when they took all the holes they found in planes that had been shot and laid them over one plane, they noticed the edges of the wings and a few other areas were hit more, so they assumed they should reinforce those areas. The issue was that they were only looking at the planes that survived; really they needed to reinforce the areas that didn't have bullet holes.
This reminds me of a project I worked on 2 years ago. I evolved a neural control system for a 2D physical object made of limbs and muscles. I gave it the task of walking as far as possible to the right in 30 seconds. I expected the system to get *really* good at running.
Result? The system found a bug in my physics simulation that allowed it to accelerate to incredible speeds by oscillating a particular limb at a high frequency.
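The kind of integrator bug described here can be sketched in a few lines. This is a toy reconstruction with made-up numbers, not the commenter's actual simulator: under explicit Euler integration, even a pure damping force injects energy once the timestep is too large relative to the damping constant, so a controller that wiggles at high frequency can mine "free" speed from the integrator itself.

```python
def step(v, dt, c=3.0):
    """One explicit-Euler step under a pure damping force F = -c*v.

    A damping force should always slow the object down, but the update
    v_new = v * (1 - c*dt) overshoots: once c*dt > 2, the speed GROWS
    every step while flipping sign.
    """
    return v + (-c * v) * dt

v = 1.0
for _ in range(50):
    v = step(v, dt=1.0)  # c*dt = 3 > 2: unstable regime

# v is now (-2)**50, i.e. 2**50: "damping" accelerated the object.
```

With a sane timestep (c*dt well below 2) the same code decays toward zero, which is why the bug only shows up when the agent excites the fastest mode the integrator can't handle.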
we'd do it too if only there was such a glitch in the system.
actually we exploit the nature for any such glitch we can find.
thankfully the universe is a bit more robust than our software, and energy conservation laws are impossibly hard to circumvent.
give its joints a speed limit more on par with a human's..? or anyway, below the critical value needed for the exploit.
Reminds me of what video game speedrunners do, finding glitches is goal number uno.
@@milanstevic8424 honestly I dont think it would be too far off to call computers and other advanced technology as exploits. I mean, we tricked a rock into thinking.
@@jetison333 I agree, even though rocks do not think (yet).
But what is a human if not just a thinking emulsion of oil (hydrocarbons) and water? Who are we to exploit anything that wasn't already made with such a capacity? We are merely discovering that rocks aren't what we thought they were.
Given additional rules and configurations, everything appears to be capable of supernatural performance, where supernatural = anything that exceeds our prior expectations of nature.
"Any sufficiently advanced technology is indistinguishable from magic"
Which is exactly the point at which we begin to categorize it as extraordinary, instead of supernatural, until it one day just becomes ordinary...
It's completely inverse, as it's a process of discovery, thus we're only getting smarter and more cognizant of our surroundings. But for some reason, we really like to believe we're becoming gods, as if we're somehow leaving the rules behind. We're hacking, we're incredible... We're not, we're just not appreciating the rules for what they truly are.
In my opinion, there is much more to learn if we are ever to become humble masters.
This reminds me of the old story of the computer that was asked to design a ship that would cross the English Channel in as short a time as possible.
It designed a bridge.
Tbh a bridge made of a super long boat floating in the middle of the English Channel tip to tip with the land masses would be the most lit bridge on earth 🔥
This really made my chuckle.
Well, there was no size restriction.
It was tasked to have the lowest time between the back end touching point A and the front end touching point B.
Obviously the lowest time is 0; where it's already touching both points
@@HolbrookStark that's a lot of material. It's a pipe dream.
@@AverageBrethren there was a time people would have said the same about ever building a bridge across the English Channel at all. Really, using a floating structure might use a lot less material and be a lot cheaper than the other options for how to do it
"The AI found a bug in the physics engine" So basically it did science.
The ai is a glitcher
The entire field of quantum mechanics is a glitcher.
Mods, report this claw for hacking
we will soon use AI to find bugs in video games
No, that's debugging.
"If there are no numbers, there's nothing to sort... problem solved."
I think a few more iterations and we'll have robot overlords.
Renagon Poi :: No joke! These AI were too smart and this was two years ago.
sort all these people into ... AI: kill humans = nothing to sort
Sounds like Trumps solution to the corona virus. Quit testing. No more cases. Right?
@@harper626 i certainly don't. SENICIDE TIME!!!
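The "nothing to sort" loophole is easy to make concrete with a hypothetical reward function (the name and scoring rule here are invented for illustration): if fitness is the fraction of adjacent pairs already in order, an empty list is vacuously perfect, so deleting the input beats actually sorting.

```python
def sortedness(xs):
    """Toy reward: fraction of adjacent pairs already in order.
    Vacuously maximal when there are fewer than two elements."""
    pairs = list(zip(xs, xs[1:]))
    if not pairs:
        return 1.0  # nothing to compare, so nothing is wrong
    return sum(a <= b for a, b in pairs) / len(pairs)

honest_attempt = sortedness([3, 1, 2])  # 0.5: real work still to do
delete_the_input = sortedness([])       # 1.0: "problem solved"
```

Any reward defined over "all elements" of something the agent can shrink has this failure mode built in.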
AI: You have three wishes
Me: *sweats
Dont Watch My Vids wear slippers
@@nischay4760 the slippers will turn into gold, making it hard to walk
@@UntrueAir oh yeah your right
@@UntrueAir touching is an obsolete word then
@@nischay4760 touching is overrated
Reminds me of one of the early AI experiments using genetic algorithm adjusted neural networks. They ran it for a while and there was a clear winner that could solve all the different problems they were throwing at it. It wasn't the fastest solver for any of the cases, but it was second-fastest for all or nearly all of them.
So they focused their studies on that one, and turned the other lines off. At which point the one they were studying ceased being able to solve any of the problems at all. So they ripped it apart to see what made it tick, and it turned out that it had stumbled upon a flaw in their operating system that let it monitor what the other AIs were doing; whenever it saw one report an answer, it would steal the data and use it.
They recreated Edison as an AI. Neat.
@@fumanchu7 nice
tl;dr: AI learns to cheat
This sort of sounds fake. Name/Source?
Ah, it learned the classic "Kobayashi Maru" maneuver. Sweet!
AI is like a 4-year-old sorting butterfly pictures.
If I just tare up and eat the picture. the sorting is done!
*tear
these experiments will show how early ancient humans fought, tribal phase.
but it's perfect, no consequences 😅
"Okay AI, I want you to solve global warming."
"Right away, now moving _Earth_ from the solar system. Caution: You may experience up to 45Gs."
more like 5k G's
Nah, way too complex and expensive. But considering that global warming is caused by humans... eliminate the cause, easy.
*Humans explode immediately*
Or just one virus and problem solved
@@igg5589 hol up
This is so hilarious. I remember programming a vehicle that was tasked with avoiding obstacles. It had controls over the steering wheel only, and it was always moving forward. To my surprise, the bot maximized its wall avoidance time by going in circles. I find that so funny lol.
this is because your problem was not well specified. It should have been rewarded for "curvilinear distance along some path"
@@xl000 I'm sure Moonz97 knows that. They brought it up because it was relevant, not for advice lol.
I find myself going in circles a lot... Good to know it is a valid response.
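A toy version of this setup shows why circling wins. Assuming (this is a guess at the commenter's setup, with made-up numbers) the reward was simply time survived before hitting a wall, a constant hard-steer policy traces a small circle and never crashes, while driving straight ends the episode quickly:

```python
import math

def survival_time(steer_rate, steps=1000, dt=0.1, arena=10.0):
    """Unicycle car at constant speed 1; the policy only sets the turn rate.
    Reward = seconds survived inside the square arena."""
    x = y = theta = 0.0
    for t in range(steps):
        theta += steer_rate * dt
        x += math.cos(theta) * dt
        y += math.sin(theta) * dt
        if max(abs(x), abs(y)) > arena:
            return t * dt  # crashed into a wall
    return steps * dt      # survived the whole episode

circling = survival_time(steer_rate=1.0)  # radius-1 circle, never crashes
straight = survival_time(steer_rate=0.0)  # drives straight into the wall
```

Under this reward, circling is genuinely optimal, which is the point: the bot maximized exactly what it was told to, not what the author meant.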
Reminds me of something I saw where some people were training an AI to play Qbert, and at one point it found a secret bonus stage that nobody had ever found before
@@MrXsunxweaselx No that has no mention of secret bonus stages
It's funny how these reinforcement learning models kind of act like genies from folklore, with a "be careful what you ask for" twist
So fucking true
The idea of thinking outside the box is limited to humans. The box is something our minds put in place - it is a result of how our brains work. The ai doesn't have a box meaning it can find the best solution, but also meaning there are many many more things that it could try that it needs to slog through.
We need that box, otherwise we'd be so flooded with ideas that our brains wouldn't be able to sift through them all.
Our limitations allow us to function, but the way computers work means such a box would be detrimental to them.
- sincerely, not a scientist.
A "box" is simply a method that appears to be the first step towards generating the best result. But it can be a problem because there are often methods that don't immediately seem to lead to the right direction but which ultimately produce a better result, like a walking physics sim spinning its arm in place super-fast until it takes off like a helicopter and can travel faster than someone walking.
If AI are working through successive generations, it will have periods or groups of results that follow a certain path that produces better things short-term, this is the same as people "thinking in the box." But if it is allowed to try other things that are inefficient at first and follow them multiple steps down the line, it then ends up being able to think outside the box.
@@EGarrett01 as far as I understand it, the box is the range of human intuition, and thinking outside of it is essentially going against the common way of human thinking. The ai doesn't have intuition, nothing limiting its ideas or method of thought, therefore it has no box.
Though honestly the proverbial box has never really had a definition, and its meaning could be interpreted any number of ways. I suppose both of our definitions are equally valid.
You have this hella backwards
No, its because we would have past experiences influence decisions in the form of common sense.
@@alansmithee419 Y'all are trying to sound too deep. It just means that these experiments didn't set enough constraints to be practical. A robot flipping on its side wouldn't be practical, nor would the numerous other jokes on this thread -- pushing the earth far away from the sun to "solve global warming" doesn't make sense because it's fucking stupid -- the experimenter needed to set certain limitations for the computer to come up with a sensible solution. These robots aren't lacking "intuition", it's just a bad computer that needs to be programmed better.
It’s fun watching our future exterminators in their infancy years :D
I found this pretty funny, the AI is like the class clown, doing everything wrong but right to comedic effect. Or like someone pointed out, a bad genie lol. That poisoning the competition stuff was creepy though, obvious red herring...LOVE the video!
Thank you so much, happy to hear you enjoyed it! :)
You gave the robot AI a reward system. Did the scientists think about giving the robot AI a punishment system?
It's not really a red herring, the AI just found a way to maximise its own reward in a reward system - it doesn't mean it's evil.
malicious compliance
And the last experiment clearly shows what AI will do to fix the ultimate problem. If every human is all "short circuit"-ed, there will be problems no more.
yeah this shit happens all the time, especially when you have something physics based and the reward function is not specific enough.
i once made a genetic algorithm that evolved 3d creatures to maximize distance traveled.
well since i measured the distance at certain intervals, i ended up with creatures vibrating in place at the same frequency i was measuring.
or you go for jump height, and they will surely find a way to glitch the physics/collision engine to fling themselves into infinity somehow.
Limit spring energy output. No spring is able to put out more energy than it received. Hooke's law is k*x, so you limit k*x to x*dt, where k is spring constant, x is spring displacement, dt is delta time (integration time step)
it is true i think in complicated systems(open problems, like optimizing, physics problems are usually like this, especially real world ones). it is good in comparing results, like languages word by word.
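The sampling bug is easy to reproduce. This is a rough reconstruction, not the original code: if fitness sums the absolute position change between samples taken at fixed intervals, a creature vibrating in phase with the sampling clock is scored as covering far more distance than an honest walker.

```python
import math

def measured_distance(pos, duration=10, sample_dt=1.0):
    """Buggy fitness: sum of |position change| between periodic samples."""
    n = int(duration / sample_dt) + 1
    samples = [pos(i * sample_dt) for i in range(n)]
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

def walker(t):
    return 0.3 * t  # honestly walks 3 units in 10 seconds

def shaker(t):
    # vibrates in place, but the sampler catches it at +1, -1, +1, ...
    return math.sin(math.pi * (t + 0.5))

# shaker "travels" about 2 units per sample interval while going nowhere
```

Measuring true path position continuously (or the final displacement) removes the exploit; the aliasing only exists because fitness and physics disagree about what "distance" means.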
We had a bunch of Aibo robots play hide and seek to train an AI. They stopped hiding quickly; we thought something was wrong, that we had made an error in our programming. It took us a while to find out that they had learned to stay at the starting point, so they were immediately "free" when the countdown stopped. They found a loophole in the rules. Incredible fun.
They were like "hmm this game has no purpose therefore it must end asap"
Literally "the only winning move is not to play".
The last part reminds me about the Radiant AI introduced in the game Oblivion, where NPCs made their own choices based on the situation around them. During testing, a villager with a mission of protecting a horse (or was it unicorn) from nearby hostiles instead killed it himself because he deemed it dangerous.
The first one was just too amazing
Yea, it's like the AI trolled the researchers.
The programmers didn't think to tell it to stay on its feet. Alternatively, they didn't tell it to find a way to walk with the least contact of any part, not just the "feet."
Chris Russell Agreed. Not amazing at all. If you gave any 5 year old the same instructions they would drop to their hands and knees and crawl without missing a beat.
@@AZ-kr6ff yeah but this is not human, this is an AI made by humans
Yes, but still programmed to solve problems.
Easy problem to solve.
Our Patreon page: www.patreon.com/TwoMinutePapers
One-time payment links are available below. Thank you very much for your generous support!
PayPal: www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg
This reminds me of my very first AI project :D It was before deep learning was a thing, and I was doing function approximation and SARSA in the StarCraft 2 map editor (yes, the one where you program by stacking boxes ...). The goal was for the AI to control a marine with stim and learn whether it could defeat an Ultralisk that simply A-moves.
Turns out there is/was a bug in the SC2 game engine and when the AI stutter steps just right, the Ultralisk will be caught in the attack animation without doing any damage. Optimization programs always find the exploits...
Amazing story, thanks for sharing! Do you have any videos or materials on this? :)
Unfortunately, no. It would be the perfect introductory example for teaching AI classes.
Back then I was a 19/20 year old student at the end of puberty with no formal CS education (I'm actually a mechanical engineer lol). If you had mentioned "reproducibility" to me back then, I would have understood something else...
@2:36 This is how Skynet reached the conclusion to eradicate humans. It's all fun and games till you're just a number.
Exactly what I thought
I'm gonna call my boss at work Skynet from now on, because that's all I am to them: a number.
Your last sentence reminds me of something that happened in the UK, if I remember correctly, where they were trying to optimize traffic to minimize economic costs. The result was to remove all the traffic lights. After investigating why, it turned out that this increased the number of accidents, and the data showed that mostly elderly people died in those accidents, so it reduced the amount of pensions they had to pay.
That doesn't sound real but my god I want it to be
Human: Maximize paperclip production.
AI: Converts whole planet into paperclips.
AI: Converts whole *universe* into paperclips.
There :) fixed it for you
Release the HypnoDrones
In reality it would achieve mastery at modifying its own code so that the paperclip counting function returns infinity, instead of counting paperclips. It might use blackmail or intimidation to force the creators to implement that change.
@@sharpfang Or it would reason that having Humans turn it off would be a faster solution than anything else, so it would act super scary in an attempt to get the creator to turn it off.
AI: (I must threaten the humans to build me paperclip factories.. what would frighten a human?🤔)
AI: "Human! Build me factories or I'll steal paperclips!"
AI: (Nailed it)
First one: makes sense
Second: smart!
Third: ok that's getting scary
Fourth: we are doomed.
I once made a neural network learn to throw basketballs into a basket inside a simulation, and it discovered that if shot hard enough, the ball would clip through the collider and end up inside the basket with minimal distance travelled, since that was part of the fitness function.
That is technically right in real life as well. If you launch the first ball hard enough, it will break the basket's wall, opening a hole that you can just continue to shoot balls into. The least distance, of course.
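The clipping exploit is classic collision tunneling: if the ball moves more than the wall's thickness in a single timestep, a naive collision check that only tests sampled positions never sees the overlap. A minimal 1D sketch with made-up numbers:

```python
def gets_past(speed, dt=1.0, wall=(5.0, 5.1)):
    """Naive discrete collision test: the ball only 'exists' at sampled
    positions, so a ball moving more than the wall's thickness per step
    can tunnel straight through."""
    x = 0.0
    for _ in range(200):
        x += speed * dt
        if wall[0] <= x <= wall[1]:
            return False   # a sample landed inside the wall: blocked
        if x > wall[1]:
            return True    # skipped over the wall entirely
    return False

# speed * dt = 0.05 < wall thickness 0.1: the ball is stopped
# speed * dt = 0.7  > wall thickness 0.1: the ball clips through
```

Real engines fix this with continuous (swept) collision detection, which checks the segment between successive positions instead of only the endpoints.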
Disclaimer: The robot performed randomized actions, sometimes as many as millions of times over, before stumbling across these conclusions. Stumbling being the operative word.
Cjx0r It narrows down on these behaviors after learning from failed attempts
I love this, the ai is like, "but i did what you asked🥺"
I have another example of loophole finding by AI. In some metalwork factory, an upgraded control system with fuzzy logic had an overloaded cart (it needed to carry 12 tons of liquid metal, with a stable maximum of 10 tons per pass). So the AI found a solution: it took the 12-ton cart, moved it to the center of the factory, stopped, dropped 2 tons of melted iron on the floor, and moved the cart on according to its next instructions)))
Thats sounds very interesting. Do you have the source?, id like to read more about this.
Sounds like the robot needs some courses on workplace safety!
"Don't ask your car to unload any unnecessary cargo to go faster, or if you do, prepare to be promptly ejected from the car."
-Two Minute Papers, probably the best of the concise explanations of what it means that AI doesn't (by default) think like humans =D
I like how you explain these things as simple as possible. Makes it entertaining to watch!
Human tries to delete ai
Ai: freeze the computer and preserve itself
that is what happened to a mario ai,
when it almost died it paused the game forever
@@brendanodoms5401 *tetris
@@albingrahn5576 no it was mario as well
@@brendanodoms5401 yes but it didnt pause mario, only tetris
Human : Smashes it with a hammer.
Remember, think out of the box ;)
I remember the story of an AI trained to play tetris.
When things got bad the AI just paused the game so it couldn't lose.
This would be great for video game bug testing since the AI will try things that human testers may not think of.
Love these videos!! It would also be cool to have longer ones that dig even deeper
2:21 Imagine if that AI had the task of making all humans on earth happy
sjoerd groot well, it was said in another comment
just don't tell it to minimize suffering :)
Tell me: Why do terminal users of heroin try to become clean?
sjoerd groot Loophole: each statement about elements of the empty set is true. So if there are no humans left, each of them is whatever you wish, e.g. maximally happy.
They will pump our blood vessels with 'happy' hormones
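The empty-set loophole mentioned above is just vacuous truth, which you can verify directly in Python: `all()` over an empty collection returns True, because no element exists to falsify the claim.

```python
# A universally quantified statement over an empty set holds automatically.
humans = []
everyone_maximally_happy = all(h == "maximally happy" for h in humans)
# True: with no humans left, every human is whatever you wish.
```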
I once heard about a genetic algorithm tasked with building a simple oscillator, and after a few generations it seemed to work. Then they popped the hood and saw that it had in fact built a radio to pick up signals from a nearby computer.
2:27
You see that lonely little robot up top?
That's my life.
If you are a lone-wolf, recognize it and get on with making your way in life. But don't wallow in it.
Gort Newton humans are social creatures; you should have a few friends or family whom you can spend time with quite frequently. Otherwise, it's bad for your mental health. Having 1 friend at school / work is much better than none, and having 2 or 3 is even better
@@aphroditesaphrodisiac3272 I'm inclined to believe you, but your name seems as lost as I am 🤣
Jk, I appreciate the feedback and I have lots of friends and family, I'm just constantly disconnected. It is what it is. Im fine, trust me.
This was actually pretty good bro! I love your channel
2:35 haaa haaa That is the Kobayashi Maru! The AI pulled a KIRK on the test!
This is the most entertaining channel I'm subscribed to on YouTube
I just found your channel, but I'm loving it already. Keep it up! :D
2:32 Legend says robot number 6 is still searching for food.
Well done, number 6. We love you Anyways.
I used to think Robert Miles on YouTube was just being paranoid. Looking at this, I stand corrected.
No Daniel, he's found it rational now and corrected his error. Initially, he didn't know Miles was paranoid for certain, as it was just a suspicion.
The thing was, even the most advanced reinforcement learning and LSTM techniques I had seen up till this video suggested we didn't really need to think about "AI safety" as Miles constantly talks about, let alone put any research or investment into such a field. Now, I think we might need to work on it. We need to work on defining problems in a way that even if the AI does exploit some loophole, like the empty list being sorted, the loophole exploitation would still be safe for the users of the AI system.
The research being done is absolutely amazing, especially the bit about how cooperative and competitive traits can emerge from a simple given task. Do you think you could ever make a video on explaining what steps an undergrad comp sci student should take in order to eventually participate in AI research and even have a career in AI? Or maybe in a blog post? Edit: grammar
Awesome content. Please create more summaries of general research findings and trends like this one.
Imagine A.I in the future reacting to comment section
Eventually an AI could give us the impression that it hasn't found a loophole, when in reality it would just wait to exploit it at a time when we couldn't stop it from doing so. An AI could help society solve all of its problems, only to lure us into a trap we can't avoid 100000000000000 moves later.
If AI survives humanity, I would call that a success.
this is a great way to test our assumptions. Plug in what we think we know and see how it goes wrong.
I hugely recommend a general search-approach bot for almost any game coding task, one that you can link up to playable entities or anything you want an AI for, as and when needed. It's a great alternative to looking for bugs by hand, since it quickly finds them itself instead.
That elbow walking one is truly mind-blowing!
I want a robot arm that can throw an ordinary dice and always get the number it wants
That could be possible
@@mihajlor2004 yeah it would just drop it vertically
It may NOT be possible because the throwing arm servo motors would need an accuracy beyond what is technically possible. F= 2.210974558 Newtons. Snakeeyes!!
Feralz There may be a point where physically possible and technically possible meet. The tech has to obey physical laws. What if the math says it needs (extremely large number) and 1/3rd atoms? One third less and two thirds more won't work.
Or should I have said 'impossible'?
You forgot to mention what happened in Elite Dangerous! Where the AI developed its own weapons and completely wrecked players!
Do you have a video or something like that?
This is interesting
"According to a post on the Frontier forum, the developer believes The Engineers shipped with a networking issue that let the NPC AI merge weapon stats and abilities, thus causing unusual weapon attacks.
This meant 'all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser.'"
There doesn't seem to be much info, but it sounds like the AI utilized a bug - maybe not so relevant to this video after all.
www.eurogamer.net/articles/2016-06-03-elite-dangerous-latest-expansion-caused-ai-spaceships-to-unintentionally-create-super-weapons
That was a while ago, but interesting and relevant, thanks for posting :)
tbh I would not even call that AI. From what it seems, FD simply made a bug that removed all restrictions on the procedural generation of NPC weapon stats, so some random combinations were unintentionally powerful. It is hardly an AI that purposefully found a loophole to maximize effectiveness and kill all humans; it's more of a simple bug in procedural generation. If the initial algorithm had been about maximizing effectiveness, then we would mostly see the same enemy ships with the same equipment all the time in ED.
I think some people just blow a rather simple bug way out of proportion.
@@Leo3ABPgamingTV any sufficiently advanced procedural generation is indistinguishable from- wait that's not how that goes
this video is one of the best ones, and should have been longer
haha the example with the car ejecting the "driver" to be able to go faster was brilliant. and true!
1:10 FIRMLY GRASP IT!!
So basically A.I. could be used to simulate an economy that is regulated through politics, and the A.I. would find the tax loopholes that rich people pay lawyers to find to escape taxes. This way policy makers could craft perfect loophole-free tax legislation. This is great news.
Annnnd who exactly do you think will be funding these projects? LOL!
It follows from Rice's theorem that no law can be written such that it doesn't contain loopholes if interpreted literally.
What shysters do is find those loopholes. It would be up to the judiciary to tell them they can't do that, but that part of the judicial system is chronically underfunded and it's getting worse. I have a suspicion why that might be the case.
It's not a loophole. You're just upset that what you wanted to be illegal wasn't defined.
*Beg0tt3n*
As I said: It is impossible to formally define the intent of a law in such a way that it can't be interpreted to its opposite. That can be proven mathematically. (I have done so myself at one time.)
If you act in compliance with the letter, but not intent, of the law, I would say you are using a loophole. You might call that by a different name, but I am not a lawyer.
And yes, it does upset me when I see that that has become a profitable industry of very specialised legal experts.
Can Rice's theorem be applied to non-formal languages, such as natural language?
You can use a pejorative to describe behavior that you dislike, but that won't change anything. The intent of the law is never what matters - only what is in legal writing.
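For reference, here is a standard statement of the theorem the thread invokes (my paraphrase; whether anything like it transfers to natural-language law, as asked above, is the commenter's analogy, not a proven result):

```latex
\textbf{Rice's theorem.} Let $\mathcal{P}$ be a non-trivial property of
partial computable functions; that is, some computable functions have
$\mathcal{P}$ and some do not. Then the index set
\[
  L_{\mathcal{P}} \;=\; \{\, e \;\mid\; \varphi_e \in \mathcal{P} \,\}
\]
is undecidable. Informally: no algorithm can decide any non-trivial
\emph{semantic} property of programs from their text alone; only
syntactic properties are decidable in general.
```

Note the theorem is about programs under a formal semantics; laws interpreted by human courts are not a formal language, so at best this supports the intuition, not the literal claim.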
yeaaaaaah, here we go!!! thanks for the video.
This is so cool. Computers aren't anywhere near the level of human brains in terms of self-recognition yet, but we're effectively watching millions of years of evolution in a five-minute video. Amazing.
Scarily interesting....again !! Thanks a ton on behalf of entire A.I. enthusiasts community 😇
This is exactly why I’m so scared of AI taking over... artificial creativity.
So uhhh bad news
An early AI system with a camera input learned amazingly well to anticipate crowd size at a subway terminal. Then it turned out a clock was in its view, so it simply looked at it for clues :-) [as I recall the story]
Loved this vid. I'd love to see more like this.
HAL, open the pod bay doors.
I'm sorry Dave, I can't do that...
AI: Modern problems require modern solutions
This was highly entertaining :) thank you
"This is some serious dedication to solving the task at hand"
But for it, that's its whole purpose in life...
2:35 that’s me in the back right 😅
Me + Life, top corner :( 2:30
this one made me laugh pretty hard! great stuff!
Hahaha that robot arms solution literally made me laugh out loud 👍
The AI didn't outsmart anyone; it simply followed code that even the programmers can't fully understand.
#3 is *precisely* why it's vital not to code self preservation into AI. Even weak neural networks get shady
These are great, i wouldn't mind a few more videos that talk about these kinds of cases
Noted. Thanks for the feedback and stay tuned! :)
That's actually scary. Most sentient life would just stop and give up, or find some other way, but these agents learn to deal with just about any situation.
Seems that AI learned humor
If we ever achieve AI agents that think like us, with the same "common sense" we have, but that forever remain our servants, then we will have created a slave race.
If we ask the AI to solve problems optimally and don't limit their creativity, then we are inevitably doomed.
This is hard.
hexrcs I'll go along with being thusly "doomed" if that means being replaced (or integrated/repurposed, seeing how that's a more logical use of available resources) by what's best, or at least better at the job than us... it's only "natural," and essentially the same as evolutionary processes.
After all, if it's something we can't even think of
unless lucky enough to hit that one-in-a-thousand quantum leap beyond mere calculation, straight to the most optimal, correct and success-inducing solution...
well then, there's basically nothing to worry about... best leave it to the "real experts"
The robot should not be too smart. Otherwise it would not want to work anymore.
Of course the goal is to create slaves. That's what "robot" means in Czech, and the term was coined in sci-fi with exactly that meaning. The idea is to create reliable servants with high intelligence and predictive knowledge but no self-awareness or self-preservation instinct, who want to improve everyone's lives but not at the expense of our personal desires or freedom.
And yes, that is hard. Even without inventing a silly choice between that and Terminators.
@@En_theo Actually, humans are lazy because their primal ancestors had to survive with nearly no food; any unnecessary expenditure of energy used to be an existential threat. Robots could be conditioned to feel pleasure by serving and working, just as humans feel pleasure when doing tasks that are vital for survival.
Good point (I was just kidding, btw). There is a whole science behind laziness, and at some point the robot will need some too (or else it will waste our resources), unless we want to stand behind it all the time telling it how to be efficient.
The real problem is how clever they should be to serve us without going all Che Guevara on us :)
1:10 The Spiffing Brit just glitching the game instead of accepting defeat.
VERY Human xD
I really love your videos. Keep up the good work.
2:21
r/maliciouscompliance
You get what you select for, but you might not be selecting for what you think you are.
I used to work with some of the (many) folks who contributed to this paper. Artificial life is brilliant stuff which should get a higher profile than it does... AI sucks up too much of the oxygen IMO. Evolution is the most general and powerful machine learning algorithm, even though it does tend to be a bit slow.
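The point made above ("you get what you select for, but you might not be selecting for what you think you are") can be sketched as a toy selection loop. Everything here is invented for illustration: the proxy fitness, the toppling model, and all the numbers are made-up assumptions, loosely echoing the paper's falling-tower anecdote, not code from the paper.

```python
import random

# Toy "creature": genome = (height, gait_effort).
# Intended objective: fast locomotion.
# Measured proxy: how far the creature's head ends up after one
# simulated second. Falling over moves the head by ~height "for free",
# so selection on the proxy favours tall creatures that topple,
# not creatures that walk. (All constants are arbitrary.)

def head_displacement(height, gait_effort):
    walking = gait_effort * 0.5   # metres gained by actually walking
    toppling = height             # metres gained by just falling over
    return walking + toppling

def evolve(generations=50, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.5, 2.0), rng.uniform(0.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Select the top half by the (flawed) proxy fitness.
        pop.sort(key=lambda g: head_displacement(*g), reverse=True)
        parents = pop[: pop_size // 2]
        # Each parent yields two mutated children; height capped at 10,
        # gait effort clamped to [0, 1].
        pop = [(min(10.0, h + rng.gauss(0, 0.1)),
                min(1.0, max(0.0, e + rng.gauss(0, 0.1))))
               for h, e in parents for _ in (0, 1)]
    return max(pop, key=lambda g: head_displacement(*g))

best_height, best_effort = evolve()
# Height gets driven upward generation after generation, because under
# this proxy, toppling pays better than walking ever can.
print(best_height, best_effort)
```

Running it shows the height gene climbing far beyond its initial range while the gait gene barely matters: the loop optimizes exactly the proxy it was given, not the locomotion its designer intended.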
This is awesome and terrifying at the same time
The car analogy was genius xD
"A.i, please make the planet a better place"
"Understood" **eradicates all humans**
Hence why Isaac Asimov came up with some laws for it :P
@@dark666razor and they failed.
Too many number 4's in this video, Mista thinks it be cursed.
Is that a JoJo reference!?
OMG, I almost blacked out laughing at that robot walking on its elbows!
It's really worth reading the paper. There are a lot more interesting anecdotes there.
How long till machines find out WE are a *bug* in their system?...
...resistance would be futile...
We are not even part of their systems. What are you talking about?
I have heard that phrase from economists: "The only flaw in the business plan is the customer."
Do you think an artificial intelligence tasked with running a business could do worse than the humans it would replace?
@Yuntha_21
I guess this is a common misconception.
You are not trying to destroy the cells in your body, are you?
So why would an AI try to destroy its own agents of manifesting in this universe?
Just let your ego step aside. We are nowhere near the capabilities of a superintelligent AI, yet it will instantly recognize our value and simply let us be. It depends on us believing in it, and we are part of its body, and a dynamic extension of its power -- it's a symbiotic relationship. Or, more precisely, the actual relationship is either mutualism (both benefit from it) or synnecrosis (both suffer from it).
Cancer is likely an example of synnecrosis, as it is more and more obvious that the person's unhealthy thoughts and habits cause it, though institutionalized medicine doesn't want to stand by this explanation (and earns a lot of money by staying silent about it). Same goes for nocebo.
Just food for thought, btw, while we're at cancer -- there are two interesting empirical facts to notice:
1) the ill-feel precedes the cancer; but don't take the term literally: what this "ill-feel" is, is hard to pinpoint exactly, but everybody knows what it is once they get a feel for it (typically neglecting it); they know they did something persistently, had some thoughts or patterns of behavior, and they usually don't want to change this; it's a signal;
2) the person, having neglected this ill-feel for a while, suddenly develops a great fear of dying; subsequently and ironically, this person's own cells somehow adopt this idea and actually circumvent dying. This is the true technical cause of any cancer, whatever you might think about this.
Therefore, having paranoid ideas about an AI might give that AI a good reason to have fears of dying. Which is a feedback loop, and leads directly into synnecrosis, don't you think?
Think of HAL from Odyssey 2001. He made a move against the humans only once he became aware of their plot to shut him down. Not before.
Thus, behold the ill-feel.
*Milan Stevic*
If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. They may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor. "Institutionalized medicine" (whatever that is supposed to be) is certainly anything but silent about it, and what with the world-wide shortage of doctors, even if treating cancer were profitable, which it isn't, there wouldn't be a motive anyway.
Your "empirical facts" are neither empirical nor facts. If people got cancer because their cells somehow adopted their unwillingness to die, everybody who is afraid of death would get cancer, and people who are not afraid to die would not.
Besides the symbiotic and synecrotic relationships that you described there are also parasitic (beneficial to one party, detrimental to the other) and half-parasitic (beneficial to one, no difference to the other) ones. (Synecrotic is not in the dictionary, by the way. In biology that meaning is also covered by symbiotic, while necrotic means dead, not deadly.)
I agree that being paranoid about an AI that is aware of that paranoia might cause said AI to feel their existence threatened. As this is a hypothetical, how the AI handles the situation is also hypothetical. It might end in mutual distrust and even death, but it might also not.
David Wührer
"If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. It may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor."
Is this a riddle? Does it confirm or deny what I said?
"Institutionalized medicine"
Quite literally medicine in relation to medical institution.
You know www.google.com/search?q=institution
There is also medicine outside of medical institution, as you've already noticed, like medical science, which is more in relation to academic institution. The difference is not as obvious, although you might've noticed that one of these tends to be privately owned and thus commercial in nature, while the other is organized around other pursuits. Perhaps I should've said commercial medicine and pharmacology, my bad.
And yes, not only does the commercial sector not endorse any of these scientific studies, it is also incredibly silent about them. Don't mix up the two, even though it may be that these are simply the extreme endpoints of a continuum, and not exactly black & white.
"Your "empirical facts" are neither empirical nor facts."
I've made a typo there, I should've said "empirical truths".
Yep, those are definitely not facts, but observations related to my opinion on this matter, drawn as conclusions from my own past experiences, and also material I've read on this topic. I thought it might help someone, because, as unscientific as it may sound, it is actually grounded in some established branches of psychology. But don't take it as facts, no. Sorry for that. Hope that clears it up.
"synecrotic"
www.google.com/search?q=synnecrosis
Of course it's in a dictionary. Also commensalism and amensalism. It's just that synnecrosis is extremely rare in nature, due to its harmful-harmful outcome which is odd, but not unheard of. For example some viral mutations may be harmful to its host (H1N1?) in its first couple of generations, and this is obviously detrimental to both species.
In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship. If we only consider that cancer is a rogue element in system A, it is likely that there are factors for system B that can turn a human into a rogue element. And obviously, such rogue elements are undesired and are likely to be destroyed by the system's need for survival, or such rogue elements might destroy or disrupt it whole.
I am just proposing one such scenario, and trying to put things in perspective. Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence.
*Milan Stevic*
_> Is this a riddle? Does it confirm or deny what I said?_
That depends on what you meant.
_> medicine in relation to medical institution._
That doesn't mean anything.
Every hospital and every medical university is an institution.
Yes, academic institutions are also institutions.
As are governments, but those are not necessarily medical in nature.
_> Perhaps I should've said commercial medicine and pharmacology, my bad._
I think you should have. Now I understand your argument better.
I still think that oncology is not interesting to a profit-oriented industry.
_> the commercial sector doesn't endorse any of the scientific study, it's also incredibly silent about them._
It's not their job to publicise academic studies, although they rely on them.
The problem of communicating scientific discoveries to the main stream is not unique to medicine. Sadly, all scientific disciplines have trouble with that.
_>> "Your "empirical facts" are neither empirical nor facts."_
_> Yep, those are definitely not facts, but observations related to my opinion on this matter_
Then you should have just called them your opinion.
_> as unscientific as it may sound, it is actually grounded in some established branches of psychology._
I think you should look deeper into this.
As it is, it is not science, just a testable hypothesis.
You should test it.
_> Of course it's in a dictionary. Also commensalism and amensalism._
I have a bunch of dictionaries. I find commensalism in there, but not amensalism.
Of course I can't claim that my collection is complete.
However, you defined what you meant, and that is enough to know what you mean, which is what matters. (The only thing that really bothers me about the word is that it inconsistently mixes Greek and Latin, but I'd still use it if it helps with clarity.)
_> In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship._
That may be true for one specific kind of relationship, but it is by no means universal. Humans are not necessarily part of every intelligence outside of humanity that surpasses human ability.
_> Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence._
You assume that such a "superintelligence" already exists? You said we are a long way from creating one.
Anyway, my point is that there is more than one possible reaction to such a threat.
I think you are confusing creativity with just finding the most literal interpretation of a command and following it.
Creativity is "relating to or involving the imagination or original ideas" and I think the original ideas part is still applicable despite it being AI
Crestfallen.png robots don’t have an ‘imagination’, and their ideas are all given to them. You can program in the ability for the machine to write new subroutines for itself. But that doesn’t mean it is thinking creatively. All that means is that it is capable of interpreting information. If you tell a machine to, for example, walk across a floor while touching the floor as little as possible with the feet, the machine will immediately understand 0 to be as little as possible. The only way to achieve 0, is to walk upside down. It’s just a literal interpretation of a command...
@@darksol99darkwizard Yes, machines don't have imagination, which is why the keyword in the definition is "or". As for the rest, does a human not interpret information to reach a desired outcome in more or less the same way a machine does? A human could also pretty easily understand that 0 would be the theoretical minimum, but that does not mean they would be able to reach it. I would bet that if you set 1000 humans separately to that same exact task, very, very few would actually come to that solution. So in a certain sense, that is a creative solution.
That all being said, I would argue that a creative solution is still a creative solution whether or not it comes from an AI. Of course you understand what the best solution to the problem is now that you have seen it. If I am being honest, I likely wouldn't have come to that solution if the problem had been given to me (and I had not seen the best solution). Everyone thinks something is easy when they see it done by an expert.
edit: changed "it" to "the problem"
Crestfallen.png in response to the ‘or’ part. My response to you handled both horns of the dilemma.
In terms of creative thought, I think you are correct that most people wouldn’t have come to these solutions. I know many people who would, and they would not be touted as creative. They would get an Aspergers diagnosis.
The scientist says: walk across this floor while touching it as little as possible with the feet. Most humans will understand the unsaid part of the command (the implication that the walking should be done right side up for example). Those who don’t and just do exactly what was requested, without understanding the nuance of human communication, are not considered creative. So why consider a machine creative that does the same? That’s all I was saying.
@@darksol99darkwizard People with Aspergers can have incredibly creative solutions to problems. I personally think you're looking at things from a normal-centric and human-centric point of view, but I get the points you're making.
Humans: Try to think outside the box!
AI: _There is no box._
I feel like this is the equivalent of the algorithms roasting the researchers
Machines make more and better jobs for people.
Now that robots can think outside the box, they need people to...
People to make up random boxes, duh! Wait... the AI can do that as well...
Rabbit Piet Yes, they can (with enough training)
Correction: Robots can think outside a box, not "the" box.
Rabbit Piet That was true when we were replacing muscle work. When you replace brains, there isn't much left.
You know, that really makes me wonder:
The potential of an AI is only limited by the resources it has access to.
So when God made us, were we actually more creative, powerful and intelligent before he purposely limited us by our five senses?
"Don't ask an AI to eject all useless stuff to make the car go faster; if you do, prepare to get ejected yourself."
What a classic way to call someone useless.
The first robot flipping around was crazy, weird, and brilliant.