Awesome. I’ve watched literally everything Rob has recorded on AI. He’s very relatable, knowledgeable and informative.
Miles is an excellent teacher. Always does a great job fielding questions from a layperson.
His channel has a bunch of videos that are great to just play on a second monitor, or in the background to learn stuff.
He's pretty cool.
I used to know this guy. Glad he’s still at it. Easily one of the smartest dudes I’ve met in person
i first read prison, and was like what
@@janzacharias3680 bruhhhhhh xddddddd
Still no PhD yet, and I guess he's not as smart as some people here think.
@@moritzschmidt6791 Is getting a PhD a benchmark for smartness?
@@lullah85 Well, I am sure that if someone is trying hard to get a PhD and doesn't get it, he is not as smart as someone who got it under the same conditions. Right?
"A ship in harbor is safe - but that is not what ships are built for."
"The Earth is the cradle of the mind, but one cannot eternally live in a cradle."
- *Konstantin Tsiolkovsky,* _from a letter written in 1911_
Except in Peal Harbor.
The ships attacked in Pearl Harbour were safer there than if they had been attacked in open water. Almost all the ships sunk there were raised to fight again.
@@Sonny_McMacsson never heard of any ships sunk at this Peal Harbor
Blox117 TENOHAIKA BONZAI
Rob's video on the 3 laws of robotics is what really demonstrated to me how serious AI safety is.
Looks like they're taking security very seriously. This guy is always kept inside a prison to avoid his rogue AI pets from escaping.
The sponsor intro is too loud.
Edit: as is the sponsor segment at the end.
Indeed, the video content’s volume was comparable to other videos I had been watching, but that sponsor callout at the beginning was so loud that I found myself swearing and scrambling for the volume control.
Understandably, mistakes happen, and it’s unfortunate that only YouTube themselves can edit a video once it’s published.
It made my cat jump.
It nearly woke my child! 😱
Hint: you can use the volume control to adjust the volume
@@SproutyPottedPlant After the fact? It's not as if there was a warning.
I'm a man of simple tastes - I see Rob Miles, I press the like button.
It tickles my reward function.
gasdive hahahaha
never knew notts uni had a prison to film in
For, you know. Reenacting the Stanford prison experiment. :D
It's a safety gym for academics
pretty sure it's at the nottingham hackspace
7:58 that artificial camera movement is both trippy and impressive!
I like the fact that young Robert uses the same Simpsons references I remember from 20-odd years ago
I really love listening to Rob's explanations.
That is an absolutely miserable classroom!
I thought it had to be used by soldiers or something.
Completely agree. Big safety failures - in organisational structure, or real-world industry, or whatever - usually occur because of either unknown elements in the environment or unexpected interactions between known elements. Because - at a stupidly obvious level - if you could predict it, you would (you'd hope) have done something about it.
Thanks for the description of the constraint learning. Keeping constraints and goals as modular elements is one of those things that makes obvious sense *once* someone explains it to me.
Fab and super interesting video, also v. much appreciated your [Rob's] EA talk yesterday - will definitely be checking out the AI Safety field in more depth.
enable subtitles please
Miles is an excellent teacher. 👨🏫
Nonlinear optimization methods like SQP often include constraints; this is very common in fields other than machine learning. The problem with constraints is that their formulation is actually very difficult, and infeasible-path optimization is necessary to solve the learning problem.
The path optimization thing, is it kind of like hitting a local minimum because of constraint boundaries, preventing the exploration of a better solution?
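(For anyone who hasn't met hard constraints in a solver before, here's a minimal illustrative sketch using SciPy's SLSQP, a sequential quadratic programming method. This is not the paper's method; the goal position, hazard location and keep-out radius are made-up numbers.)

```python
# Illustrative only: constrained nonlinear optimization with SciPy's SLSQP.
import numpy as np
from scipy.optimize import minimize

# Objective: reach a goal at (2, 0) -> minimize squared distance to it.
def objective(x):
    return (x[0] - 2.0) ** 2 + x[1] ** 2

# Constraint: stay at least 1.0 away from a hazard at the origin.
# SciPy expects inequality constraints in the form g(x) >= 0.
hazard_constraint = {
    "type": "ineq",
    "fun": lambda x: np.linalg.norm(x) - 1.0,
}

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  constraints=[hazard_constraint])
print(result.x)  # the optimum respects the keep-out radius around the hazard
```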
*clears throat* I am a simple robot. I see a Rob Miles AI video, I like it.
5:19 Would it be possible to mix VR and test simulations to have real humans interact with the simulated machine? Just have it open to the public and you have all the "real human reactions" you'll ever need.
When real, unselected humans mess with machines, they will invariably try to teach the machine bad things. For instance, look up what happened to Microsoft Tay.
I initially read this as "AI Sentry Gun" and thought Rob was having a crisis.
If he stays at it, in 20-30 years, this man will be in the position of people like Neil deGrasse Tyson, Bill Nye or Lawrence Krauss today, once AI starts really taking off and people are looking for public educators who have been tackling this for decades.
Could you link the paper?
He never ended up explaining what this "gym" thing is :(
I think he did.
He first said these are places where you train AI, then moved into explaining what "training AI" means.
? At 12:43, the entities which AI can control in a "gym" are presented. Then at 13:26, the obstacles are presented. The whole video is presenting a framework which helps to develop safer algorithms, which can then be benchmarked in the "gym" for their safety.
I want more videos with Miles
You can feel the love for the viewer behind those rotations of the article pages. Awesome job!
Just noticed you're a slapper (aka Bassist) - Love it!
+ 100 points for the THHGTTG reference!
10:59 Well, pens and mugs are both toruses, so you really wouldn't need to change anything.
Mugs - yeah, but pens ?
So, if we look at how human babies tend to learn, it's usually also by doing random things, which very often happen to be quite dangerous, even if only to the baby itself. It's not that a baby crawling around can't do anyone harm. The difference is, I believe, that a human baby is under constant supervision by its parent(s).
We know perfectly well that it's impossible for any human to constantly observe and analyse the learning process of an AI, even with the use of reward modelling. If there's a possibility of something dangerous happening, we would have to sit with a power-off button in a virtual world, predicting when an agent is going to crash or destroy something, and then manually give negative feedback.
However, maybe a solution worth considering would be to have this kind of "parenting agent", trained specifically to try to predict the "learning" agent's actions, or just to switch it off when it detects a possible disaster?
To put it in other words - to have this constraint in the form of another trained AI?
Okay, now train the parent AI.
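(A purely hypothetical sketch of what such a "parenting agent" veto layer might look like; the class name, risk model and threshold are all invented, and - as the reply points out - the risk model itself would still have to be trained somehow.)

```python
# Hypothetical "parenting agent" sketch: a learned risk model vetoes the
# learner's actions when predicted danger exceeds a threshold.

class ParentSupervisor:
    def __init__(self, risk_model, risk_threshold=0.05):
        self.risk_model = risk_model          # callable: (state, action) -> P(disaster)
        self.risk_threshold = risk_threshold

    def filter_action(self, state, proposed_action, safe_fallback):
        """Return the learner's action if predicted risk is low, else a fallback."""
        predicted_risk = self.risk_model(state, proposed_action)
        if predicted_risk > self.risk_threshold:
            return safe_fallback              # e.g. stop, or hand control back
        return proposed_action
```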
I was waiting for Robert to make this video. 😀
Sneaky Hitchhiker's reference ;) love it!
13:39 😂 I love how they named all these things
I liked the content but not the adverts, too intrusive
Mark Hall
And, in this case, too loud.
@@ragnkja Other than that, I was okay with it
Hello, I have a question about this topic:
Is it possible to imprison these robots in an environment where they can't harm any humans, but can still do all the tasks assigned to them?
For example, in a warehouse where there is no way out for the robots, but where they can do all the warehouse work, or in a commercial kitchen where they can only interact with the kitchen and nothing else.
I think the best solution is to separate these robots from humans as much as possible.
I believe it is impossible to develop an algorithm that can cover all hazards and avoid harming a human being.
Can we have the sponsor segment just sit somewhere at the end of the description?
Haven't looked at the paper yet, and perhaps it's a silly idea, but couldn't you make a time-dependent reward function which gives very negative rewards for the things you're supposed to stay away from, scaled by how close you are to them (e.g. close to bad things --> -10000)? And as the training progresses, you reduce the penalty to a more reasonable value, so the agent starts caring more about its actual goal. The idea is that it would first quickly learn to avoid the bad stuff, and *then* learn the actual task without forgetting that touching the bad things is bad.
With current reinforcement learning systems, once the agent has learned not to do something, it won't do it; there's no way for it to know that you've reduced the punishment. That's the problem with exploration vs exploitation. The most common approach I've seen to the fact that the agent doesn't explore actions whose reward might have changed is to occasionally take actions at random, which in this case would be a really bad idea. You gave your self-driving car a large negative reward for a reason. You can't then deliberately program it to randomly crash and ignore its reward.
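(For reference, this is the "occasionally act at random" idea being described - a minimal epsilon-greedy sketch, not anything from the paper; the function name and epsilon value are just illustrative.)

```python
import random

def epsilon_greedy_action(q_values, epsilon=0.1):
    """Standard epsilon-greedy exploration: with probability epsilon, take a
    uniformly random action so the agent re-checks rewards it thinks it knows.
    This is exactly the step that becomes unacceptable when a 'random action'
    can mean crashing a car."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                        # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])       # exploit
```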
Oooof... "Doggo."
Some top-quality memes from the AI researchers.
I was thinking about how I explore safely. A simplified, AI-friendly version of it could be: I assess the likelihood of a negative outcome happening, and then apply a negative value equal to that probability multiplied by the value of the negative outcome. So if there's a 0.1% chance of me dying and dying is -1,000,000, then I'd apply -1,000 to the action. But then I also account for uncertainty in a way that increases the likelihood I'll explore it, but also increases the care taken exploring it: a reward for learning, plus an increase to the negative that's proportional to how uncertain it is, so it encourages finding the safest, surest way to get the answer, even if the safe, sure way takes longer.
I'm not sure how easy that would be to turn into an actual program, or how effective it would be, but it seems reasonable to try copying humans.
It doesn't really solve how to get started, though, because flailing like a baby with an arm that weighs a ton is a horrible idea. Maybe it's possible to give the AIs neutered bodies to learn with before transferring them to a more dangerous body?
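(A hypothetical sketch of that scoring rule - probability-weighted cost plus uncertainty-driven curiosity and caution terms. The function name, weights and example numbers are all made up.)

```python
def risk_adjusted_value(base_reward, outcome_cost, outcome_probability,
                        uncertainty, curiosity_bonus=1.0, caution_weight=2.0):
    """Score an action as described above:
    expected cost = probability * cost (e.g. 0.001 * -1_000_000 = -1_000),
    plus a curiosity bonus and an extra caution penalty that both grow
    with how uncertain the estimate is."""
    expected_cost = outcome_probability * outcome_cost      # a negative number
    exploration_bonus = curiosity_bonus * uncertainty       # reward for learning
    caution_penalty = (caution_weight * uncertainty
                       * abs(outcome_cost) * outcome_probability)
    return base_reward + expected_cost + exploration_bonus - caution_penalty

# Example from the comment: a 0.1% chance of a -1,000,000 outcome
print(risk_adjusted_value(base_reward=10, outcome_cost=-1_000_000,
                          outcome_probability=0.001, uncertainty=0.5))
```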
Did life make experience or does experience make life? Seriously
william polo valerio
I don't understand what you're asking
The other thing I've noticed is that they're seemingly not programming in boredom.
I get bored doing the same thing all the time. This seems to prevent me getting stuck in a local optimum.
For example, I'll drive the same route to work every day, but then get bored and try a quite different route, expecting it to be slower, but occasionally it's faster, or less stressful or smoother. In other words I intentionally reduce expected reward, in the hope of getting something unexpected.
@@gasdive I understand and half agree with your point, but making robots get 'bored' sort of defeats the entire point of using them over humans for automation.
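(The 'boredom' idea has a rough analogue in standard RL as a count-based optimism bonus - a UCB-style sketch with made-up names and weights, not anything from the paper.)

```python
import math

def ucb_score(mean_reward, times_tried, total_steps, boredom_weight=1.0):
    """Upper-confidence-bound style score: actions that have rarely been tried
    get an optimism bonus, nudging the agent to occasionally re-try routes it
    expects to be worse - roughly the 'boredom' behaviour described above."""
    if times_tried == 0:
        return float("inf")        # always try an unseen action at least once
    bonus = boredom_weight * math.sqrt(math.log(total_steps) / times_tried)
    return mean_reward + bonus
```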
How do you assess the likelihood that your action is unsafe if you've never performed it before?
I wonder if reward functions end up with complicated multidimensional shapes, like other optimization problems do.
Here's an idea: baby robots. You make a smaller-scale, perhaps squishier body for learning in and train the robot in that. That way the baby can flail about and learn what is or isn't safe without harming anything.
I think I use the internet too much. I read "fasthosts" as "fast thots"
Daniel G begone
I read "fast thots" as "fast tots" and really wanted me some drive-through taters.
What exactly is the difference between reinforcement learning penalties and these constraints?
Seems like the penalty basically becomes infinite after a set number of negative outcomes, and you program that limit in yourself. There are probably other differences, but I don't know enough to understand them.
From what I can tell, penalties are negative events that are _responded_ to, whereas constraints are considered _before_ they're violated. Assigning a _penalty_ to hurting a human wouldn't be ideal, because then the AI would only learn not to do that _after_ they've already hurt someone.
That's the high-level idea as I understand it... I'm actually pretty interested in getting into machine learning and might do some research into the topics discussed in this video, so maybe I'll make a follow-up comment (or edit this one) with a more robust answer if someone else doesn't give one in the meantime :P
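(To make the distinction concrete, here's a rough sketch of how a constrained setup differs from simply folding a penalty into the reward. A Lagrangian-style formulation is one common approach in constrained RL; it is not necessarily what the paper does, and all names and weights below are illustrative.)

```python
# Rough sketch: penalty vs constraint in RL objectives.

# Penalty approach: fold harm into the reward and hope the trade-off is right.
def penalized_return(rewards, harms, penalty_weight=100.0):
    return sum(rewards) - penalty_weight * sum(harms)

# Constrained approach: keep reward and cost separate, and require the expected
# cost to stay under a budget. A Lagrange multiplier is adjusted during training
# so the effective penalty grows automatically while the constraint is violated.
def lagrangian_objective(rewards, harms, lagrange_multiplier, cost_budget=0.0):
    constraint_violation = sum(harms) - cost_budget
    return sum(rewards) - lagrange_multiplier * constraint_violation

def update_multiplier(lagrange_multiplier, harms, cost_budget=0.0, lr=0.01):
    # Multiplier rises while the cost budget is exceeded, decays toward 0 otherwise.
    return max(0.0, lagrange_multiplier + lr * (sum(harms) - cost_budget))
```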
What would be a typical task for the first generation of AI?
I didn't come here to get roasted thank you very much
Where can I find the paper? I looked it up on Google Scholar and can't find it!
So the problem with AI is it's like a moving target where the target can move in an almost infinite number of ways. Nice.
they just need to make an ai that simulates the target, and then simulates how it would get the target.
Does everything have to be controlled by learning? I get that it's a nice theoretical exercise which might become relevant eventually, and these are just examples - and this is partially already done - but for a robotic arm, I'd use learning only to output a desired hand location, then use (comparatively) simple inverse kinematics to figure out how to move the arm to get the hand there, while also checking that the arm cannot get anywhere near a human. The learning part has no direct control of the arm, and if it tries to move the arm through the human, the kinematics won't let it and it will have to find a way around.
The robot arm is a toy problem - it doesn't map to all cases, e.g a robot learning to walk
@@BurningApple Yeah, but in many applications a similar, though more complex, solution might be doable. If you have a walking robot controlled by a learning algorithm, you could instead have two learning algorithms and a set of simple geometry equations: the first learner tries to solve a problem and tells the robot where to go, the geometry limits where that goal location CAN be (not near humans, cars, fragile objects, etc.), and the second learner moves the robot, but its goal is not to solve a problem, only to reach the (previously limited) location.
It can't work everywhere, which is why this kind of research is important, but I think it's sometimes an overlooked solution in practice.
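(A toy sketch of that "learner proposes, geometry disposes" split. The keep-out zones, function names and coordinates are invented for illustration only.)

```python
import math

KEEP_OUT_ZONES = [((0.0, 0.0), 0.5)]   # hypothetical (centre, radius) no-go circles

def clamp_goal(goal):
    """Geometric safety layer: if the learner's proposed goal falls inside a
    keep-out zone, push it out to the nearest point on that zone's boundary.
    The learning algorithm never gets direct control of the actuators."""
    x, y = goal
    for (cx, cy), radius in KEEP_OUT_ZONES:
        dx, dy = x - cx, y - cy
        dist = math.hypot(dx, dy)
        if dist < radius:
            scale = radius / max(dist, 1e-9)
            x, y = cx + dx * scale, cy + dy * scale
    return (x, y)

# The pipeline: learner proposes -> geometry filters -> IK/controller executes.
proposed = (0.1, 0.2)                   # whatever the learned policy outputs
safe_goal = clamp_goal(proposed)        # guaranteed outside the keep-out zones
```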
@14:53 Is that the UK bass in the background?
"You can't train self driving cars safely in the real world"
Tesla fanboy has entered the chat
More like: Tesla: Hold my electrolyte!
Ehh, controlled environment
1337 plate number, aww yeahh!
sponsor? a sponsor
Rob Miles legitimately looks jaundiced and has done for ages. Someone tell him to eat better.
It's not ok to have waxy, yellow skin
He looks normal 😅
@@denisschulz3814 false, look again
Great talk!
10:15 Rename it.
Just imagined the "Biden Robot" trying to "avoid a recession" ...
and how it discovered "completely redefine what a recession is"
instead of actually changing the economy.
Are there theorists or programmers building AIs that they can watch learn?
awesome video
Robert was sounding like Jordan Peterson around 6:30-6:45, LOL.
Peterson (and also John Vervaeke) get quite a lot of their lingo from cybernetics. A fair bit of the theory of Artificial Intelligence was formulated decades ago, and influenced psychology. But it's only recently that we've had computers powerful enough to actually execute it in a useful way.
Dude looks like skinny Ethan from H3H3
the license plates are 1337 XD
Damn didn't know Ben Schwartz knew so much about AI
Speed limits give a data point from which the collision penalty could be deduced.
See the absence of a penalty function in the "exploring not getting a haircut" space, aside from personal comments on YouTube inferring one.
Have you seen what people are doing with AI in StarCraft and StarCraft2?
I have not, is there a video you can link?
Hey! I really like your videos. I am learning JSP right now after completing the basics of Java. Could you please make a video on why scriptlets in JSP are discouraged? Thanks.
Common sense seems to be the most difficult thing for AIs to learn.
What is your channel?
It's just "Robert Miles AI"
@@RobertMilesAI thanks!
Robert Miles ok
What if A.I. starts to think outside the box?
Faster than their sponsor.
200 points = don't touch baby
100 points = make coffee
50 points = push power buttons
AI = greedy reward function
This is an AI safety paper that has potential for immediate positive consequences in the real world.
A little vague aren’t we?
@@russiaprivjet Well, most AI safety research that I hear about is more concerned with general intelligence AI. It's refreshing to see work being done on problems that we are currently facing.
where are the subtitles tho
None of the Computerphile videos even have auto-generated subtitles that can be enabled; it makes me sad! Ideally, they'd caption them for maximum accessibility, but I don't see the benefit of disabling the auto-captions... it sure makes them harder to follow 😔
The automatic subtitles are all enabled. There was a bug in YT where they didn't show because of Community Subtitles. I have switched Community Subtitles off in an attempt to get auto subs to appear again - not sure why they aren't there >Sean
@@Computerphile As of this writing, the option to show subtitles does not appear for me.
@@WilliamDye-willdye still don't understand this - photos.app.goo.gl/sqT3j7r81AgKDtM58
This is one of Ben Schwartz's characters!
omg I never realised before how much Rob Miles actually does look like Ben Schwartz!
Bamzooki with extra steps
Never have I clicked faster
Number 5 needs more input....
I don't know, it seems fairly straightforward to me. I don't know how often humans crash - say every 500 trips / every 10,000 kilometres - okay: then whatever reward it gets for 500 trips and 10,000 kilometres is the negative for a crash. Sure, maybe you should refine it by the severity of the impact, and of course there are some things I'm happy to take a greater risk on than others. Maybe I have a medical emergency and need the AI to get me to the hospital quickly. Or maybe I need the AI to flee from the police for me. Maybe we can have a dial for that. Certainly it should be entirely open to modification by the owner of the car, or he's not the owner.
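(A back-of-the-envelope sketch of that calibration, with a made-up "risk dial"; the function name and numbers are purely hypothetical.)

```python
def crash_penalty(reward_per_trip, human_crashes_per_trip=1/500, risk_dial=1.0):
    """Calibrate the crash penalty so that, at human-level crash rates, the
    expected crash cost roughly cancels the reward earned between crashes.
    risk_dial < 1 tolerates more risk (e.g. an emergency), > 1 is more cautious."""
    trips_between_crashes = 1.0 / human_crashes_per_trip
    return -risk_dial * reward_per_trip * trips_between_crashes

# Example: +1 reward per completed trip, one crash per 500 trips on average
print(crash_penalty(reward_per_trip=1.0))   # -500.0
```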
Engagement
Did you lose your good camera, with the tripod?
... like watching an AI learn how to programme an AI ...
This video has clear themes, but what is its message? What's its point? Could a link to the paper have sufficed? Is this video itself helpful?
Look up the concept of science communicators.
So if you want to create a self driving car, you release it half-finished and tell people they need to keep their hands on the wheel. Then you pay very close attention to when the driver makes corrections to what the autopilot is doing.
Or if you find people are voting videos down only because lots of other people have, polluting the data, you hide the down votes.
The motivations of all modern companies suddenly look very different from the old-school "maximize profit".
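(A hypothetical sketch of the "watch the driver's corrections" idea - logging every human override as a training example. The names, tolerance value and action representation are invented; this is not how any particular company actually does it.)

```python
# Hypothetical sketch: mine driver corrections as training signal. Every time
# a human overrides the autopilot, record the state and what the human did
# instead, so the model can later be retrained on its mistakes.
corrections = []

def on_control_step(state, autopilot_action, human_action, tolerance=0.05):
    """Record a correction whenever the human's input diverges from the autopilot's."""
    if abs(human_action - autopilot_action) > tolerance:
        corrections.append({
            "state": state,
            "model_action": autopilot_action,
            "human_action": human_action,   # treated as the 'correct' label
        })
```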
5:10 don't drop anything near it
Blue car at 7:42 has "1337C" licence plate. A Rick&Morty reference? :-)
Leet reference
Yeaaaah boy
Griswold
Gym???
license plate "1337 c" l33t dont mind if i do
Cool
I consider myself to be quite knowledgeable when it comes to The Hitchhiker's Guide to the Galaxy, and as best I can tell, he butchered whatever he was trying to reference.
If I'm correctly inferring what he's going for, it's the 42 bit.
Engineers: *Builds a super-powerful computer system.* "What's the answer to the ultimate question of Life, the Universe, and Everything?"
Computer system: ... *1,000 years later.* "42. You asked for the answer to the ultimate question. But you'll need an even more sophisticated system in order to figure out what the right question is."
Goodbye
More real world examples would be appreciated
....the first iteration of the Matrix.
This. Why did we spend 50 years making "robots" tested in real life, wasting time on broken designs and materials, when we could test tens or hundreds in virtual spaces and then build one or two working prototypes? Yeah, I know computation was low for a long time, but if building a robot plus its computer takes time, how is building just the computer and using existing servers to simulate any more expensive?
In the video it's mentioned that simulation can only get you so far as some things are too complex to simulate with any sort of meaningful accuracy, like the driving habits of humans for example. Less computational power back then also meant less accurate simulations.
@@timconlin7692 I agree. But coming from when I was a kid, it was all about robots driving around a room/box. The kind of thing we could simulate, and the kind of thing we could see was not gonna become "self aware" from a tiny 8bit chip. :P
Safety gym? That sounds so cringe.
1337
Blue car == 1337
7:10 1337
DOWNVOTED
unannounced advertising
Didn't catch the first few seconds, huh?
You lost me, not following what you're saying.
That's when they announced the advertising.
This isn't Reddit, your down votes mean nothing here
@@LikelyToBeEatenByAGrue
As a courtesy to the subscriber / viewer, I'm suggesting a channel include the text _"Includes Paid Subscription"_ prior to the advertising.
Merely announcing the name of the advertiser - i.e. that the channel has advertising - can hardly be considered any kind of prior notice.
And none of this is AI. These are just really complex human written programs.
first
Sorry. I beat you to it.
@@AgentM124 You were the zeroth, he was the first :-)
9:13 this is such a relatable way to explain the unsexy 99% of research and development.
What Guide is he talking about?
@@RockWolfHD
A book by Douglas Adams, "The Hitchhiker's Guide to the Galaxy".
@@Hexanitrobenzene thank you.
@@RockWolfHD
You are welcome :)