Verifying AI 'Black Boxes' - Computerphile

  • Published: 7 Dec 2022
  • How do we check whether a black box system is giving us the right result for the right reason? Even a broken clock is correct twice a day! - Dr Hana Chockler is a Reader in the Department of Informatics, King's College London, and Principal Scientist at causaLens
    Relevant papers: bit.ly/C_Hana_Paper1 bit.ly/C_Hana_paper2
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 134

  • @checkboxxxproductions
    @checkboxxxproductions a year ago +85

    I love this woman. I need her to explain all computer science to everyone.

    • @bilboswaggings
      @bilboswaggings a year ago +3

      I need her to explain red "pandas" to everyone

    • @AileTheAlien
      @AileTheAlien a year ago +1

      She does a great job of breaking down the problem and explaining all the pieces, without missing details or using unexplained jargon! :)

  • @hemerythrin
    @hemerythrin a year ago +101

    This was great! That idea about throwing away parts of the input is so clever, love it

    • @Dante3085
      @Dante3085 a year ago +2

      Same. I didn't think about that before. You can actually try to narrow down what part of the input data was relevant for the decision. Cool!

    • @brooklyna007
      @brooklyna007 a year ago

      Most image models have natural heat maps in the upper layers for each output class for a given image. Most language models can do a similar heat map over the tokens in a sentence. I really don't know of any major image or language models that can't create heat maps giving you a sense of where the decision came from. I would think structured/tabular data is more relevant to black-box systems, since the features don't exist in space or along a sequence.
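
For readers curious how such heat maps can be produced without opening the model at all, here is a minimal occlusion-sensitivity sketch (my illustration, not from the video): slide a grey patch over the image and record how much the target class probability drops. `predict_proba` is a hypothetical wrapper around any image classifier that returns per-class probabilities for a batch of H x W x 3 arrays.

```python
import numpy as np

def occlusion_heatmap(image, class_idx, predict_proba, patch=16, stride=8):
    """Grey out one patch at a time; record the drop in the class probability."""
    h, w, _ = image.shape
    baseline = predict_proba(image[None])[0, class_idx]
    rows, cols = (h - patch) // stride + 1, (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = image.mean()  # hide this patch
            heat[i, j] = baseline - predict_proba(masked[None])[0, class_idx]
    return heat  # high values mark regions the prediction depends on
```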

  • @davidmurphy563
    @davidmurphy563 a year ago +3

    Aw, she seems lovely. The sort of person that hugs children with a big smile.
    Great explanation too.

  • @danieldunn6329
    @danieldunn6329 a year ago +7

    Extremely grateful to have taken the Formal Verification course with Hana during my final year of my undergrad.
    Great video and fantastic to see her here on this channel 😊

    • @DavidLindes
      @DavidLindes a year ago +2

      Nice! Yeah, it's fun seeing people one knows showing up on these. I had a friend show up in a Numberphile, and that was super fun to see. :)

  • @MarkusSimpson
    @MarkusSimpson a year ago +22

    Amazing. I loved the enthusiasm and energy for the subject, and the accent is lovely to listen to. If only my university lecturers were this passionate when teaching us, it would make things flow much more easily.

  • @barrotem5627
    @barrotem5627 a year ago +47

    What an absolutely brilliant video - clear, sharp and understandable.
    Truly great.

  • @undisclosedmusic4969
    @undisclosedmusic4969 a year ago +10

    This video and the linked papers are about model interpretability/explainability and not (formal) model verification; may I suggest changing the title to "Interpreting/Explaining AI Black Boxes"?

  • @henlyforbesly2176
    @henlyforbesly2176 a year ago +3

    I never heard of this kind of explanation on image recognition AI before! Such a simple and intuitive explanation! Thank you, miss! Very clever and well delivered!

  • @leftaroundabout
    @leftaroundabout a year ago +44

    There's an important aspect missing from this video: if you just consider arbitrary ways of obscuring part of the image, you can easily end up with a particular pattern of what's obscured and what's not that influences the classification. In particular, the standard convolutional neural networks are all subject to _adversarial attacks_ where it may be sufficient to change only a handful of pixels and get a completely different classification. So if one really just tries to find the _minimal exposed area_ of an image that still gets the original classification, one invariably ends up with a bogus result that does nothing to explain the actual original classification.
    There are various ways to circumvent this issue, but it's unfortunately still more of an art than a clear science. The recursive approach in Chockler et al. 2021 definitely is a nice construction and seems to work well, but I'd like to see some better mathematical reasons for doing it this way.
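
For readers who haven't seen the adversarial-attack effect described above, a rough FGSM-style sketch (my own illustration, not from the Chockler et al. paper) shows how a nearly invisible perturbation can change a network's output. `model` is assumed to be any differentiable PyTorch classifier that returns logits for a (1, 3, H, W) image in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """One-step FGSM: nudge every pixel slightly in the direction that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # label: tensor of shape (1,)
    loss.backward()
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()  # visually near-identical, yet often classified differently
```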

    • @DavidLindes
      @DavidLindes a year ago +4

      I'm curious... are you criticizing the _video_ or the _paper_ here? Because if it's the video, like... this is Computerphile; it's trying to give lay folks some general understanding of things, and is thus potentially both getting a simplified explanation from the speaker in the actual interview _and also_ quite possibly editing it down (many videos have extra-bits companions showing parts that didn't make the cut for the main video)... AND, also, there's a video linked at the end (v=gGIiechWEFs) that goes into those adversarial attacks. So it seems like Computerphile is doing OK in that regard? There's also 12:25.
      Now, if you're criticizing the paper, then that's a different story. And I haven't read it, so I won't say more on that. I just think that if your critiques are really of the video per se, they're probably a bit misguided. I thought this was a great surface-level explanation for a lay audience. Yes, it left stuff out. It wasn't trying to be comprehensive, though, so that's OK by me.

    • @leftaroundabout
      @leftaroundabout a year ago +2

      @@DavidLindes I was mostly just adding a remark. I'm not criticizing the paper, which does address the issue, albeit a bit hand-wavey. The video - as you say - has valid reasons for not going into all the details, however I do feel it oversimplified this.

    • @DavidLindes
      @DavidLindes a year ago

      @@leftaroundabout fair enough.

    • @Rotwold
      @Rotwold a year ago +1

      @@leftaroundabout thank you for the remark! It extended my knowledge on the topic :)

    • @emptyhanded79
      @emptyhanded79 a year ago

      I think Mr. Left and Mr. David are both AIs designed to teach YouTube users how to debate in a friendly manner.

  • @mryon314159
    @mryon314159 a year ago +5

    Excellent stuff here. But I'm disappointed you didn't put the cowboy hat on the panda.

  • @onlyeyeno
    @onlyeyeno a year ago +9

    @Computerphile
    Thanks for yet another interesting and enjoyable video.
    And if possible I would love to see more from Dr Chockler about their research, as I'm very curious whether/how they "test the systems" for recognising not only "pixel parts" but also more abstract attributes, "features" and "feature sets"... To give a crude example: what would be the "minimal identifying features" of a "drawing of a red panda"? How crucial is colouring? How does it differ depending on the "style of rendering"? ... and on and on... Perception is a complex and "tricky" thing, and seeing how we try to "imbue" systems with it is fascinating.
    Best regards.

  • @AmnonSadeh
    @AmnonSadeh a year ago +2

    I initially read the title as "Terrifying AI" and it seemed just as reasonable.

  • @HeilTec
    @HeilTec a year ago +4

    Great example of outside-in testing.
    Perhaps a network could be trained to supplement the output category with an explanation.

  • @KGello
    @KGello a year ago +4

    The way the question was posed made it super interesting, and the explanation was enlightening! But I also loved the Dr's accent, she could explain anything and I'd listen.

  • @ZedaZ80
    @ZedaZ80 a year ago +1

    This is such a cool technique!

  • @gmaf79
    @gmaf79 a year ago +1

    god, I love this channel.

  • @zhandanning8503
    @zhandanning8503 a year ago +3

    The video is a great explanation of explaining black-box models. I am just wondering: is there any explainability methodology for non-image data/models? Quite a lot of black-box models, I assume, work on sequential data. With images you can extract features that explain what the black box is doing, but with other data types what can we do?
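
One common occlusion-style idea for tabular (non-image) data is permutation importance: "hide" one feature at a time by shuffling it and measure how much the score drops. A short, generic scikit-learn sketch (not from the video; the dataset is just a stand-in):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; features whose shuffling hurts the score most
# are the ones the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean.argsort()[::-1][:5])  # indices of the top 5 features
```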

  • @kamilziemian995
    @kamilziemian995 a year ago +8

    "I trust technology more than people". I believe that some amount of distrust in technology, peopel and myself (third is especially hard to do) is the most resonable approach to the world.

    • @IceMetalPunk
      @IceMetalPunk a year ago +3

      She didn't say she 100% trusts technology, just that Trust(tech) > Trust(human) 🙂

    • @sinfinite7516
      @sinfinite7516 a year ago

      @@IceMetalPunk yeah I agree with what you said

  • @MushookieMan
    @MushookieMan a year ago +2

    Can it recognize a red panda with a cowboy hat on?

  • @yash1152
    @yash1152 a year ago

    My question is: how do you generate those minimal images? Is it similar to a git bisect/binary search, where you feed it iteratively reduced images until it no longer recognises them? Right?
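
Roughly, yes. One simple greedy version (a sketch under my own assumptions, not the exact recursive algorithm from the papers) partitions the image into a coarse grid and drops tiles one by one while the model keeps returning the original label; whatever survives is a locally minimal sufficient part of the input. `predicted_class` is a hypothetical wrapper returning a classifier's top-1 label.

```python
import numpy as np

def minimal_sufficient_tiles(image, predicted_class, tiles=8):
    """Greedily blank grid tiles while the top-1 label stays the same."""
    h, w = (image.shape[0] // tiles) * tiles, (image.shape[1] // tiles) * tiles
    image = image[:h, :w]                      # crop so an exact grid fits
    th, tw = h // tiles, w // tiles
    target = predicted_class(image)
    keep = np.ones((tiles, tiles), dtype=bool)
    for i in range(tiles):
        for j in range(tiles):
            trial = keep.copy()
            trial[i, j] = False                # try blanking this tile
            mask = np.kron(trial, np.ones((th, tw)))[..., None]
            if predicted_class(image * mask) == target:
                keep = trial                   # the tile wasn't needed after all
    return keep  # True marks tiles that must stay visible to keep the label
```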

  • @Veptis
    @Veptis 11 months ago

    I am preparing an evaluation benchmark for code-generation language models, which may become my bachelor thesis.
    This kind of "interpretability" can be applied not just to the input but to individual layers or even neurons. And this way you really find out where specific information is stored.

  • @Aleho666
    @Aleho666 a year ago +1

    It's so ironic that YouTube's AI has misclassified cowboy hats multiple times while talking about misclassification...

  • @lakeguy65616
    @lakeguy65616 a year ago +6

    It also depends on the basic problem you're trying to solve. Let's assume a NN trained to distinguish between bulldozers and red pandas. It will take a very small part of any image for the NN to properly classify an image. Now let's assume a NN trained to distinguish between red pandas and other animals of a similar size with 4 legs and tails. It will be much harder to distinguish between images. For an image to be correctly classified, more of the image must clearly distinguish the subject from other incorrect classes.

  • @kr8771
    @kr8771 a year ago +19

    Very interesting topic. I wonder how an AI system would respond when presented with the red panda image with the face obscured. Would it still find reasonable categories of what this might be?

    • @zenithparsec
      @zenithparsec a year ago +3

      It might still say "red panda", but it depends on how it was trained. If it had also been trained on other small mammals with similar shapes, it might guess it was one of those (e.g. an opossum or a raccoon), or it might have learned the texture of the fur and guess a completely different type of animal (or the correct one).
      The same general technique shown here could be used to find out.
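
A tiny sketch of the experiment suggested here (my illustration): black out the face region and ask the classifier again. `top_labels(image, k)` is a hypothetical helper returning the k most probable class names for one image.

```python
import numpy as np

def labels_without_region(image, box, top_labels, k=5):
    """Obscure one region (e.g. the face) and re-query the classifier."""
    y0, y1, x0, x1 = box
    masked = image.copy()
    masked[y0:y1, x0:x1] = 0          # black out the chosen region
    return top_labels(masked, k)      # does "red panda" survive without it?
```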

  • @brynyard
    @brynyard a year ago +63

    I also trust machines more than humans, I just don't trust the human that told the machine what to do :P

    • @IceMetalPunk
      @IceMetalPunk a year ago

      But that's the beauty of machine learning: the *machine* told itself what to do. The human merely told it what's important 😁

    • @brynyard
      @brynyard a year ago +6

      @@IceMetalPunk that's not really how machine learning works, but nice thought.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@brynyard That *is* how it works. Neural nets teach themselves by maximizing an objective function (equivalent to minimizing an error function). Usually the humans give them the objective function, defining what's important, but then the network uses that to teach itself what to do. That's why they're considered "black boxes": because the resulting network weights that the machine teaches itself are meaningless to humans.
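
In code, "teaching itself by minimizing an error function" boils down to something like this toy sketch (my illustration, using a linear model rather than a neural net): the human supplies the data and the objective, and the update loop adjusts the weights on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)  # data the human provides

w = np.zeros(3)                              # the machine's own parameters
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the mean squared error
    w -= 0.1 * grad                          # the update the machine applies to itself
print(w)                                     # close to the true coefficients [2, -1, 0.5]
```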

    • @brynyard
      @brynyard a year ago +8

      ​@@IceMetalPunk This is true only if you ignore all the human interaction that created it in the first place, which is kinda non-ignorable since that's what the issue I raised is all about. If you need to catch up a bit on goal setting and the problem of defining them you should go watch Robert Miles videos.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@brynyard I mean, by that logic of "they didn't teach themselves because humans created them", wouldn't that mean humans don't learn because we didn't create ourselves, either?

  • @warrenarnold
    @warrenarnold a year ago +2

    Just like the silent kid in class is never that silent, AI must be hiding something 😅

  • @tramsgar
    @tramsgar a year ago

    Good topic. Thanks!

  • @lakeguy65616
    @lakeguy65616 a year ago +6

    It all depends on the training dataset. If you have trained your AI to classify Cowboy Hat as one class and children as another class, what happens when you pass a child wearing a cowboy hat through the NN? (let's assume the child and hat are about equal in size in the image and let's assume the training dataset contains equal numbers of images of children and cowboy hats. ) Such an image would clearly be a border case for both classes. A NN that labels the image of a child wearing a cowboy hat as a cowboy hat would be correct. If it labels the image as a child, that too would be correct.

    • @DjSapsan
      @DjSapsan a year ago +3

      Usually an advanced NN can find multiple objects in an image.

    • @warrenarnold
      @warrenarnold a year ago +1

      @@DjSapsan You say so, but how do you hide parts of the image automatically when hiding part of it could mean completely cutting out the object? Say you hide the upper half of a child wearing a hat: now the hat is gone! Unless you cut out where the hat is and start the hiding process from there.
      It's good, yes, but maybe just say this method is good for simpler, non-noisy subjects.

  • @pierreabbat6157
    @pierreabbat6157 a year ago

    The man in the restaurant knows he's a red panda because he eats, shoots, and leaves.

  • @SelfConsciousAiResistance
    @SelfConsciousAiResistance a year ago

    "Thousands" is an understatement. Black boxes are math equations held by computer functions; the math arranged itself. But math is magic: math arranged in the shape of consciousness.

  • @ardenthebibliophile
    @ardenthebibliophile a year ago +9

    It would be interesting to see if there are other subsets of the image that return "panda" but without the face. I suspect there's probably one with some small set of pixels that just *happens* to return "panda".

    • @Sibula
      @Sibula a year ago +2

      There's a related video linked at the end: "tricking image recognition"

  • @rainbowsugar5357
    @rainbowsugar5357 a year ago

    Bro, how is a channel 9 years old and still uploading consistently?

  • @Finkelfunk
    @Finkelfunk a year ago +2

    I mean if I am ever in need of a black box where I have no idea what happens inside I just start writing my code in C++.

    • @satannstuff
      @satannstuff a year ago

      Are you implying you think you know what's actually going on at the hardware level with any other language?

    • @Finkelfunk
      @Finkelfunk a year ago

      @@satannstuff With other languages it's just less apparent that I'm blissfully ignorant.

  • @meguellatiyounes8659
    @meguellatiyounes8659 a year ago

    I have a question: how was the random number generator first implemented in Unix?

  • @Nightspyz1
    @Nightspyz1 a year ago +1

    Verifying AI? more like Terrifying AI

  • @deadfr0g
    @deadfr0g a year ago +1

    Nobody:
    A rapper in 2016: 12:22

  • @IanKjos
    @IanKjos a year ago

    Testing fundamentally can demonstrate the presence, but not the absence, of problems even with systems we consciously design. How much less adequate is it when we are deliberately ignorant of how the system proposes to work! And yet ... it's better than nothing.

  • @nazneenzafar743
    @nazneenzafar743 a year ago

    As always another nice computerphile video; can you guys please make another video about open GPT?

  • @cagra8448
    @cagra8448 a year ago

    Are you going to make a video about how ChatGPT works?

  • @ihrbekommtmeinenrichtigennamen
    @ihrbekommtmeinenrichtigennamen a year ago +1

    Well... this reduces the probability of getting surprised by wrong results, but calling this verification is very far-fetched. If you wanted to use this method to verify that your self-driving car correctly stops when the situation calls for it, you'd have to throw **every possible** "you need to stop the car immediately" situation at it. But that's simply not feasible.

  • @00alexander1415
    @00alexander1415 2 months ago

    I hope they aren't directing the occlusion themselves ("in quite an uninteresting pattern, if I might add") but letting the AI figure it out itself; we might not, but an AI could tell animals apart by their fangs.

  • @gutzimmumdo4910
    @gutzimmumdo4910 a year ago +1

    great explanation

  • @SO-dl2pv
    @SO-dl2pv a year ago +1

    This really reminds me of Vsauce's video: do chairs exist?

  • @user-ik8my9kb5h
    @user-ik8my9kb5h a year ago +1

    Cross validation ?

    • @C00Cker
      @C00Cker a year ago

      cross validation wouldn't really help in the "cowboy hat" case, for example, as all the cowboy hat instances in the training data set were just people wearing one.
      The only thing cross validation is good for is checking whether the algorithm's performance is dependent on the particular train set / test set split - essentially, it can detect over/under-fitting, but not really what they focused on.
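
In scikit-learn terms, cross-validation amounts to the sketch below (a generic example, not from the video): it shows how stable the score is across different train/test splits, but says nothing about which features a single decision relied on.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5)
print(scores.mean(), scores.std())  # stable scores across folds, but no explanations
```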

  • @antivanti
    @antivanti a year ago +1

    I don't worry about AI as much as I worry about car manufacturers' sorely lacking security competence... Taking over critical systems of a car through the Bluetooth of the car stereo is a thing that has happened... WHY THE HELL ARE THOSE SYSTEMS EVEN CONNECTED TO EACH OTHER?!

  • @Verrisin
    @Verrisin a year ago

    covering it with cardboard will not change whether today is a Monday ...

  • @cedv37
    @cedv37 a year ago

    02:20 Does she mean by "complicated" that it is utterly hopeless and impenetrable to any analysis or reduction from the inside, because it is inherently incomprehensible at that level?

    • @Sibula
      @Sibula a year ago +4

      Not really. Especially for smaller networks, like a few hundred nodes wide and a few layers deep, it is possible to analyze what features the different nodes represent and therefore understand how the network is classifying the images. For deep convolutional networks it's practically impossible, but theoretically it would still be possible if you put enough time into it.
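
For small networks, that kind of direct inspection can be as simple as looking at each hidden unit's incoming weights. A generic scikit-learn sketch (my example, not from the video): train a tiny MLP on 8x8 digit images and view each unit's weights as an 8x8 template of what excites it.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

first_layer = mlp.coefs_[0]               # shape (64, 16): pixel -> hidden-unit weights
templates = first_layer.T.reshape(16, 8, 8)
print(templates.shape)                    # each 8x8 slice shows what one node "looks for"
```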

    • @rikwisselink-bijker
      @rikwisselink-bijker a year ago +3

      @@Sibula I think the point is that our ability to analyse and explain lags several years behind our ability to create these systems. So in that sense, it is hopeless to attempt direct analysis. That is probably one of the primary drivers of research like this.

  • @AndreasZetterlund
    @AndreasZetterlund a year ago +2

    👎 This won't work. It doesn't verify the AI at all. Just think of the examples/attacks with almost invisible noise or a couple of changed pixels that are barely noticeable to a human but completely change the network's result.

  • @AA-qi4ez
    @AA-qi4ez a year ago +33

    As a visual systems neuroscientist, I'm afraid of the number of wheels that are being reinvented by folks who don't study their 'in vivo' or 'in silico' counterparts.

    • @boggledeggnoggler5472
      @boggledeggnoggler5472 a year ago +7

      Care to point to one that this video misses?

    • @joegibes
      @joegibes a year ago

      What kind of things are being reinvented unnecessarily?
      Like, training a generic model for everything instead of combining more specific algorithms/models?

  • @stefan_popp
    @stefan_popp a year ago +2

    What if our idea of what makes a panda a panda is wrong and the AI uncovers the true specification? We'd throw it out and say it's silly, while, in fact, we are the silly ones. That logic should be applied to things where we're not that sure and that we didn't define ourselves.

    • @IceMetalPunk
      @IceMetalPunk a year ago +3

      ...humans invented the word "panda", we get to decide what it means.

    • @stefan_popp
      @stefan_popp a year ago

      @@IceMetalPunk ...we also invented the word "blue", yet people around the world clearly don't agree on what 'blue' is.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@stefan_popp So you admit that the definition of words is subjective and there is no "true specification", then.

    • @stefan_popp
      @stefan_popp a year ago

      @@IceMetalPunk Of course you can, e.g., define red pandas to be inanimate objects, but it might not be very useful to you.
      Real-world examples: an AI "wrongly" detected breast cancer in a research participant. One year later it turned out that the participant had had very-early-stage cancer that the human experts missed.
      A Go-playing AI makes a seemingly unbeneficial move. Later it turns out it was a brilliant move human players had never thought of.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@stefan_popp I don't think either of those examples are of the AI finding a "true specification" and that "our ideas of it were wrong". Rather, they're examples of the AI seeing patterns that *match* our existing specification even when we thought the data *didn't* match our specification.

  • @guilherme5094
    @guilherme5094 a year ago

    👍

  • @mytech6779
    @mytech6779 a year ago +5

    The trust issue isn't with the AI in the individual car, the danger is with the control it gives to the Xi in the capital.

    • @Ergzay
      @Ergzay a year ago +1

      You think the government is controlling the AI in the car?

    • @mytech6779
      @mytech6779 a year ago +1

      @@Ergzay Do you genuinely not know how political power functions? Tech has a long history of being widely abused by those in government with excess ambitions.

  • @JW-tt7sy
    @JW-tt7sy a year ago

    I trust humans to do, as they always have, what is in their nature. I expect "AI" will be no different.

  • @ranjeethmahankali3066
    @ranjeethmahankali3066 a year ago +2

    Obscuring parts of the image doesn't rule out the possibility that the AI system is giving the right answer because it is Monday. For that you'd have to test the system on different days of the week, and prove that no correlation exists between the day of the week and the performance of the system.

    • @IceMetalPunk
      @IceMetalPunk a year ago +3

      It absolutely does, because in order to find the minimally sufficient area for a positive ID, that process requires you to trim down until you get *negative* IDs. So the same process will always produce both positive and negative results on the same day, proving the day of the week is irrelevant :)

    • @drdca8263
      @drdca8263 a year ago +1

      @@IceMetalPunk well, if you only test on Monday, you haven’t shown that it doesn’t behave differently on days other than Monday, only that, currently, the conclusion it is based on that part of the image.
      (Though presumably you’d know whether the network even gets the day of the week as an input)

    • @phontogram
      @phontogram a year ago +2

      Wasn't the Monday example there to make the correlation aspect clearer to the audience?

  • @natv6294
    @natv6294 a year ago

    Image diffusion models are black boxes, and they are "trained" on unregulated data.
    They don't have creative intent or choice like a human and don't get "inspired" like us - it's computing power with human algorithms and math.
    In many cases it causes severe data leakage, and the devs can't credibly argue they aren't ripping off individuals or intellectual property.
    Data shouldn't be trained on without consent, especially since the biggest problem in machine learning currently is that it can't forget.
    What we witness today is how exploitation can happen in ML; ethics exist for a reason - creating shouldn't come at the expense of others without their consent.
    "For research purposes only" is one thing, but for the profits of a few corporations who truly own it, to leech off us all? No thank you.
    AI advocates should really focus on ethics - because as much as they want to romanticize it, the only sentient beings that exist right now are us, humans. And we don't treat each other well at all. Try to actually use it for important things like medicine or the environment instead of as a vehicle for power and capitalism.
    "Progress" backwards.

  • @nicanornunez9787
    @nicanornunez9787 a year ago +1

    Lol, I don't trust self-driving cars because I have a bike and access to Tesla's record with bikes.

  • @bamboleyo
    @bamboleyo a year ago

    How well will it do with a kid wearing a T-shirt with a panda face on it and a cowboy hat with a starfish logo on the front? For a human it would be obvious...

  • @nigh7swimming
    @nigh7swimming a year ago +2

    A true AI should not depend on human given labels, it should learn on its own what a Panda is, given pictures of animals. It would infer that a subset of those share common properties and hence is a new class. It would then label it in its own way. Then we'd need to link those labels to our human words.

    • @Sibula
      @Sibula a year ago +2

      You're speaking of unsupervised learning, like for example cluster analysis.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      That already exists. It's cluster based classification, nothing new. Look into K-Means Clustering for a simple example.
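
For the curious, the kind of label-free grouping being referred to takes only a few lines in scikit-learn (a generic sketch): the algorithm forms clusters on its own, and only afterwards do humans attach names to them.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)      # pretend the species labels don't exist
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])             # machine-made groups, not human words
```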

    • @Lodinn
      @Lodinn a year ago

      @@IceMetalPunk Yeah, but it is still miles behind. Interesting topic, but the progress is pretty slow. I worked on some high-dimensional clustering from first principles (topology) a few years ago; the computational requirements are obscene, and parametrized clustering still works poorly. Part of the problem is that we still need to impart our human values to it, because otherwise the answer is 42. Correct, but totally unusable.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@Lodinn Values are entirely irrelevant to clustering...

  • @YouPlague
    @YouPlague a year ago

    Isn't verification a misnomer here? You are not proving anything, just testing.

  • @RonJohn63
    @RonJohn63 a year ago

    The follow-on question is to ask why AI sometimes confuses black people's faces with chimpanzees.

  • @Fenyxfire
    @Fenyxfire a year ago

    Cool explanation, but honestly, even if I could afford such a thing... never. My sense of self-worth involves my ability to do things for myself, and I LOVE DRIVING. So no thanks.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      You realize the self-driving car was just one example, but this approach is useful to test *any* classifier AI, right?

  • @moth.monster
    @moth.monster a year ago

    Machine learning systems should never be trusted in safety critical places. The risk is too high.

  • @renanalves3955
    @renanalves3955 a year ago +2

    I still don't trust it

    • @IceMetalPunk
      @IceMetalPunk a year ago

      Keep in mind, the question isn't "do you trust it 100%?" but "do you trust it more than you trust the average human?" If your answer is still "no", then why is that?

    • @AndreasZetterlund
      @AndreasZetterlund a year ago +1

      Because a human won't be fooled to think that a panda is an apple when a couple of pixels in the image change or some imperceptible noise is added to the image.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@AndreasZetterlund Imperceptible to you, perceptible to the AI. On the other hand, perhaps an AI wouldn't be fooled into some of the many optical illusions that trick human perception, making us even. Even better, an AI won't make poor decisions based on emotions, like road rage, which often get humans killed.
      Are AIs perfect? No. But neither are humans. The question is which is safer, and I think humans have proven over and over that we've set that bar very low for the AI to overtake us.

    • @AndreasZetterlund
      @AndreasZetterlund a year ago

      @@IceMetalPunk the point is that if an AI can fail on something simple that is obvious to any human (which these attacks demonstrate), then we have not verified that the AI will work better than a human.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      @@AndreasZetterlund We also haven't verified that it is worse than a human, either. "Better" is a vague term. It may not work better than a human in the face of these particular attacks that are "obvious and simple to any human", but it does work better than a human in the face of many other challenges that humans fail at. "It fails at one specific thing that humans do better" is not equivalent to "it is worse than humans overall".

  • @BrianMelancon
    @BrianMelancon a year ago +3

    @0:30 If every car was self driving I would feel much better than how I feel with the current situation. I trust the automated system to act appropriately. I don't trust the humans to always act in a rational way for the benefit of all. If you put the two together, you get a deadly AI version of prisoner's dilemma.

  • @smurfyday
    @smurfyday a year ago

    People calling them self-driving cars when they can't is the problem. They kill themselves and others.

  • @SansWordHuang
    @SansWordHuang a year ago

    This video is great, easy to understand and inspiring.
    With only one thing I want to say.
    Maybe it is not a panda?

  • @TheNobleG
    @TheNobleG a year ago

    panda

  • @vzr314
    @vzr314 a year ago +1

    I trust the system, but I don't want it in my car. I simply love driving too much, and my freedom of choice too, regardless of whether an AI can or cannot do things better than me. Usually the people who don't drive or don't like driving are among the biggest self-driving car advocates.

    • @IceMetalPunk
      @IceMetalPunk a year ago

      You realize a self-driving car generally always includes the option for manual driving if you want, right? Surely there are *some* times you need to get from point A to point B but don't want to, or can't, drive the entire way. For instance, multi-day road trips; why stop to sleep when you can keep going and still sleep?

  • @luistiago5121
    @luistiago5121 a year ago

    The comments are really hilarious. People are really a bunch of sheep that follow someone/anyone they don't really know anything about. Yes, she may be an expert working in the area, but where is the critical thinking that everyone should have? How can we trust a system that isn't alive, can't be hurt, and doesn't give a rat's ass about the living things around it? Come on, man...

  • @CAHSR2020
    @CAHSR2020 a year ago

    This was a fascinating topic but the accent was hard to follow. I could not understand most of what the presenter said and the disheveled appearance was distracting. Love the channel but this video was subpar.