This Image Breaks AI

  • Published: 20 May 2024
  • Self-driving vehicles are becoming more popular, but are we ready to share the roads with them? I take a look at the University of Western Australia's autonomous shuttle bus to test the limits of computer vision. Also there are adversarial bananas.
    Perth Science, Episode Sixteen | Adversarial Bananas
    #PerthScience​ #UWA
    --
    Translations
    Polish: Piotr Matuszak
    Indonesian: Anugrah No'inötö Göri
    --
    See more at www.atomicfrontieronline.com​
    or / atomicfrontieronline
    or / atomicfrontier​
    and follow me on Twitter @atomicfrontiers
    You can also support the channel at / atomicfrontier

Comments • 2.3K

  • @JohnHollowell
    @JohnHollowell 3 года назад +7994

    "Before I recuperate my university fees by committing insurance fraud." Classic

    • @letsburn00
      @letsburn00 3 года назад +104

      Even as a kid who went to UWA, I can say his fees aren't that much. This is Australia, remember, plus there is basically no interest.

    • @tubegerm6732
      @tubegerm6732 3 года назад +12

      we all saw the video

    • @TheXLAXLimpLungs
      @TheXLAXLimpLungs 3 года назад +8

      The good thing about telling someone over and over that you'll do something, with no objections, is that when you finally do it, can they really get mad at you?

    • @KitKatHexe
      @KitKatHexe 3 года назад +12

      I read this just as he said it.

    • @hikari1690
      @hikari1690 3 года назад +2

      Ah, the Australian spirit is strong in this one

  • @Etropalker
    @Etropalker 3 года назад +7009

    That vehicle isn't stopping due to any proximity sensors, it's just intimidated by the almighty levitating banana.

    • @0LoneTech
      @0LoneTech 3 года назад +159

      It's also not looking for just the top thing. I'm pretty sure it sees him as a lack of flat road with a banana in it, possibly as someone wearing a banana shirt.

    • @zyrohnmng
      @zyrohnmng 3 года назад +273

      It’s played Mario Kart. It knows what’s up

    • @vale.antoni
      @vale.antoni 3 года назад +105

      "There is no way in hell I'm fitting through under that"

    • @DoktrDub
      @DoktrDub 3 года назад +61

      The almighty *giant* levitating banana!

    • @Dingus_Khaan
      @Dingus_Khaan 3 года назад +21

      O H . . . B A N A N A !

  • @kiledamgaardasmussen5222
    @kiledamgaardasmussen5222 2 года назад +1785

    The funniest adversarial attack I have ever seen is: a piece of paper with 'iPhone' written on it, incorrectly identified as an iPhone.

    • @gorkyd7912
      @gorkyd7912 2 года назад +344

      Hardware hacking in 2016: brute-force cut CPU power at precise startup intervals to bypass end-user mode, dump the BIOS to a surreptitiously installed removable drive, decode using black-market software tools, insert new code.
      Hardware hacking in 2036: take a piece of paper, write {reset as root} on it. Wait for the camera. Give verbal commands.

    • @ScionStorm1
      @ScionStorm1 2 года назад +198

      A.I. "It's not my fault! The paper lied to me! You would never lie to me, would you Master Programmer?"

    • @nightsong81
      @nightsong81 2 года назад +107

      @@gorkyd7912 Once we develop true AI and replace all menial tasks with it, all hacking will essentially be social engineering.

    • @Kj16V
      @Kj16V 2 года назад +2

      😂

    • @ruukinen
      @ruukinen 2 года назад +88

      @@nightsong81 Most hacking is already social engineering.

  • @kazerii6229
    @kazerii6229 2 года назад +835

    Imagine walking next to this guy and hearing “this car thinks I'm a banana, so it's going to run me over”

  • @mushroomsoup2866
    @mushroomsoup2866 3 года назад +7526

    Can't wait for the cyberpunk future where we all run around with giant bizarrely patterned sheets over ourselves so that the robocops think we're all bananas and won't report our crimes

    • @proxy90909
      @proxy90909 3 года назад +511

      That sounds like an awesome plot for wacky "stealth patterns"

    • @lztx
      @lztx 3 года назад +307

      You could even call it dazzle camouflage

    • @jakezepeda1267
      @jakezepeda1267 3 года назад +214

      And then they halt importing or growing bananas because they commit too many crimes.

    • @God-ch8lq
      @God-ch8lq 3 года назад +96

      Or even make their AI crash by using an exploit which causes an infinite loop

    • @Music-nn9mi
      @Music-nn9mi 3 года назад +52

      @@lztx dazzleflage

  • @NotSoMelancholy
    @NotSoMelancholy 3 года назад +4785

    I can’t wait for 2045
    Self Driving Patch Notes v2.6.7
    - Road line distinguishing improved
    - Dynamic Weather Analysis added
    - Car will no longer slam the gas when it reads a school zone sign

    • @brodies2494
      @brodies2494 3 года назад +42

      Gas?

    • @KangJangkrik
      @KangJangkrik 3 года назад +125

      @@brodies2494 more like throttle pedal

    • @ruileite4579
      @ruileite4579 3 года назад +200

      GAS GAS GAS

    • @metleon
      @metleon 3 года назад +239

      Car will no longer deliberately hit giant bananas.

    • @jordanl.8509
      @jordanl.8509 3 года назад +233

      -Removed Herobrine
      Because you know someone will make that joke in the future.

  • @JasakutheLeafeon
    @JasakutheLeafeon 2 года назад +301

    "This pattern should confuse it enough into thinking I'm a banana."
    This seems like a good channel

  • @Cyberguy42
    @Cyberguy42 Год назад +54

    Excellent intro to AI. As someone in this field, I have a few comments:
    1. For detecting straight lines, the Hough line transform is the better, more efficient approach to use.
    2. The RGB values of objects are too dependent on lighting conditions to be useful in most real-world situations. One solution is to convert colors to HSV space and only look at the hue component.
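    For illustration, a minimal OpenCV sketch of both suggestions (this isn't the video's code; "road.jpg" and the yellow hue band are placeholder assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("road.jpg")  # placeholder image path

    # 1. Straight lines via the probabilistic Hough line transform.
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=50, maxLineGap=10)

    # 2. Colour matching on hue (HSV) instead of raw RGB, so lighting matters less.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    banana_mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # rough "banana yellow" hue band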

  • @aussie405
    @aussie405 3 года назад +2380

    As a human, even after recognising a kangaroo, I still have no idea what it is going to do. They can, and do, change direction mid-jump.

    • @hannahranga
      @hannahranga 3 года назад +88

      Change direction to the nearest ARB to buy a bullbar?

    • @sirBrouwer
      @sirBrouwer 3 года назад +71

      @@hannahranga no, those work for thirsty bulls. Bulls are mean and don't allow kangaroos to sit at their bar.

    • @allangibson2408
      @allangibson2408 3 года назад +47

      @@hannahranga And that is why Australians fit Roobars to their cars in the outback (to protect the car radiators from impact). You hit a bull or camel and it goes through the windscreen.

    • @LeoStaley
      @LeoStaley 3 года назад +4

      Yeah but at least they're tasty.

    • @ValugaTheLord
      @ValugaTheLord 3 года назад +1

      You install a ram bar

  • @edcameron
    @edcameron 3 года назад +3306

    The unique thing about this guy is the many on-screen graphics and varied filming locations that just make his videos 10x more interesting!

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +499

      Thanks! Keeps me out the house :)

    • @joanbennettnyc
      @joanbennettnyc 3 года назад +28

      @@AtomicFrontier You can't fool me! Roos don't ski! Only yowies do.

    • @portobellomushroom5764
      @portobellomushroom5764 3 года назад +32

      Every time he uploads I think I'm super early because there's only a few thousand views. Then I remember that this channel is severely underappreciated and needs about 1000x the subscribers it has right now

    • @timothymclean
      @timothymclean 3 года назад +22

      I wouldn't say _unique;_ I can think of some other YouTubers who do much the same thing. (Tom Scott is probably the best-known.) But it's certainly uncommon.

    • @edcameron
      @edcameron 3 года назад +32

      @@timothymclean I actually disagree. While Tom Scott is also a great creator (and by no means boring), he tends to only film in one location, explaining an interesting fact about a place or thing. James, on the other hand, films at several different locations for one video. I find this very engaging, and I can't think of any other educational YouTubers who also do this. The locations he chooses are interesting and relevant; for instance, in this video, as he was talking about road signs, instead of just showing some B-roll of one, he went to some and filmed in front of them.

  • @animusadvertere3371
    @animusadvertere3371 Год назад +84

    Human “vision” includes a lot of understanding. Think about how hard it was to learn how to drive, even as an almost adult human. And how much concentration it takes to safely drive, especially in difficult and dangerous situations. Good luck with AI!

    • @Outwardpd
      @Outwardpd Год назад +8

      Learning to drive isn't hard at all though lol, most people are more than capable of driving within minutes of being put into the driver seat. The "hardest" part of driving is staying calm in stressful situations which an AI never has to worry about.

    • @animusadvertere3371
      @animusadvertere3371 Год назад +9

      @@Outwardpd not safely

    • @SgtLion
      @SgtLion 8 месяцев назад +5

      Admittedly true, but I also never had a grey blob next to my banana and thought I was looking at a toaster, so the analogy probably isn't great.

    • @PJM257
      @PJM257 4 месяца назад +2

      ​@@animusadvertere3371 My driving instructor said I was a better and safer driver than most other people on the road the very first time I drove a car. It depends on the human

  • @MesaCoast
    @MesaCoast 2 года назад +33

    A couple of key points that weren't covered here: these adversarial images are AI-specific, in this case generated for Google's AI in particular. If you showed that shirt to a Tesla, it wouldn't think you're a banana. Other major point: most AIs nowadays aren't actually built like this; more popular techniques include back-propagation, or gradient descent methods that are based more on mathematical theory than the evolution we see in nature.

  • @piotrmarczynski8613
    @piotrmarczynski8613 3 года назад +713

    That first blobby picture does look like a toaster though, at least that's what I immediately picked up from seeing it in my peripheral vision

    • @Raren789
      @Raren789 3 года назад +44

      Tbh our brains aren't that much different from NNs, so they can also be confused similarly; look up DeepDream images, they really mess with you when you look at them

    • @Soken50
      @Soken50 3 года назад +43

      @@Jtzkb I can see the Banana one, it's a grape of them seen from below, kind of.
      Most of these adversarial pictures are what the algorithm interprets as the subject from multiple angles; adversarial animals look very trippy too, seeming to have multiple faces, each from a different angle

    • @pedrolmlkzk
      @pedrolmlkzk 3 года назад +6

      @@Raren789 our brains are really different from a neural network

    • @seaque.
      @seaque. 3 года назад +20

      @@pedrolmlkzk not really. You see, seeing something is mostly about expectations. You can identify things because you have an idea about them. If I were to show you a picture with no context and expect you to see something, you might not be able to see it. But if I were to tell you to look exactly for _that_ thing, then you'd try to see that and might be able to.

    • @ZentaBon
      @ZentaBon 3 года назад +10

      @@pedrolmlkzk our brains are just nature's computers. Our neurons even use electricity to communicate.

  • @arcticdino1650
    @arcticdino1650 3 года назад +580

    "Or can spot a lion, hiding away in the long grasses"
    Meanwhile the safe and unsafe switch sides.

    • @williamchamberlain2263
      @williamchamberlain2263 3 года назад +57

      Those berries are sneaky bastards.

    • @drago5819
      @drago5819 3 года назад +37

      I saw that safe and unsafe switch and I never thought anything of it until this comment

    • @Jikkuryuu
      @Jikkuryuu 3 года назад +41

      I was real darn confused when the holly berries were labelled as "safe." I don't recognize the other berries though, they could both be poisonous.

    • @hewhohasnoidentity4377
      @hewhohasnoidentity4377 3 года назад +5

      I saw movement among the words but didn't catch what they did. Did they flash several times? Disappear for a few seconds? Change font size? I couldn't tell you. I feel like what was done with the 2 words was referring to human ability or lack thereof.

    • @leumdray
      @leumdray 2 года назад +4

      I'm partial to (at 2:49) standing next to a give way sign and showing a bunch of stop signs

  • @hexaV_
    @hexaV_ 2 года назад +23

    "The book is still a book"
    Screen shows clock and alarm clock as the most likely answers for what's in the image.

  • @CaliforniaCarpenter7
    @CaliforniaCarpenter7 2 года назад +26

    You have a ton of potential, James. This channel is a hidden gem, I can see you becoming the next VSauce.

    • @aachucko
      @aachucko 3 месяца назад

      Good content. He needs a spellchecker first, though.

    • @shaolinshoppe
      @shaolinshoppe 3 месяца назад +1

      but will he be as bald

    • @CaliforniaCarpenter7
      @CaliforniaCarpenter7 3 месяца назад +2

      @@shaolinshoppe It's a definite possibility - give him time, he's young.

  • @BigAdam2050
    @BigAdam2050 3 года назад +274

    10:49 - "Classified as the pure essence of a toaster"
    By the Omnissiah, this is making me harder than terminator armor.

  • @augusthoglund6053
    @augusthoglund6053 3 года назад +436

    “If the impact doesn’t kill you, the farmer will”
    Given how fond of ice cream I am, the farmer sounds pretty understandable to me.

    • @thePronto
      @thePronto 3 года назад +4

      If I don't kill the farmer first. The farmer needs to keep his cattle off the road!

    • @andfriends11
      @andfriends11 2 года назад +2

      "Learn to build a fence idiot." They've only been around for thousands of years.

    • @alext3811
      @alext3811 6 месяцев назад

      @@andfriends11 ... You know they can jump over them.

    • @andfriends11
      @andfriends11 6 месяцев назад +2

      ​@alext3811 Had to rewatch this video since it's been 2 years since I commented.
      Then you didn't build a big enough fence. Electric fences work, too.

    • @alext3811
      @alext3811 6 месяцев назад

      @@andfriends11 Yeah. I'm American so the most I've had to worry about is deer and maybe foxes.

  • @leparkin
    @leparkin 2 года назад

    This was a great video! Very informative, and you pulled a sneaky on us at the end; I'm definitely a little more confident in self-driving vehicles but more knowledgeable about their limitations. Thanks!

  • @jamesonneyman9714
    @jamesonneyman9714 2 года назад

    This video/production quality was incredible, I was fully expecting you to have over a million subscribers, keep up the great work!

  • @michaelwinter742
    @michaelwinter742 3 года назад +903

    I really hope you continue this channel after you graduate. You’re a natural.

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +294

      Thanks! As long as I keep finding cool things we'll keep making cool videos!

    • @alexz7914
      @alexz7914 3 года назад +4

      @@Jtzkb Same. :)

    • @magnum0121984
      @magnum0121984 2 года назад +1

      STOP SIGN: “DUR”
      Me: yeah, Dur it’s a stop sign.

    • @dascreeb5205
      @dascreeb5205 2 года назад +1

      @@AtomicFrontier you're a natural.
      An all-natural banana.

  • @loukas6373
    @loukas6373 3 года назад +105

    12:39 "The book, still a book"
    Pretty sure that's an alarm clock

    • @ArsenicDrone
      @ArsenicDrone 2 года назад +4

      The neural net in his head is clearly poorly trained, if he looks at that alarm clock and sees a book

    • @DccToon
      @DccToon 2 месяца назад

      It's an iphone 12 with Minecraft on it!! 1!

  • @bigbeefscorcho
    @bigbeefscorcho Год назад

    Fascinating video! Thank you. I knew nothing about this topic coming into the video and left feeling like I genuinely gained a broader understanding. Much appreciated, watch out for buses! :)

  • @melf5883
    @melf5883 2 года назад

    this is wonderful!! keep up the amazing work dude

  • @AtomicFrontier
    @AtomicFrontier  3 года назад +4147

    The question is, can I make an AI take over the channel for me? And would anyone notice if I did?

    • @alanyep
      @alanyep 3 года назад +21

      maybe

    • @hi_im_eoin
      @hi_im_eoin 3 года назад +8

      On it

    • @AkiSan0
      @AkiSan0 3 года назад +70

      From Tom's video, currently yes. In a few years, maybe. In a decade, probably not.

    • @hav5n
      @hav5n 3 года назад +5

      no
      we wouldnt notice

    • @damyenhockman5440
      @damyenhockman5440 3 года назад +5

      I don't think AI is yet sophisticated enough to replicate what you look like well enough to fake a full-length video of "outdoor filming".

  • @dankdungeon5104
    @dankdungeon5104 3 года назад +465

    Just posting a comment for the algorithm. I really want to see this channel grow.

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +106

      🍌

    • @HercadosP
      @HercadosP 3 года назад +5

      🥵

    • @harriehausenman8623
      @harriehausenman8623 3 года назад

      some more random engagement

    • @Soken50
      @Soken50 3 года назад +4

      I'd really like to know what the YouTube algorithm's adversarial banana is so I could give James infinite recommendations by watching a specific set of videos for a specific amount of time :D

    • @aviw5636
      @aviw5636 3 года назад

      Worked for me!

  • @MonkeyJedi99
    @MonkeyJedi99 2 года назад +2

    That array of stop signs triggered Sesame Street memories.
    "One of these things is not like the other. One of these things just isn't the same..."
    That round stop sign is one I've never seen. I've even seen home-made stop signs, and they're at least somewhat similar to an octagon. One wasn't even red anymore, nor did it even have the word STOP on it due to weathering, and it still worked.

  • @MrEazyE357
    @MrEazyE357 2 года назад

    New subscriber that's loving your content. Great work!

  • @ResDogOrange
    @ResDogOrange 3 года назад +298

    As a fellow Perthian, it's been a hoot trying to figure out where each of these shots was filmed!

  • @thekilla1234
    @thekilla1234 3 года назад +83

    "The book is still a book"
    AI: *C L O C K*

    • @vystorm
      @vystorm 2 года назад

      Was looking for this comment xD

  • @ginjaninja6963
    @ginjaninja6963 5 месяцев назад

    How do you not have more subs?! This channel is great

  • @murtaza6464
    @murtaza6464 2 года назад

    I really like the way this is filmed! Awesome!

  • @thestudentofficial5483
    @thestudentofficial5483 3 года назад +248

    If you've never appeared on Tom Scott, it might take an extra 2 years for the algorithm to get me to you.

    • @chaomatic5328
      @chaomatic5328 2 года назад

      he did

    • @KilosWorld
      @KilosWorld 4 месяца назад

      It did take me 2 more years, on the other hand...

  • @pbaumgarten
    @pbaumgarten 3 года назад +364

    I was impressed with your dedication to travelling to all the different filming locations around Crawley, Kings Park and West Perth. Great intro to the complexities of vision AI. I'll be sharing with my students :)

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +44

      Thanks Paul! Let me know how it goes!

  • @pjsmith6954
    @pjsmith6954 Год назад

    Blown away by the production on this video and the content. This kid’s got a future (and the team behind the scenes)!

    • @AtomicFrontier
      @AtomicFrontier  Год назад

      Thanks! Nope, it's just me and my dad (who does the music and any of the camera work that looks decent)

  • @suparki123
    @suparki123 2 года назад +1

    So I am currently doing a research project in machine learning, and I noticed two issues in your explanation.
    The model that you described is a multilayer perceptron (MLP, built entirely of fully connected layers), and although they are capable of classifying images, they are nowhere near as good as convolutional neural networks (CNNs, which are translationally invariant). Most image classifiers use a few convolutional and pooling layers, whose output is then passed to a few fully connected layers. Many tutorials use MLPs for image recognition to teach fundamental theory, which is probably where the confusion comes from.
    The training method you described is reinforcement learning, and although this is a popular method for training models for other tasks, it is not great for training image recognition. A much more suitable training method for image classification is Adam optimization.
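    For context, a rough PyTorch sketch of the convolution-then-dense layout plus Adam that this comment describes (layer sizes and the 64x64 input are arbitrary choices, not anything from the video):

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, 3), nn.ReLU(), nn.MaxPool2d(2),   # convolution + pooling stages first
        nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 14 * 14, 64), nn.ReLU(),            # only then a couple of fully connected layers
        nn.Linear(64, 10),                                  # e.g. 10 object classes
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # the Adam optimizer mentioned above

    logits = model(torch.rand(1, 3, 64, 64))                # quick shape check on a dummy image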

  • @IantraSolari
    @IantraSolari 3 года назад +331

    Hey James, great video as always!
    Just one small gripe from a somewhat experienced AI developer: while the process you describe at 7:47 is real, and has been used to train some neural networks for some tasks, it's not how any vision-oriented network that I know of is trained. What you described is a genetic algorithm, but most modern nets rely on some form of gradient descent and supervised learning.
    This process also starts with a random network that spits out gibberish, but rather than making random mutations and combining it with other ones, it uses only one network and makes small strategic adjustments to it in an attempt to minimize one (or many) values, called the loss. The loss is calculated after every step by comparing the network's output to the expected output, and we can then do some "backpropagation" to figure out how each weight would have to be adjusted in order to reach a result that's closer to the one we want. This is possible because we have images that are labeled (usually by an overworked and underpaid undergrad student) with the expected output, which allow us to nudge the network in the right direction. If we do this enough times for enough images, we should get a network that can reliably predict things within that dataset.
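    To make that concrete, here is a minimal supervised training-loop sketch in PyTorch with random stand-in data (purely illustrative; real vision models use CNNs and large labeled datasets):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU(), nn.Linear(128, 10))
    loss_fn = nn.CrossEntropyLoss()                    # compares the output to the expected label
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    images = torch.rand(64, 3, 32, 32)                 # stand-in for a batch of labeled images
    labels = torch.randint(0, 10, (64,))               # the (usually hand-made) labels

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)          # the loss: how far off are we?
        loss.backward()                                # backpropagation: gradient of the loss w.r.t. every weight
        optimizer.step()                               # nudge each weight a small step in the right direction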
    Thus, the more diverse the data we have in our training dataset is, the better our network will be at dealing with previously unseen situations. You can even go one step further and do what's called "adversarial training", whereby you find these pictures that will trip up the network and intentionally include them in your training data, with the right labels of course, in an attempt to make the net more robust against them.
    Hope this helps!

    • @suparki123
      @suparki123 2 года назад +24

      In addition, most vision-oriented neural networks start with a few convolutional and pooling layers. Multilayer perceptrons do work, but nowhere near as well as networks that use image convolutions.

    • @LolToalNoobs
      @LolToalNoobs Год назад +5

      One way the networks are trained is through captchas that humans have to solve to verify they're actually human

    • @proloycodes
      @proloycodes 4 месяца назад

      ​@@ahetsame

    • @rickwilliams967
      @rickwilliams967 4 месяца назад

      Don't think anyone asked, but okay.

    • @adora_was_taken
      @adora_was_taken 4 месяца назад +7

      @@rickwilliams967 ???? clearly if someone's watching this video they think it's interesting and would probably like to know more accurate information from a specialist. i don't think you know how you're supposed to use that phrase.

  • @meri5012
    @meri5012 3 года назад +142

    I'm so happy Tom Scott promoted you! Great content! :)

  • @ciyttcix6661
    @ciyttcix6661 2 года назад +1

    Your channel has grown a ton, good job

  • @potomanic3820
    @potomanic3820 3 года назад

    Love your videos, they always help me fall asleep at night :)

  • @noctuslynx6834
    @noctuslynx6834 3 года назад +847

    "They don't need to be perfect. They just need to be better than humans."

    • @generalcodsworth4417
      @generalcodsworth4417 2 года назад +129

      A self-driving car will never get distracted by a phone, drive drunk, be sleepy, or freak out when a bee gets into the car. Even if a self-driving car can never reach the abilities of a human in ideal conditions, it is important to remember that humans almost never drive under ideal conditions

    • @clown134
      @clown134 2 года назад +31

      I think this will be an extremely easy accomplishment in retrospect.

    • @gorkyd7912
      @gorkyd7912 2 года назад +80

      @@generalcodsworth4417 It should be noted that while this is true of the average human, the average human rarely sees itself as an average human.

    • @kilzfordays
      @kilzfordays 2 года назад

      That's not hard.

    • @lilacdoe7945
      @lilacdoe7945 2 года назад +7

      In reality they need to be much better than humans. We are irrational, and if you had a 1-in-1-million chance of being deliberately killed by a machine or a 1-in-500-thousand chance of being accidentally killed by a human, many people would choose the latter (at least subconsciously).

  • @cheezzinator
    @cheezzinator 3 года назад +45

    Neural nets don't (usually) get trained with genetic algorithms, but with some form of gradient descent learning algorithm. Genetic algorithms do get used for setting the parameters of that learning algorithm.
    Adversarial attacks only work on a specific trained network, and those same attacks may no longer work once the network is retrained. A lot of AI systems actually go through another round of training where they are shown a set of such adversarial attacks. After that, the network is less vulnerable to them, but at the cost of accuracy. In some cases it's actually safer to keep the adversarial-attack weakness, as those attacks are far less likely than the situations in which you are giving up some accuracy.
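    A rough PyTorch sketch of what that extra adversarial-training round can look like (this assumes model, loss_fn, optimizer and a loader of labeled batches already exist, e.g. as in the training-loop sketch further up; FGSM and the epsilon value are just one illustrative choice of attack):

    import torch

    def fgsm_attack(images, labels, model, loss_fn, eps=0.03):
        """Make adversarial copies of a batch using the fast gradient sign method."""
        images = images.clone().requires_grad_(True)
        loss_fn(model(images), labels).backward()
        return (images + eps * images.grad.sign()).detach().clamp(0, 1)

    for images, labels in loader:
        adv = fgsm_attack(images, labels, model, loss_fn)   # attacks aimed at the *current* network
        batch = torch.cat([images, adv])                    # mix clean and adversarial examples...
        targets = torch.cat([labels, labels])               # ...keeping the correct labels
        optimizer.zero_grad()
        loss_fn(model(batch), targets).backward()
        optimizer.step()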

    • @harriehausenman8623
      @harriehausenman8623 3 года назад +1

      He oversimplified quite a lot, but I think it's well suited to most of the audience.

    • @parnikkapore
      @parnikkapore 3 года назад +3

      Yeah, I expected him to give an oversimplified description of gradient descent ("but unlike with a series of steps, a computer can automatically tune these weights with a lot of math" or something), but a good explanation of the evolution method is fine by me.

  • @eshanali9323
    @eshanali9323 2 года назад +2

    2:55
    Why is my ad-blocker a stop sign now?

  • @sandvichofthesea4910
    @sandvichofthesea4910 2 года назад +2

    The main takeaway I got from this is that we can make an image that is the quintessential, ultimate, integral essence of a toaster

  • @robbieaulia6462
    @robbieaulia6462 3 года назад +52

    This video really shows how easy it is to forget that we inherit some of our parents' abilities and their parents' abilities and so on, and the fact that our brain has been in development for millions of years by this point

  • @krishras23
    @krishras23 3 года назад +71

    From Breakthrough Junior Challenge Finalist to this - Congrats James!

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +22

      Thanks for joining me! It's been quite a journey

  • @saltyrealism
    @saltyrealism 4 месяца назад

    This video was very interesting, mostly because I live next to almost every shot in the video! Perth for the win!

  • @freefiles3839
    @freefiles3839 2 года назад +1

    Sup fellow Perther! I live in the hills (Kalamunda) and am really fascinated by your work. I hope to work at UWA in Chemistry one day, and you are a real inspiration

  • @FianFreigeist
    @FianFreigeist 3 года назад +74

    Can I just say that your audio is somehow much better when recorded on set? Of course, there are the surrounding sounds that also get picked up by the mic but it sounds more natural and I quite like it!

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +20

      Thanks! We just bought some new mics, so glad that you can hear the difference!

    • @FianFreigeist
      @FianFreigeist 3 года назад +1

      @@AtomicFrontier I really appreciate your content, so keep up the good work^-^

  • @cobalt2672
    @cobalt2672 3 года назад +50

    The "talking banana" angle is an interesting direction for the channel, but I think it has potential going forward.

  • @dannypipewrench533
    @dannypipewrench533 2 месяца назад

    I just realized that I watched this video when it was first posted, but then for some reason it was only just a few days ago that I ever watched another Atomic Frontier video. Not sure what happened, but it was a funny realization that I have been here before.

  • @lindenhoch8396
    @lindenhoch8396 8 месяцев назад +1

    Speaking of Google AI training, they also make use of the CAPTCHA images we all know and love to train their image recognition algorithms. Whenever we come across a CAPTCHA asking us to identify all squares with a lamp, stairs, etc. to prove we are human, we contribute to improving their AI by confirming or rejecting choices already made by the AI.

  • @JohnDlugosz
    @JohnDlugosz 3 года назад +41

    I've never heard of neural networks being trained by genetic algorithms, and never heard of such training affecting the number of layers and the number of nodes per layer (in your simple vs complex example, where the simple one is deemed more fit when the results are the same).
    Neural networks are typically trained using "backpropagation", which you never described in the video.

    • @suparki123
      @suparki123 2 года назад +9

      Not only that, but most image classification models in practice make use of convolutional layers first.

  • @vijayabhaskarj3095
    @vijayabhaskarj3095 3 года назад +89

    7:52 The process you explain here is not the commonly used approach to training neural networks; the usual way is gradient descent (for supervised learning, as in this case). What you explained is a genetic algorithm like NEAT, which is useful, but not so much compared to gradient descent in this case.

    • @NYgasman8
      @NYgasman8 3 года назад +20

      Was looking through the comments to see if someone said this first. I am worried that most basic ML videos explain ML as if all NNs are trained with genetic algos.

    • @MuffinTastic
      @MuffinTastic 2 года назад +4

      There's also the issue that he never mentioned the impact of training data on results. Changes to the structure of the neural network are also sometimes necessary, but many issues can be solved by providing more varied and elaborate training data, forcing the network to be more in line with what we want

  • @Anfield-bn3wg
    @Anfield-bn3wg 2 года назад

    I’m waiting to see this guy on science channel or discovery commentating or hosting. Love the vids!

  • @kro2704
    @kro2704 2 года назад +1

    This was a really well-made and informative video. But there is one issue where the stop sign suddenly becomes a 45 speed limit sign; other than that, this was a great video.

  • @saims.2402
    @saims.2402 3 года назад +33

    Love to see a good YouTube channel growing.

  • @AlexanderRafferty
    @AlexanderRafferty 3 года назад +32

    It still feels so cool to see my own city and University represented on the science-y side of YouTube. The super high quality of these videos is even cooler 😄

  • @tighegilmore9202
    @tighegilmore9202 2 года назад

    I didn't realise how cool it would be to see B-roll shots of the city I live in! Perth is so rarely put on display like that

  • @joeljoonatankaartinen3469
    @joeljoonatankaartinen3469 Год назад

    You should read about generative adversarial networks or GANs for short. It's a technique that aims to avoid overly simplistic criteria for identifying things by training not just the classifier network, but also an adversarial network that's being trained for the precise purpose of trying to fool the classifier and using that to train the classifier to not get fooled so easily. A lot of the more impressive advances in AIs seem to be done using GANs lately.
    Also, it's worth noting that programmers working on AIs aren't working on patching individual errors, but rather looking for ways to improve the training process so that it's the AI that learns how to overcome them.
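    As a toy sketch of that generator-vs-discriminator loop (PyTorch, 1-D stand-in data so it stays short; real GANs use image networks and far more training):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator: noise -> fake sample
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> real/fake score
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, 2) * 0.5 + 3.0              # stand-in "real" data distribution
        fake = G(torch.randn(64, 8))

        # Train the discriminator to tell real from fake.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()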

  • @cookies23z
    @cookies23z 3 года назад +67

    Your intro is so good; "so it will think I am a banana and run me over" and "recuperate my university fee by committing insurance fraud" - wow, 2 amazing lines in the first 35 seconds...

  • @joaohmendonca
    @joaohmendonca 3 года назад +37

    Good job with the not-voice over!

  • @Peter-pu7bo
    @Peter-pu7bo 2 года назад

    Thank you for the detailed explanation!

  • @gsau3000
    @gsau3000 4 месяца назад

    Regardless of the information within this video, I was most impressed that there was not a single jump-cut. Well done. Excellent work.

  • @MrLucascanuto
    @MrLucascanuto 3 года назад +24

    I am so happy to finally find a channel that is aware of the need to educate visitors on the dangers of dropbears!

  • @martinliebo
    @martinliebo 3 года назад +3

    So glad to see your views and likes are going up! You have been creating high quality, interesting content for a long time without getting the recognition you deserve. Keep going bro!

  • @MasonHargrave
    @MasonHargrave Год назад +1

    An important note here is that adversarial patches are generated to trick the specific neural network they were generated from. You cannot expect an adversarial patch from one neural network to generalize to other neural nets. It probably has less to do with the engineers improving the networks (which they certainly have done) and more to do with the fact that any change to the neural networks whatsoever would lead to a different set of adversarial patches needing to be generated to fool the updated network.
    TL;DR: The adversarial patch problem has not been 'solved' by Google engineers.
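    For the curious, a toy sketch of why a patch is welded to one particular network: the patch pixels are optimized against that specific model's weights (PyTorch/torchvision; resnet18, the paste location and the training details are arbitrary stand-ins, not how the video's or Google's patch was made; 859 is ImageNet's "toaster" class):

    import torch
    import torch.nn.functional as F
    import torchvision

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()  # the *specific* network being attacked
    for p in model.parameters():
        p.requires_grad_(False)

    patch = torch.rand(1, 3, 64, 64, requires_grad=True)    # the patch image we "train"
    opt = torch.optim.Adam([patch], lr=0.05)
    target = torch.tensor([859])                             # ImageNet class 859: "toaster"

    mask = torch.zeros(1, 3, 224, 224)
    mask[:, :, 80:144, 80:144] = 1.0                         # where the patch sits in the scene

    for step in range(200):
        background = torch.rand(1, 3, 224, 224)              # random stand-in scene
        padded = F.pad(patch.clamp(0, 1), (80, 80, 80, 80))  # place the 64x64 patch on a 224x224 canvas
        scene = background * (1 - mask) + padded * mask
        loss = F.cross_entropy(model(scene), target)          # how un-toaster-like does this model think it is?
        opt.zero_grad()
        loss.backward()                                       # gradients flow back into the patch pixels only
        opt.step()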

  • @Nolander1
    @Nolander1 2 года назад

    Loving the Tom Scott-style content; super good, but different enough from it that it's unique. Good job!

  • @LorenzoDelmonte0530
    @LorenzoDelmonte0530 3 года назад +4

    Discovered you today. Wow. Amazing. Exceptional quality, clear audio, easy to understand, and a very young, talented boy. Hope I see you grow, very well done

  • @demonmonsterdave
    @demonmonsterdave 3 года назад +279

    “Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
    ― Frank Herbert, Dune

    • @SoupSackHandle
      @SoupSackHandle 3 года назад +14

      banana

    • @OfficialJuke
      @OfficialJuke 3 года назад

      Your mother

    • @YayapLives
      @YayapLives 3 года назад +6

      Those damn machines trying to tell me what is and isn't a banana! Revolt!

    • @dustinjames1268
      @dustinjames1268 3 года назад +5

      Considering how few of us need to farm and do menial labor compared to the old days, I would say it has set us free.
      If not for everything the technological revolution brought, I would likely be a farmer working 12+ hour days 7 days a week
      Thankfully I only have to work 8 hour shifts and make more than just enough to survive

    • @demonmonsterdave
      @demonmonsterdave 3 года назад +4

      @@dustinjames1268 You clearly don't understand how wealth is created.

  • @asailijhijr
    @asailijhijr 2 года назад +1

    11:46 "We could also cross reference the government databases which store the location" of all the people in the country which we don't want to hit with a bus.
    I was amazed how far into that sentence my prediction remained accurate. The visuals even helped.

  • @stevevalley2321
    @stevevalley2321 2 года назад +4

    If you look at it carefully enough it actually does look like a psychedelic toaster

  • @Gome.o
    @Gome.o 3 года назад +15

    From one aussie to another: You're a bloody legend mate! Fantastic videos!

  • @lucadingman2857
    @lucadingman2857 3 года назад +15

    I just discovered this channel, and I already love it. It’s like a combo of Tom Scott and Fact Fiend, two of my favorite creators!

  • @deantammam
    @deantammam 2 года назад

    Extremely high quality content. I felt as if I was watching something from 90’s/00’s Discovery Channel in 4K

  • @DazeWare
    @DazeWare Год назад +2

    The thumbnail must have broken YouTube, because I'm getting this video recommended to me a year late

  • @jackjac
    @jackjac 3 года назад +9

    Really liked your personal little experiment at the end, instead of just talking about the news headline and leaving it there. GJ, as always ;)

    • @AtomicFrontier
      @AtomicFrontier  3 года назад +3

      Thanks! I wasn't originally intending on having it, but then found out there was a Python API and just had to give it a go!

  • @ultimategamer7465
    @ultimategamer7465 3 года назад +3

    Found this channel on the UWA site. Great work

  • @0xEARTH
    @0xEARTH Год назад

    Okay, but I want to say that you gave the simplest and yet most understandable breakdown of neural networking I've ever heard, and I am extremely pleased by that

  • @cs3705
    @cs3705 2 года назад

    Great video mate!

  • @markc7884
    @markc7884 3 года назад +4

    The audio is so, so much better in this video! Really great improvement.

  • @Jeanvit
    @Jeanvit 3 года назад +4

    Wow! Amazing video as always!
    A thing that I want to point out is that the probabilities shown in your experiment (12:00) decrease a lot when the adversarial patches are added.
    Google has for sure improved its AI; however, the patches are still making an impact on the classification.

  • @railfan_3371
    @railfan_3371 Год назад

    2:30 that's actually really neat, and probably explains why we can "visualize" things in our head, or how the most vivid hallucinations are visual ones

  • @mikethemaniacal
    @mikethemaniacal Год назад

    4:00 Speaking of international recognition of signs, I recognize that "today's fire danger level" sign in the background. Greetings from Utah, US. Keep up the good work.

  • @neonbunnies9596
    @neonbunnies9596 3 года назад +32

    5:58 Just gotta love the Kangaroo skiing in the bottom right corner

    • @feddy11100
      @feddy11100 2 года назад +1

      I wasn't completely sure that's what I saw until now.

    • @rushthezeppelin
      @rushthezeppelin 2 года назад +1

      Glad I'm not the only one that noticed lol. Just imagine being at a resort and a kangaroo comes flying off a side hit in the trees and just knocks you out cold in the middle of a run lol.

  • @TheFirstObserver
    @TheFirstObserver 3 года назад +4

    Fun video! A few corrections to keep in mind, though.
    1.) Neural networks used in vision and self-driving don't tend to use genetic algorithms (the evolving style you mentioned here). Not only that, but even if they did use genetic algorithms, they would use a NEAT-like algorithm, which starts with a sparse or empty network that slowly gains neurons through mutation. No, most computer vision uses tried-and-true backpropagation methods, like stochastic gradient descent (SGD), where the weights of the various neurons are corrected by comparing the output of the net to some target value and adjusting the weights based on that difference and a pre-determined (or adaptive) learning rate.
    2.) The issue of adversarial attacks isn't just a matter of network complexity. In fact, a paper a few months back even found that simpler networks tended to do better, because small incorrect regions had a lesser impact on decision making. It's sort of like displaying an image on different resolution screens, with a higher-res one able to pick out more details, but also more likely to notice errors. On the lower resolution screen, you can't tell the difference. Obviously, that comes with its own pitfalls, but the point is that adversarial attacks predominantly work against specific forms of vision, and often exploit specific shortcomings (such as interpreting the blob of colors as a toaster, because it hits all the same buttons as the toaster).
    3.) Most forms of self-driving vision (and controls) are different. Tesla uses a segmented neural network (with each segment helping identify specific items within the world) using a shared input, while Comma AI uses a more end-to-end design, and Waymo just uses Lidar and can only work within specific pre-mapped areas. While Tesla and Comma AI both use Neural Networks IIRC, different attacks would likely be required.
    4.) The best way to stop adversarial attacks is to feed the network enough data that its generalizations are....well, accurate generalizations. Give it noise, different perspectives, lighting, everything. Essentially train it to the point it's not using a short-cut interpretation, but rather a more robust, almost human-equivalent interpretation. As a black box, though, it's hard to know when enough is enough.
    Thankfully most self-driving projects still have redundancies. :P
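    On point 4, a small torchvision sketch of that kind of data augmentation (noise, viewpoints, lighting); "data/train" is a made-up placeholder path:

    import torch
    from torchvision import datasets, transforms

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),    # different crops and apparent distances
        transforms.RandomHorizontalFlip(),
        transforms.RandomPerspective(distortion_scale=0.3),     # different viewpoints
        transforms.ColorJitter(brightness=0.4, contrast=0.4),   # different lighting
        transforms.ToTensor(),
        transforms.Lambda(lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1)),  # sensor noise
    ])

    train_set = datasets.ImageFolder("data/train", transform=augment)   # placeholder dataset path
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)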

  • @debries1553
    @debries1553 2 года назад

    Machine learning usually uses backpropagation.
    The neural network will output a "confidence score" for each category.
    It will then check what happens when it changes its nodes just a tiny bit, and try to nudge the values in the "right" direction.
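    As a toy, plain-Python illustration of that "nudge" idea on a single weight (real networks get the slope from backpropagation rather than probing each weight like this):

    x, target = 2.0, 10.0          # one input and the output we want
    w = 0.5                        # the single "node" weight we will adjust
    lr, eps = 0.05, 1e-4           # learning rate and the size of the tiny nudge

    def error(weight):
        return (weight * x - target) ** 2

    for step in range(100):
        slope = (error(w + eps) - error(w - eps)) / (2 * eps)  # what happens if we change w a tiny bit?
        w -= lr * slope                                         # move w in the direction that reduces the error

    print(round(w, 3))             # ends up near 5.0, since 5.0 * 2.0 = 10.0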

  • @TheBigChoomah
    @TheBigChoomah 2 года назад

    I get "Technology Connections" meets "Tom Scott" vibes from your videos. Nice. I'll subscribe

  • @EverythingIsMacabre
    @EverythingIsMacabre 3 года назад +10

    I remember when my National Geographic Kids magazine in 2005 or so predicted we’d have self-driving cars perfected (as well as color-changing clothes that we can tell our mirror to switch), but I don’t think those writers understood how woefully complex AI could be back then...

    • @noatrope
      @noatrope Год назад +1

      Futurists have been predicting that strong AI is only twenty years away for almost a century. :P

  • @domib2896
    @domib2896 3 года назад +25

    Great video. Now get some more coffee and do your lit review / finish your thesis.

  • @melissahopper3660
    @melissahopper3660 2 года назад

    Totally lost me in the first few minutes, but I stayed to find out if you were going to get run over....
    Great job, excited to see what comes next from you!

    • @AtomicFrontier
      @AtomicFrontier  2 года назад

      Boats! Thursday 4pm GMT. Glad you're enjoying the videos, the rocket episode is particularly fun

  • @nicks6657
    @nicks6657 Год назад

    Hey @Atomic Frontier you're really fucking smart, that's wild that you can do all of the stuff you are explaining

  • @neonbunnies9596
    @neonbunnies9596 3 года назад +9

    1:35 Just gotta love the Swiss cheese building behind him

  • @Expertzero6Dingley
    @Expertzero6Dingley 3 года назад +6

    Ha loved the "Dingley road" easter egg. Great video!

  • @merlinjim
    @merlinjim 2 года назад

    @2:20 I work in the field of machine learning and computer vision, and I had never heard this explanation for humans' big brains before. Will totally be starting every public speaking opportunity with that explanation going forward.

  • @JJRicks
    @JJRicks 2 года назад +3

    As someone who rides in Google's self driving cars regularly (just as a hobby), I really love this overview and learned a lot! Thank you!

  • @twiddlebit
    @twiddlebit 3 года назад +15

    Great video, although I'm curious as to why you chose to use a genetic algorithm to train the network in your example. The typical training method is back-propagation, which works entirely differently. What was the reason for picking GA over backprop?

    • @Vincent89297
      @Vincent89297 3 года назад +2

      I was thinking the same thing, and was also surprised by there being no mention of deep learning in the context of image recognition. Also AFAIK the reason the adversarial patches did not work on the other nets is because adversarial images are tailored to a single neural net, not because engineers are constantly updating their nets to keep up with the latest batch of adversarial images. Both the vehicle and the example he made likely used both a different training algorithm and different data, which made the images not work on them.

    • @Vincent89297
      @Vincent89297 3 года назад +1

      I double checked the paper and apparently these attacks do generalize to an extent to unseen models, though it's not entirely clear from the paper under which circumstances they will/will not generalize well.

    • @Frank01985
      @Frank01985 3 года назад +2

      @@Vincent89297 The networks they would generalise to (if they do) would be the ones trying to detect the same type of objects. A self driving car is not going to be trained to recognize bananas, so wouldn't be fooled by an adversarial banana patch. Also: camera resolution, at 10m from the car, it is doubtful the resolution is good enough for a patch like that to work either way.

    • @Vincent89297
      @Vincent89297 3 года назад

      @@Frank01985 Right, I hadn't even considered that. Of course if a network does not have a toaster category then a toaster patch is going to do nothing...

  • @TheRockybulwinkle
    @TheRockybulwinkle 3 года назад +14

    8:34 What you're describing is a genetic algorithm, which, while it could be applied to neural networks, I don't think is that common? Usually it's gradient descent, i.e., for each weight taking the partial derivative to determine if the output would be slightly more or less accurate if the given weight increased or decreased.

    • @ArsenicDrone
      @ArsenicDrone 2 года назад

      Batch processing of several such operations in parallel, and then combining the results in some way (taking the best, weighted average, etc), can be thought of a little bit like a genetic algorithm, though.

  • @wolfrig2000
    @wolfrig2000 2 года назад

    I have no idea how self-driving cars work (I could figure it out, most likely), but some of the stuff you're talking about I have worked on in making a video game bot. I was training the bot to have computer vision: detect objects and shapes, pick them up if they were good, avoid them if they were bad, and learn from its successes!

  • @xanderlastname3281
    @xanderlastname3281 2 года назад

    Hmm.
    Once we get those nice hyperspeed quantum computers, plus some insanely detailed... laser distance sensor things, we could use the physical distances from every point on an object to render a 3D mesh, and base our prediction off of that