Adversarial Attacks on Neural Networks - Bug or Feature?

  • Published: 26 Jan 2025

Comments •

  • @int-64
    @int-64 5 лет назад +295

    What people think the war against AI looks like: humans killed by robots
    What it's actually like: humans attacking AI by changing pixels

    • @KuraIthys
      @KuraIthys 5 лет назад +52

      *is about to be attacked by a combat robot*
      *holds up a piece of cardboard with a strange pattern on it*
      AI: Tree detected. No target found.
      Yeah... XD

    • @odw32
      @odw32 5 лет назад +32

      American/Russian/Chinese/European AIs all trying to fool the other ones by screaming various kinds of noise at each other, then carefully trying to figure out whether the noise fooled the other AIs, or whether the other AIs are just pretending to be fooled as part of a counter-adversarial attack.

    • @enormousmaggot
      @enormousmaggot 5 лет назад +17

      Humans hiding from robots by drawing pixels on themselves, thus being classified as airplanes.

    • @abyteuser6297
      @abyteuser6297 5 лет назад +7

      @@enormousmaggot Pixels hiding from planes by drawing themselves on humans, thus being classified as robots

    • @vannoo67
      @vannoo67 5 лет назад +1

      @@enormousmaggot Would you also need to hold your arms out?

  • @atomscott425
    @atomscott425 5 лет назад +123

    I really wish more papers were on Distill, it's really amazing.

    • @MaxwellMcKinnon
      @MaxwellMcKinnon 3 года назад +1

      Can you elaborate? I'm both curious whether I missed something about distillation, and maybe I can offer some insight. I've studied and used distillation in an application before.

  • @StickerWyck
    @StickerWyck 5 лет назад +210

    Look, all I'm saying is that the bus did look a little bit like an ostrich.

    • @immortaldiscoveries3038
      @immortaldiscoveries3038 5 лет назад

      ikr, their algorithm is the problem....bus looks like a bus to me! No ostrich, BARELY, 0.001%

    • @ChrisD__
      @ChrisD__ 5 лет назад +23

      YES FELLOW HUMAN, THAT BUS WAS ALMOST CERTAINLY AN OSTRICH IN DISGUISE.

    • @nicolasfiore
      @nicolasfiore 5 лет назад +10

      why do you guys keep calling that ostrich "a bus"?

    • @larryng1
      @larryng1 5 лет назад

      @@nicolasfiore Agreed! I only saw two ostriches.

  • @barjuandavis
    @barjuandavis 5 лет назад +156

    Bug or feature?
    *YES.*

  • @moth.monster
    @moth.monster 5 лет назад +8

    *speck of dust lands on stop sign*
    AI: Yeah, I think that's a green light, go ahead

  • @odw32
    @odw32 5 лет назад +86

    The question is: Who uploads noisy cat videos to YouTube to trick the algorithm into recommending me a strange documentary about the history of toilets every few months?

    • @JM-us3fr
      @JM-us3fr 5 лет назад +13

      Orian de Wit Actually, this sounds like an ingenious attack on YouTube's algorithm

    • @vinayreddy8683
      @vinayreddy8683 5 лет назад

      This is what I'm thinking

    • @cube2fox
      @cube2fox 5 лет назад

      @@JM-us3fr Does YouTube even analyze videos? I thought analyzing the video title, description, and comments would be much simpler and accurate enough.

    • @circuit10
      @circuit10 4 года назад

      @@cube2fox I heard it flagged a video of robot dogs for animal abuse automatically

  • @ShubhamYadav-xr8tw
    @ShubhamYadav-xr8tw 5 лет назад +4

    In my opinion Distill needs more publicity, thanks for highlighting them!

  • @WilliamBoothClibborn
    @WilliamBoothClibborn 5 лет назад +8

    Keep going please! I need these updates to keep me in the loop of the research.

  • @MrMysticphantom
    @MrMysticphantom 5 лет назад +40

    Okay.. Did not know about distill....
    Great... There goes my free time

  • @isg9106
    @isg9106 5 лет назад +8

    That's really interesting, I've never heard of a discussion paper thread. I thoroughly enjoyed this, and hope to hear more about it!

  • @MobyMotion
    @MobyMotion 5 лет назад +1

    Károly, please keep making videos that interest you and your viewers - I don't care if it's lacking the visual "fireworks", this topic is important

  • @claxvii177th6
    @claxvii177th6 5 лет назад +4

    I LOVE ALL YOUR VIDEOS. No matter how flashy the articles you share are, they are consistently informative and they ALWAYS provide a good read. (Granted, I often don't read all of them, but that's on me XD)

  • @warmpianist
    @warmpianist 5 лет назад +6

    Every time I see this, I have a fun analogy: my key got a very small scratch. It won't open my house anymore, but it opened someone else's car instead!
    What happens if someone else's key with small scratches actually unlocks my house?! We should have the locking system fixed!

  • @noergelstein
    @noergelstein 5 лет назад +27

    If the adversarial features arise from the dataset and can be eliminated after being found, wouldn't it also be possible to do the reverse and poison a dataset with a sort of backdoor?

  • @Pfaeff
    @Pfaeff 5 лет назад +59

    One pixel in a 32x32 image is roughly the same relative area as a hundred pixels in a 224x224 image, though.

    • @Meg_A_Byte
      @Meg_A_Byte 5 лет назад +7

      100 pixels is only an area of 10x10 pixels, it's still nothing if you look where those pixels were added.

    • @StormBurnX
      @StormBurnX 5 лет назад +6

      @@Kipras.Skeirys the main difference is the fact that a 10x10 pixel chunk, which has the same relative area but is quite noticeable, could instead be replaced with 100 random pixels throughout the image, which would simply look like a very tiny bit of noise, if noticeable at all.

    • @therogerrogergang8517
      @therogerrogergang8517 5 лет назад

      So you only need to change 0.1% of an image to fool it

    • @UnitSe7en
      @UnitSe7en 5 лет назад +1

      @@happyfase Did you look at the examples? I'd posit that your assertion is 100% not true.
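
A quick sanity check on the arithmetic in this thread, in Python (the image sizes are the ones mentioned above): the changed fraction of the image differs by only about a factor of two between the two resolutions.

```python
# Fraction of pixels modified: 1 pixel in a 32x32 image vs. 100 pixels in a 224x224 image.
one_px_32x32 = 1 / (32 * 32)            # ~0.098% of the image
hundred_px_224x224 = 100 / (224 * 224)  # ~0.199% of the image
print(f"32x32,   1 pixel:    {one_px_32x32:.3%}")
print(f"224x224, 100 pixels: {hundred_px_224x224:.3%}")
```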

  • @kebakent
    @kebakent 5 лет назад +16

    I'm sure most social networks have an aggressive NSFW filter that provides fast feedback. It would be fun to see if it could be cheated using these methods.

    • @kebakent
      @kebakent 5 лет назад +3

      @Ahmed Nader CNNs tend to have some preprocessing, often starting out with some kind of cropping and scaling. Throwing different manipulated images at them might reveal their process. In any case, this sort of attack is not new, as previous work revealed how structured noise could change the classification. I believe some spam uses noise to cheat the filters, possibly for this reason.

  • @keco185
    @keco185 5 лет назад +8

    Adversarial attacks like this seem like a great way to train neural nets. It’s like a specialized version of a GAN

  • @NetherFX
    @NetherFX 5 лет назад +22

    AI takes over
    "Just change 1 pixel"

  • @ArnoSelhorst
    @ArnoSelhorst 5 лет назад +2

    You don't need "visual fireworks" to get me in here every time again. You do splendid work nonetheless. Keep it up! You are an intriguing source for new insights.

  • @Henrix1998
    @Henrix1998 5 лет назад +28

    How about just training with random noise added? That could get rid of the noise dependency

    • @warmpianist
      @warmpianist 5 лет назад +4

      Henrix98 It could work like a kind of data augmentation, but IMO I don't think we can cover all variations of noise.
      If we train with (img + noise 1) and (img + noise 2), we might not get the same result if we test with (img + noise 3), or even (img + noise 1 + noise 2).
      I would think of it like this: define new img = old img + noise 1. If the new img is trained on, we can find a carefully crafted noise 2 such that (new img + noise 2) produces a wrong result.
      And if we have N noises to try, we would need N times longer to train the model.

    • @buttonasas
      @buttonasas 5 лет назад +8

      This has been tried and it doesn't work. The network still tends to learn patterns instead of objects, so giving a wolf a sheep's fur will likely fool it anyway. It will take more than 1 pixel, though. No source, sorry!

    • @bukovelby
      @bukovelby 5 лет назад

      There is "style transfer augmentation", which I believe do the thing

  • @ronnetgrazer362
    @ronnetgrazer362 5 лет назад

    I shouldn't be drunk-commenting, prolly gonna regret this but... the passion, the sheer relentlessness with which this guy engages every single facet of the discipline... brings a tear to me eye. I'll shut up now. Don't do ethanol kids. Thanks Károly.

  • @roua.
    @roua. 5 лет назад +3

    I wonder if we could reduce the chance of a network getting tricked by these types of attacks by adding our own white noise on top of the image before feeding it into the network. I guess that might also reduce the overall accuracy of the network in some cases.

  • @MatiasPoggini
    @MatiasPoggini 5 лет назад +17

    Interesting take on peer reviewing and cross examining a paper. Do you (or any commentators) know if this happens in the humanities as well?

  • @enormousmaggot
    @enormousmaggot 5 лет назад +2

    LOVE this content. Your title is what made me view this particular one, actually.

  • @MichaelSHartman
    @MichaelSHartman 5 лет назад +1

    It answered some thoughts I had on the brittleness of image recognition. I was surprised that it is down to the level of one pixel at this stage of development.

  • @odw32
    @odw32 5 лет назад +7

    Content-wise: I love the mix you bring. Sometimes icecream for the eyes, sometimes icecream for the mind. I think it's also important to cover AI security, ethics, implications for society. My absolute favorite videos though are when you cover projects where I can download the Python code and put my graphics card to work 😁

    • @TwoMinutePapers
      @TwoMinutePapers  5 лет назад +3

      Thank you so much for the kind feedback Orian! Ice cream for the mind...damn, I wish I came up with this one. Mind if I use it? 🙂

    • @odw32
      @odw32 5 лет назад +3

      @@TwoMinutePapers Not at all! Human communication is the most beautiful neural net, ideas that work well should propagate freely 😄

    • @TwoMinutePapers
      @TwoMinutePapers  5 лет назад +1

      Noted, thank you!

  • @davidmartin1628
    @davidmartin1628 5 лет назад

    I would love to see more journals with a discussion section where other experts can publicly discuss research.
    There are so many unreplicable studies that make it into peer-reviewed journals and deserve to be scrutinized publicly, as flawed research papers waste other researchers' time when they try to use said research!

  • @funkybob7772
    @funkybob7772 5 лет назад +31

    Fool me once, shame on you. Fool me 100.000.000 times, shame on me ;)

  • @phillipotey9736
    @phillipotey9736 5 лет назад

    This idea the paper has about creating mini discussions is crazy awesome! I need to look more into it but it could solve a lot of replication issues

  • @mickmickymick6927
    @mickmickymick6927 5 лет назад +4

    Very nice pepper today, I should get your recipe some time.

  • @cube2fox
    @cube2fox 5 лет назад

    Google's reCAPTCHA apparently sometimes uses adversarial attacks on their images of cars, traffic lights etc. I noticed some very artificial looking noise on some of the images.

  • @JamesHUN
    @JamesHUN 5 лет назад +1

    Why would you call it noise when it is computed specifically to reach a goal, not just randomly drawn?

  • @0x0404
    @0x0404 5 лет назад

    That is interesting, and it shows how important a proper training set is, since the algorithm will go with whatever is the most consistent, even if that thing has nothing to do with the actual subject matter.

  • @maxinealexander9709
    @maxinealexander9709 5 лет назад +3

    Fascinating topic, as always. Keep up the good work!

  • @georhodiumgeo9827
    @georhodiumgeo9827 5 лет назад +4

    I am up voting this so hard hopefully it gets you some more views.

  • @leafhappy
    @leafhappy 5 лет назад

    More discussions, rebuttals, and replicability of science!

  • @Flowtail
    @Flowtail 5 лет назад

    God this channel is so pure

  • @williamrichards5241
    @williamrichards5241 5 лет назад

    This paper style is worth exploring more.

  • @nonameplsno8828
    @nonameplsno8828 5 лет назад

    Wasn't there a paper about how adversarial neural networks encode information in the noise so that they could cheat? Something about satellite images to maps? Because it looks like that's what got modified in the noise attack.

  • @jaydeepvipradas8606
    @jaydeepvipradas8606 5 лет назад

    This problem might be fixed by varying the pixel size of an image. Merging, say, 3 by 3 pixels into 1 pixel for the entire image can help the neural network classify correctly. Or 4 by 4 pixels into 1 pixel. Usually the things we want to classify in an image are bigger than 8 by 8 pixels.
    Multiple training sets will have to be created: the original image, an image where each pixel is 3 by 3 of the original, and another image where each pixel is, say, 5 by 5 of the original.

    • @Guztav1337
      @Guztav1337 5 лет назад

      I feel like if it was that easy, the researchers would have already done that

    • @MegaKakaruto
      @MegaKakaruto 5 лет назад +1

      Isn't this the main idea of CNNs?

    • @jaydeepvipradas8606
      @jaydeepvipradas8606 5 лет назад +1

      @@MegaKakaruto CNNs look for hierarchical patterns, maybe like a door knob pattern inside a door pattern.
      Here it's more like pre-processing the data so as to create a better training set.
      Before the data goes to the neural network, it's a human-eye-like zoom-out for better visualisation. The downside is that after training, for run-time usage of the network, an image will again have to be translated into 3 images for pattern matching.
      Also, some noise removal techniques could help here.
      Or, train multiple networks on the same data, where each network uses a different approach, e.g. one network for edge-detected shapes, one CNN-like network, etc. Then combine the output from each network to decide the final conclusion.

    • @MegaKakaruto
      @MegaKakaruto 5 лет назад

      @@jaydeepvipradas8606 wow, thanks for detailed answers! There's so much stuff I need to learn more.
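
A rough sketch of the idea proposed in this thread, assuming PyTorch (the function name and block size are illustrative): average each small block of pixels before classification so that a single changed pixel gets diluted. Input-smoothing defences along these lines have been studied, but attacks can often be re-optimized against the smoothed pipeline, so this is best read as an illustration rather than a fix.

```python
import torch
import torch.nn.functional as F

def block_average(x, k=3):
    # x: (N, C, H, W) batch of images in [0, 1]; average every k-by-k block,
    # then resize back so the classifier still sees the original resolution.
    pooled = F.avg_pool2d(x, kernel_size=k, stride=k)
    return F.interpolate(pooled, size=x.shape[-2:], mode="nearest")

# Usage sketch: logits = model(block_average(images, k=3))
```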

  • @FrazerKirkman
    @FrazerKirkman 5 лет назад +1

    Discussion articles are a great idea.

  • @kalebbruwer
    @kalebbruwer 5 лет назад

    What if you have two independently trained classifiers (identical except for their initial state before training)? How hard would it be to fool both with the same alteration?

  • @TheSolarScience
    @TheSolarScience 5 лет назад

    Could one apply a textured/"pixelated" "makeup" to avoid facial recognition?

  • @AsmageddonPrince
    @AsmageddonPrince 5 лет назад

    If you think about it, a big part of human cognition is those exact non-robust features. All our cognitive and memory biases and a good chunk of our behavior are basically quick hacks our brains have that get in the way of properly abstract reasoning.

  • @jlnrdeep
    @jlnrdeep 5 лет назад

    This is a modern and awesome way to enhance conversation about a topic. Nice 👌.

  • @kfftfuftur
    @kfftfuftur 5 лет назад

    But if you have two independent networks that are trained to classify images, would they fall for the same wrong pixel, or would you need to fool them independently? If so, can you come up with a noise pattern that fools both networks?
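
This question (and the similar one a few comments earlier) describes a transferability experiment. Here is a minimal sketch of how one might run it, assuming PyTorch and two independently trained classifiers model_a and model_b; empirically, adversarial examples crafted against one network often transfer to another, though with a lower success rate than against the source model.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Single-step perturbation crafted against one specific model.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def transfer_rates(model_a, model_b, x, y, eps=8 / 255):
    x_adv = fgsm(model_a, x, y, eps)  # uses model A's gradients only
    with torch.no_grad():
        fooled_a = (model_a(x_adv).argmax(dim=1) != y).float().mean().item()
        fooled_b = (model_b(x_adv).argmax(dim=1) != y).float().mean().item()
    return fooled_a, fooled_b  # fraction of the batch each model now misclassifies
```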

  • @ophello
    @ophello 5 лет назад +8

    Then these networks are NOT “seeing” at all. We need to make a system that cannot be fooled this way.

    • @ophello
      @ophello 5 лет назад

      Hopi Ng a system that can be fooled this way is not seeing like we see. Your analogy doesn’t make sense. Even without color, we can still accurately identify objects in a photo without being tricked by one pixel being changed.

    • @eduardoachach4099
      @eduardoachach4099 5 лет назад +1

      ​@@ophello I would put a big asterisk on that "accurately". I mean was the dress white and gold or black and purple? And how about all the optical illusions out there. We may not be fooled in the same way, but our perception can easily be tricked as well.
      If you've been following this channel there was a paper showcased in which they even applied the same noise technique targeted to humans and ai: /watch?v=AbxPbfODGcs

    • @ophello
      @ophello 5 лет назад

      Eduardo Achach dude, those examples are so completely far away from this system that it's laughable. You can't trick a human into seeing something completely different by changing a tiny part of the image. That's not how we see. We see by generalizing the whole image. You can't trick the eye into seeing a photo of a cat when it's actually a dog by changing the color of one pixel of the image. Get it? Finally??

  • @skr_8489
    @skr_8489 5 лет назад

    Károly, how do these noise patterns perform if the image is greyscale and pre-processed to make better contrast between lines and surfaces?
    I noticed that in all these examples the neural networks work on color images. But human perception has a split between color and shape.

  • @vd.se.17
    @vd.se.17 4 года назад

    Thanks for this. I need your help: where can I find all the machine learning papers from the last 3 years? Please reply. Thank you.

  • @SymEof
    @SymEof 5 лет назад +1

    It's definitely a more interesting format even though the normal format is great in other regards.

  • @Teluri
    @Teluri 5 лет назад

    1. Sooo, could this be used in a similar way to CAPTCHA? (stopping advanced bots from spamming and stuff)
    2. What about an AI with the goal of fooling another generic image recognition AI while making the fewest changes possible?

  • @kylebowles9820
    @kylebowles9820 5 лет назад

    This one is very interesting! Could they have naively corrupted the dataset with salt-and-pepper noise to address that weakness? That'd probably be inefficient with training resources and only move the goalposts slightly.

  • @oddsandexabytes
    @oddsandexabytes 5 лет назад

    Thanks! You always give me something interesting to think about

  • @cipherxen2
    @cipherxen2 5 лет назад

    Can it be termed "lacuna"?

  • @skyacaniadev2229
    @skyacaniadev2229 5 лет назад

    Just add a new kernel that decides which pixel will be chosen for pooling instead of pooling directly. The CNNs before were not designed to prevent this trick; if they want, they can easily come up with some mechanism to counter this attack...

  • @donotlike4anonymus594
    @donotlike4anonymus594 5 лет назад

    While I understand how it works, it still feels amazing... the point we've reached with AI... and how easy it is to manipulate...

  • @googacct
    @googacct 5 лет назад

    I wonder what argument GPT-2 could come up with for whether it's a feature or a bug.

  • @SuperVfxpro
    @SuperVfxpro 5 лет назад

    Could this overlay be used to get past someone's facial recognition security?

    • @SuperVfxpro
      @SuperVfxpro 5 лет назад

      identifying as someone else

  • @kebakent
    @kebakent 5 лет назад +6

    Well, if it's somehow recognizing DNA, that dog is probably 99.9% cat.

  • @Flowtail
    @Flowtail 5 лет назад

    Dude, you would've gotten way more views on this if you had made the title something like “One Weird Pixel Makes This AI Think Everything is an Ostrich”

  • @johnniefujita
    @johnniefujita 5 лет назад

    excellent man!! keep up the good work!

  • @badhombre4942
    @badhombre4942 5 лет назад

    More interesting would be to learn why the AI thinks a horse with a hole is a bus.

  • @terner1234
    @terner1234 5 лет назад +1

    this "one pixel attack" isn't fair, because those pictures are very low res

  • @shagster1970
    @shagster1970 5 лет назад

    It makes you wonder how we can identify it though.

  • @paulgarcia2887
    @paulgarcia2887 5 лет назад +2

    1 pixel: I'm about to end this whole neural network's career

  • @Alekosssvr
    @Alekosssvr 5 лет назад

    Excellent overview!

  • @robertweekes5783
    @robertweekes5783 5 лет назад +1

    It sounds like some AIs took major shortcuts with image classification

  • @hazzard77
    @hazzard77 5 лет назад

    Has anyone taken a Monte Carlo approach to machine learning sample inputs?

  • @larryng1
    @larryng1 5 лет назад

    wow, great discussion!

  • @bernardvantonder7291
    @bernardvantonder7291 5 лет назад

    Another awesome episode!

  • @souravjha2146
    @souravjha2146 4 года назад +1

    Is it a bug or a feature of ML?

  • @nopethisisnotreal1434
    @nopethisisnotreal1434 5 лет назад

    Make more of this!

  • @JoaoVitor-mf8iq
    @JoaoVitor-mf8iq 5 лет назад

    Sometimes a paper is not the best way to pass on knowledge; the structure is very important. It's pretty bad to create something good and not have visualizations, or to create something not that great and be famous. Most machine learning papers should have a link to GitHub or something like that, for example.

  • @Wecoc1
    @Wecoc1 5 лет назад +5

    1:20 Wait... Always an ostrich? When it has no idea what it could be it simply goes "Must be an ostrich"? I love that AI 😂

    • @Villfuk02
      @Villfuk02 5 лет назад

      No, it was specifically tricked into thinking it was an ostrich

    • @npip99
      @npip99 5 лет назад +1

      The idea behind the adversarial attack is that you write an algorithm that, given a neural net and a photo of a bus, can manipulate the photo only slightly to trick the neural net into thinking it's an ostrich. They specifically forced it to be an ostrich. They could have forced it to be a car, because they're cherry-picking exactly the pixel manipulations needed to trick it. If you change a random pixel of a bus, it'll almost always still be a bus.
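
A minimal sketch of the kind of targeted attack described in this reply, assuming PyTorch, a pretrained classifier called model, and a target_class index (e.g. whichever index "ostrich" has in the model's label set). This is a generic iterative gradient attack shown for illustration, not the exact procedure from the papers in the video.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_class, step=2 / 255, budget=8 / 255, steps=10):
    # image: (1, C, H, W) tensor with values in [0, 1].
    target = torch.tensor([target_class])
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), target)
        grad = torch.autograd.grad(loss, x)[0]
        # Step *down* the loss for the target class...
        x = (x - step * grad.sign()).detach()
        # ...while never straying more than `budget` from the original pixels.
        x = (image + (x - image).clamp(-budget, budget)).clamp(0.0, 1.0)
    return x
```

If the attack succeeds, the model's top prediction for the returned image is the target class, even though the change is small enough that a human still sees the original object.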

  • @gorgolyt
    @gorgolyt 5 лет назад

    I prefer interesting conceptual videos like these over "visual fireworks" videos and I'd be very happy if the channel shifted its balance a bit more in this direction... anyone else agree?

  • @linusklocker2890
    @linusklocker2890 5 лет назад

    I always thought you were saying in the intro: "Dear Fellow Scholars, this is Two Minute Papers with <name> *here*." But it is actually "... this is Two Minute Papers with Károly Zsolnai-Fehér!" Thanks to the commentary.

  • @werty7098
    @werty7098 5 лет назад

    This work is brilliant

  • @VictorCaldo
    @VictorCaldo 5 лет назад

    Amazing, thank you.

  • @NICK....
    @NICK.... 5 лет назад

    Aren't all neural networks technically bugs?
    Bug: _An error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or *unexpected result*, or to behave in unintended ways._

  • @Mader_Levap
    @Mader_Levap 4 года назад

    One pixel for an extremely low-res image? Am I supposed to be impressed by that?

  • @sermuns
    @sermuns 5 лет назад +3

    This is how we will fight the AI Revolution

  • @AtulLonkar
    @AtulLonkar 5 лет назад +1

    One pixel attack! Sounds like good news :-)

  • @frankx8739
    @frankx8739 5 лет назад +5

    The man who thought his wife was a hat.

    • @Zetakyun
      @Zetakyun 5 лет назад

      I don't think a lot of people know what Frank is referencing, so I'll link it here. Super interesting stuff. It's about neurological disabilities and illnesses, leading up to one person who mistook his wife for a hat.
      en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat

  • @Serthys1
    @Serthys1 5 лет назад +1

    What about making a Two Minute Papers podcast out of the ones that don't have visual fireworks?

  • @FiguraCinque
    @FiguraCinque 5 лет назад +1

    ty sir

  • @o1ecypher
    @o1ecypher 5 лет назад +1

    YOU HAVE CREATED A PARADOX IN THIS VIDEO

  • @adammartin6347
    @adammartin6347 5 лет назад

    there’s a puppy pic on the thumbnail - obviously it’s gonna go viral

  • @marverickbin
    @marverickbin 5 лет назад

    I am still in the single pixel camera.

  • @jeffGordon852
    @jeffGordon852 4 года назад

    So you're basically saying that I can wear a cloak in the future robot war so they think I'm a friend? Cool.

  • @andybaldman
    @andybaldman 5 лет назад +1

    It's a frog. You can tell by the pixel.

  • @ellenajt1027
    @ellenajt1027 4 года назад

    This gives a pretty convincing explanation of why one pixel attacks are (perhaps) not too surprising: arxiv.org/abs/1901.10861

  • @clapton79
    @clapton79 5 лет назад +2

    Wow, this is serious... I would not say it is a bug, but it is definitely something that scientists want to fix to avoid serious vulnerabilities.

  • @Flowtail
    @Flowtail 5 лет назад

    Holy shit, I've seen that at 2:44! There's a great website called Explorable Explanations that may be of interest to you.

  • @kyleanderson1613
    @kyleanderson1613 5 лет назад

    This is interesting stuff.

  • @Flowtail
    @Flowtail 5 лет назад

    !?!? I feel feelings of joy???

  • @eleos5
    @eleos5 5 лет назад

    Std training is very effective

  • @RubixB0y
    @RubixB0y 5 лет назад

    Honestly, I can see why they were classified as ostriches. I saw the ostrich in the bus picture all the way in the left column at 1:09

  • @darksidegirl
    @darksidegirl 5 лет назад

    I'm a patron. Join, guys!

  • @udendranmudaliyar4458
    @udendranmudaliyar4458 5 лет назад +1

    Why can't we encrypt these deep learning classifiers in such a way that the pixels cannot be distorted?

    • @drdca8263
      @drdca8263 5 лет назад +3

      Can you elaborate? I don’t know what you mean.

    • @DanyIsDeadChannel313
      @DanyIsDeadChannel313 5 лет назад +1

      Basically use blockchain encryption to avoid these attacks.
      :) I'm a tech savvy guy

    • @KuraIthys
      @KuraIthys 5 лет назад +3

      Because you're not attacking the structure of the neural net.
      you're attacking the 'input' of the neural net, which by definition can be anything.
      Encrypting the input you provide to the neural net would do... Nothing?
      Well, at best it would result in the AI being incapable of recognising the image as anything at all.
      Because encryption without a decryption phase is equivalent to feeding semi-random noise into a system.
      Unless I'm missing something about your intentions here, encryption won't do anything because it bears no relation to the problem at hand.
      It's like saying the best way to deal with dropping your coffee is to put the cup inside a safe before you drink it.
      Doesn't make much sense.

    • @udendranmudaliyar4458
      @udendranmudaliyar4458 5 лет назад

      Generally we need to decrypt the data before we feed it into a neural network, so why can't we develop a deep learning classifier which can work over encrypted data? For instance, there are research papers about deep learning classifiers which can be employed in steganography.

    • @drdca8263
      @drdca8263 5 лет назад

      @@udendranmudaliyar4458 Are you talking about, like, fully homomorphic encryption? Supposing that were made fast enough to be practical, I don't see why it would help address adversarial examples. Adversarial examples, IIRC, often generalize somewhat to multiple networks trained for roughly the same task (though they don't fool the other networks *quite* as well).
      If one wanted to fool a network where one doesn't have access to the internals, one could do that.
      And I don't see why having it take in encrypted input is of any use, except possibly for the purpose of privacy and such.

  • @cyberlord64
    @cyberlord64 5 лет назад +1

    We can see the single pixel attack in the like-dislike bar. The dislike portion is a single pixel...