What people think the war against AI will be like: humans killed by robots
What it's actually like: humans attacking AI by changing pixels
*is about to be attacked by a combat robot*
*holds up a piece of cardboard with a strange pattern on it*
AI: "Tree detected. No target found."
Yeah... XD
American/Russian/Chinese/European AIs all trying to fool the other ones by screaming various kinds of noise at each other, then carefully trying to figure out whether the noise fooled the other AIs, or whether the other AIs are just pretending to be fooled as part of a counter-adversarial attack.
Humans hiding from robots by drawing pixels on themselves, thus being classified as airplanes.
@@enormousmaggot Pixels hiding from planes by drawing themselves on humans, thus being classified as robots
@@enormousmaggot Would you also need to hold your arms out?
I really wish more papers were on Distill; it's really amazing.
Can you elaborate? I'm curious whether I missed something about distillation, and maybe I can offer some insight; I've studied and used distillation in an application before.
Look, all I'm saying is that the bus did look a little bit like an ostrich.
ikr, their algorithm is the problem....bus looks like a bus to me! No ostrich, BARELY, 0.001%
YES FELLOW HUMAN, THAT BUS WAS ALMOST CERTAINLY AN OSTRICH IN DISGUISE.
why do you guys keep calling that ostrich "a bus"?
@@nicolasfiore Agreed! I only saw two ostriches.
Bug or feature?
*YES.*
I see you are a man of logic as well.
Bethesda in a nutshell
Quantum logic unlocked
Maybe
No, marcel davis
*speck of dust lands on stop sign*
AI: Yeah, I think that's a green light, go ahead
The question is: Who uploads noisy cat videos to YouTube to trick the algorithm into recommending me a strange documentary about the history of toilets every few months?
Orian de Wit Actually, this sounds like an ingenious attack on YouTube's algorithm
This is what I'm thinking
@@JM-us3fr Does YouTube even analyze videos? I thought analyzing the video title, description, and comments would be much simpler and accurate enough.
@@cube2fox I heard it flagged a video of robot dogs for animal abuse automatically
In my opinion Distill needs more publicity, thanks for highlighting them!
Keep going please! I need these updates to keep me in the loop of the research.
Okay.. Did not know about distill....
Great... There goes my free time
That's really interesting, I've never heard of a discussion paper thread. I thoroughly enjoyed this, and hope to hear more about it!
Karoly please keep making videos that interest you and your viewers - I don’t care if it’s lacking the visual “fireworks”, this topic is important
I LOVE ALL YOUR VIDEOS. No matter how flashy the articles you share are, they are consistently informative and they ALWAYS provide a good read. (Granted, I often don't read the whole of them, but that's on me XD)
You are very kind, thank you so much! :)
Every time I see this, I have my fun analogy: my key has got a very small scratch. It won't work with my house anymore, but worked on someone else's car instead!
What happens if someone else's key with small scratches actually unlocks my house?! We should have the unlocking system fixed!
If the adversarial features arise from the dataset and can be eliminated after being found, wouldn't it also be possible to do the reverse and poison a dataset with a sort of backdoor?
One pixel in a 32x32 image is roughly the same relative area as a hundred pixels in a 224x224 image, though.
100 pixels is only an area of 10x10 pixels, it's still nothing if you look where those pixels were added.
@@Kipras.Skeirys the main difference is the fact that a 10x10 pixel chunk, which has the same relative area but is quite noticeable, could instead be replaced with 100 random pixels throughout the image, which would simply look like a very tiny bit of noise, if noticeable at all.
So you only need to change 0.1% of an image to fool it
@@happyfase Did you look at the examples? I'd posit that your assertion is 100% not true.
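For reference, the actual fractions behind the comparison in this thread (using the 32x32 and 224x224 resolutions mentioned above):

```latex
\frac{1}{32 \times 32} = \frac{1}{1024} \approx 0.098\%
\qquad\text{vs.}\qquad
\frac{100}{224 \times 224} = \frac{100}{50176} \approx 0.199\%
```

So a hundred pixels in the larger image cover roughly twice the relative area of one pixel in the smaller one; the same order of magnitude, which is presumably what the comparison meant.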
I'm sure most social networks have an aggressive NSFW filter that provides fast feedback. It would be fun to see if it could be cheated using these methods.
@Ahmed Nader CNNs tend to have some preprocessing, often starting out with some kind of cropping and scaling. Throwing different manipulated images at it might reveal their process. In any case, this sort of attack is not new, as previous work revealed how structured noise could change the classification. I believe some spam uses noise to cheat the filters, possibly for this reason.
Adversarial attacks like this seem like a great way to train neural nets. It’s like a specialized version of a GAN
AI takes over
"Just change 1 pixel"
You don't need "visual fireworks" to get me back here every time. You do splendid work nonetheless. Keep it up! You are an intriguing source of new insights.
How about just training with random noise added? That could get rid of the noise dependency
Henrix98 It could work similarly to "data augmentation", but imo I don't think we can cover all variations of noise.
If we train with (img + noise 1) and (img + noise 2), we might not get the same result if we test with (img + noise 3), or even (img + noise 1 + noise 2).
I would think of it like this: define new img = old img + noise 1. If the new img is used for training, we can find a carefully crafted noise 2 such that (new img + noise 2) generates a wrong result.
And if we have N noises to try, we would require N times more time to train the model (see the small augmentation sketch below).
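For anyone who wants to try it, here is a minimal NumPy sketch of the random-noise augmentation discussed in this thread; the noise level and number of copies are made-up illustrative values, not anything from the paper:

```python
import numpy as np

def noisy_copies(images, sigma=0.05, copies=2, rng=None):
    """Return the original batch plus `copies` noisy versions of every image.

    images: float array in [0, 1], shape (N, H, W, C).
    sigma:  standard deviation of the added Gaussian noise (illustrative).
    """
    rng = np.random.default_rng() if rng is None else rng
    augmented = [images]
    for _ in range(copies):
        noise = rng.normal(0.0, sigma, size=images.shape)
        augmented.append(np.clip(images + noise, 0.0, 1.0))
    return np.concatenate(augmented, axis=0)

# Toy usage: an 8-image batch becomes 24 training images (original + 2 noisy copies).
batch = np.random.rand(8, 32, 32, 3)
print(noisy_copies(batch).shape)  # (24, 32, 32, 3)
```

As the comment above points out, this multiplies the effective training set by the number of noise draws, and an attacker can still search for a perturbation the augmentation never covered.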
This has been tried and it doesn't work. The network still tends to learn patterns instead of objects, so giving a wolf a sheep's fur will likely fool it anyway. It will be harder than 1 pixel, though. No source, sorry!
There is "style transfer augmentation", which I believe do the thing
I shouldn't be drunk-commenting, prolly gonna regret this but... the passion, the sheer relentlessness with which this guy engages every single facet of the discipline... brings a tear to me eye. I'll shut up now. Don't do ethanol kids. Thanks Károly.
🙏
I wonder if we could reduce the chance of a network getting tricked by these types of attacks by adding our own white noise on top of the image before feeding it into the network. I guess that might also reduce the overall accuracy of the network in some cases
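A related idea that exists in the literature (often called randomized smoothing) is to add the noise at test time and average predictions over many noisy copies of the input. A rough sketch below; `classify` is a toy stand-in for a real model's forward pass, and the sigma and sample count are illustrative:

```python
import numpy as np

def classify(image):
    """Stand-in for a real classifier: returns a probability vector over 10 classes
    computed from trivial pixel statistics (purely for illustration)."""
    logits = np.array([image[..., i % 3].mean() * (i + 1) for i in range(10)])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def smoothed_prediction(image, sigma=0.1, samples=32, rng=None):
    """Average class probabilities over `samples` noisy copies of the input."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.zeros(10)
    for _ in range(samples):
        noisy = np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)
        probs += classify(noisy)
    return probs / samples

image = np.random.rand(32, 32, 3)
print(smoothed_prediction(image).argmax())
```

And yes, as the comment guesses, this kind of smoothing usually trades away some clean accuracy.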
Interesting take on peer reviewing and cross examining a paper. Do you (or any commentators) know if this happens in the humanities as well?
LOVE this content. Your title is what made me view this particular one, actually.
It answered some thoughts I had on the brittleness of image recognition. I was surprised that a single pixel can still do this at this stage of development
Content-wise: I love the mix you bring. Sometimes ice cream for the eyes, sometimes ice cream for the mind. I think it's also important to cover AI security, ethics, and implications for society. My absolute favorite videos though are when you cover projects where I can download the Python code and put my graphics card to work 😁
Thank you so much for the kind feedback Orian! Ice cream for the mind...damn, I wish I came up with this one. Mind if I use it? 🙂
@@TwoMinutePapers Not at all! Human communication is the most beautiful neural net, ideas that work well should propagate freely 😄
Noted, thank you!
I would love to see more journals with a discussion section where other experts can publicly discuss research.
There are so many non-replicable studies that make it into peer-reviewed journals and deserve to be scrutinized publicly, since flawed research papers waste other researchers' time when they try to build on said research!
Fool me once, shame on you. Fool me 100.000.000 times, shame on me ;)
This idea the paper has about creating mini discussions is crazy awesome! I need to look more into it but it could solve a lot of replication issues
Very nice pepper today, I should get your recipe some time.
Google's reCAPTCHA apparently sometimes uses adversarial attacks on their images of cars, traffic lights etc. I noticed some very artificial looking noise on some of the images.
Why would you call it noise when it's computed specifically to reach a goal, not just randomly drawn?
That is interesting, and it shows how important a proper training set is, since the algorithm will go with whatever is most consistent, even if that thing has nothing to do with the actual material.
Fascinating topic, as always. Keep up the good work!
I am up voting this so hard hopefully it gets you some more views.
More discussions, rebuttals, and replicability of science!
God this channel is so pure
This paper style is worth exploring more.
Wasn't there a paper about how adversarial neural networks encode information in the noise so that they could cheat? Something about satellite images to maps? Because it looks like that got modified in the noise attack.
This problem might be fixed by varying the pixel size in an image. Arranging, say, 3 by 3 pixels into 1 pixel for the entire image can help the neural network classify correctly. Or 4 by 4 pixels into 1 pixel. Usually the things we want to classify in an image are bigger than 8 by 8 pixels.
Multiple training sets would have to be created: the original image, an image where each pixel is 3 by 3 of the original, and another image where each pixel is, say, 5 by 5 of the original.
I feel like if it was that easy, the researchers would have already done that
Isn't this the main idea of a CNN?
@@MegaKakaruto A CNN looks for hierarchical patterns, maybe like a door knob pattern inside a door pattern.
Here, it's more about pre-processing the data so as to create a better training set (see the block-averaging sketch further down).
Before the data goes to the neural network, it's like a human-eye zoom-out for better visualisation. The downside is that, after training, for run-time usage of the network, an image again has to be translated into 3 images for pattern matching.
Also, some noise removal techniques can help here.
Or, train multiple networks on the same data, where each network uses a different approach, e.g. one network for edge-detected shapes, one CNN-like network, etc. Then combine the outputs from each network to decide the final conclusion.
@@jaydeepvipradas8606 wow, thanks for detailed answers! There's so much stuff I need to learn more.
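If anyone wants to experiment with the block-averaging idea from this thread, here is a minimal NumPy sketch that collapses each k x k block into one pixel; it only illustrates the preprocessing step being described, not a claim that it defeats the attack:

```python
import numpy as np

def block_average(image, k):
    """Downsample an (H, W, C) image by averaging every k x k block into one pixel.
    Height and width are cropped to multiples of k for simplicity."""
    h, w, c = image.shape
    h, w = (h // k) * k, (w // k) * k
    blocks = image[:h, :w].reshape(h // k, k, w // k, k, c)
    return blocks.mean(axis=(1, 3))

image = np.random.rand(224, 224, 3)
print(block_average(image, 3).shape)  # (74, 74, 3); a single hostile pixel gets diluted 9x
```

A single flipped pixel is diluted by a factor of k squared, though in principle an attack could be re-optimized against the downsampled pipeline.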
Discussion articles are a great idea.
What if you have two independently trained classifiers (identical except for their initial state before training)? How hard would it be to fool both with the same alteration?
Could one apply a textured/"pixelated" "makeup" to avoid facial recognition?
If you think about it, a big part of human cognition is those exact non-robust features. All our cognitive and memory biases and a good chunk of our behavior are basically quick hacks our brains have that get in the way of properly abstract reasoning.
This is a modern and awesome way to enhance conversation about a topic. Nice 👌.
But if you have two independent networks that are trained to classify images, would they fall for the same wrong pixel, or would you need to fool them independently? If so, can you come up with a noise pattern that fools both networks?
Then these networks are NOT “seeing” at all. We need to make a system that cannot be fooled this way.
Hopi Ng a system that can be fooled this way is not seeing like we see. Your analogy doesn’t make sense. Even without color, we can still accurately identify objects in a photo without being tricked by one pixel being changed.
@@ophello I would put a big asterisk on that "accurately". I mean, was the dress white and gold or blue and black? And how about all the optical illusions out there. We may not be fooled in the same way, but our perception can easily be tricked as well.
If you've been following this channel there was a paper showcased in which they even applied the same noise technique targeted to humans and ai: /watch?v=AbxPbfODGcs
Eduardo Achach dude, those examples are so completely far away from this system that it’s laughable. You can’t trick a human to see something completely different by changing a tiny part of the image. That’s not how we see. We see by generalizing the whole image. You can’t trick the eye into seeing a photo of a cat when it’s actually a dog, by changing the color of one pixel of the image. Get it? Finally??
Károly, how do these noise patterns perform if the image is greyscale and pre-processed to make better contrast between lines and surfaces?
I noticed that in all these examples the neural networks work on color images. But human perception has a split between color and shape.
Thanks for this. I need your help: where can I find all the machine learning papers from the last 3 years? Please reply. Thank you.
It's definitely a more interesting format even though the normal format is great in other regards.
1. Sooo, could this be used in a similar way to a CAPTCHA? (stopping advanced bots from spamming and stuff)
2. What about an AI with the goal of fooling another generic image recognition AI while making the fewest changes possible?
This one is very interesting! Could they have naively corrupted the dataset with salt-and-pepper noise to address that weakness? That'd probably be more inefficient on training resources and only move the goalposts slightly
Thanks! You always give me something interesting to think about
Can it be termed "lacuna"?
Just add a new kernel that decides which pixel will be chosen for pooling instead of pooling directly. The CNNs before were not designed to prevent this trick; if they want, they can easily come up with some mechanism to defend against this attack...
While I understand how it works, it still feels amazing... the point we've reached with AI... and how easy it is to manipulate...
I wonder what argument GPT-2 could come up with for whether it's a feature or a bug?
Could this overlay be used to get into someone's facial recognition security?
identifying as someone else
Well, if it's somehow recognizing DNA, that dog is probably 99.9% cat.
Dude, you would've gotten way more views on this if you had made the title something like "One Weird Pixel Makes This AI Think Everything is an Ostrich"
excellent man!! keep up the good work!
More interesting would be to learn why the AI thinks a horse with a hole is a bus.
this "one pixel attack" isn't fair, because those pictures are very low res
It makes you wonder how we can identify it though.
1 pixel: I'm about to end this whole neural network's career
Excellent overview!
It sounds like some AIs took major shortcuts with image classification
has anyone taken a monte carlo approach to machine learning sample inputs?
wow, great discussion!
Another awesome episode!
is it a bug or feature of ML..????
Make more of this!
Sometimes a paper is not the best way to pass on our knowledge; the structure is very important. It's pretty bad to create something good and not have visualizations, or to create something not that great and become famous. Most machine learning papers should have a link to GitHub or something like that, for example.
1:20 Wait... Always an ostrich? When it has no idea what it could be it simply goes "Must be an ostrich"? I love that AI 😂
no, it was specifically tricked to think it was an ostrich
The idea behind the adversarial attack is you specifically write an algorithm that given a neural net and a photo of a bus, it can manipulate the photo only slightly to trick the neural net into thinking it's an ostrich. They specifically forced it to be ostrich. They could've forced it to be a car, because they're cherry picking exactly the pixel manipulations needed to trick it. If you change a random pixel of a bus, it'll almost always still be a bus.
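To make the "cherry-picking exactly the pixel manipulations" part concrete, here is a minimal targeted, gradient-based attack sketch in PyTorch. Note this is a plain FGSM-style step against a toy untrained model, not the differential-evolution search the one-pixel paper itself uses; the model, epsilon, and target class are all placeholders:

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained classifier (10 classes, 32x32 RGB input).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for the photo of the "bus"
target = torch.tensor([9])         # the class we want to force, e.g. "ostrich"
epsilon = 0.02                     # max per-pixel change (illustrative)

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), target)
loss.backward()

# Step *down* the target-class loss: nudge every pixel slightly toward the target label.
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", model(image).argmax(1).item(),
          "after:", model(adversarial).argmax(1).item())
```

With a real trained network you would iterate this step (or, as in the paper, restrict the search to a single pixel), but the principle is the same: the perturbation is optimized against the specific model, which is why a random pixel change almost never works.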
I prefer interesting conceptual videos like these over "visual fireworks" videos and I'd be very happy if the channel shifted its balance a bit more in this direction... anyone else agree?
Noted - thank you so much for the feedback!
Always thought you were saying in the intro: "Dear Fellow Scholars, this is Two Minute Papers with 'name' here." But it is... "this is Two Minute Papers with Károly Zsolnai-Fehér!" Thanks to the comments for clearing that up.
This work is brilliant
Amazing, thank you.
Aren't all neural networks technically bugs?
Bug: *An error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.*
One pixel for an extremely low-res image? Am I supposed to be impressed by that?
This is how we will fight the AI Revolution
One pixel attack! Sounds like good news :-)
The man who thought his wife was a hat.
I don't think a lot of people know what Frank is referencing, so I'll link it here. Super interesting stuff. It's about neurological disabilities and illnesses, leading up to one person who mistook his wife for a hat.
en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat
What about making a podcast version of Two Minute Papers for the episodes that don't have visual fireworks?
ty sir
YOU HAVE CREATED A PARADOX IN THIS VIDEO
?
there’s a puppy pic on the thumbnail - obviously it’s gonna go viral
I am still in the single pixel camera.
So you're basically saying that I can wear a cloak in the future robot war so they think I'm a friend? Cool
It's a frog. You can tell by the pixel.
This gives a pretty convincing explanation of why one pixel attacks are (perhaps) not too surprising: arxiv.org/abs/1901.10861
Wow, this is serious.. I would not say it is a bug, but it's definitely something that scientists want to fix to avoid serious vulnerabilities.
Holy shit, I've seen that at 2:44! There's a great website called Explorable Explanations that may be of interest to you
This is interesting stuff.
!?!? I feel feelings of joy???
Std training is very effective
Honestly, I can see why they were classified as ostriches. I saw the ostrich in the bus picture all the way in the left column at 1:09
I'm a patron. Join, guys!
Thank you so much for your support! 🙏
Why can't we encrypt these deep learning classifiers in such a way that the pixels cannot be distorted?
Can you elaborate? I don’t know what you mean.
Basically use blockchain encryption to avoid these attacks.
:) I'm a tech savvy guy
Because you're not attacking the structure of the neural net.
you're attacking the 'input' of the neural net, which by definition can be anything.
Encrypting the input you provide to the neural net would do... Nothing?
Well, at best it would result in the AI being incapable of recognising the image as anything at all.
Because encryption without a decryption phase is equivalent to feeding semi-random noise into a system.
Unless I'm missing something about your intentions here, encryption won't do anything because it bears no relation to the problem at hand.
It's like saying the best way to deal with dropping your coffee is to put the cup inside a safe before you drink it.
Doesn't make much sense.
Generally we need to decrypt the data before we feed it into a neural network, so why can't we develop a deep learning classifier which can work over encrypted data? For instance, there are research papers about deep learning classifiers which can be employed in steganography.
@@udendranmudaliyar4458 Are you talking about, like, fully homomorphic encryption? Supposing that was made fast enough to be practical, I don't see why it would help address adversarial examples. Adversarial examples, iirc, often generalize somewhat to multiple networks trained for roughly the same task (though they don't fool the other networks *quite* as well).
If one wanted to fool a network where one doesn't have access to the internals, one could do that.
and I don't see why having it take in encrypted input is of any use, except possibly for the purpose of privacy and stuff.
We can see the single pixel attack in the like-dislike bar. The dislike portion is a single pixel...