What a great keynote, and it is still relevant, even more than back in 2018. James Mickens' speech was hilarious. Thanks for sharing.
"Manifest Destiny oftentimes ends in dysentery."
That is solid Quote Gold.
The "s" in IoT stands for "security"
the only man funnier than james mickens is also james mickens
True but James Mickens makes them both look dull.
Every James Mickens talk I have seen or essay I have read has been worth it.
Common sense is the rarest sense of all, which makes this speaker a very rare individual indeed. As a technologist I have been sceptical about a lot of things that have happened in this space, but Mr Mickens brings it into sharp focus with a highly entertaining talk.
This is good for the first 28 minutes, then it takes off and is GREAT.
“TLS is the only good thing we have.” Man, I can't believe I haven't read all his stuff already.
I'm trying to describe how great that talk was, but the interplay of technical detail and quality presentation material makes it hard to retell. Great presentation, great work, and an even more powerful message for a computer scientist/engineer.
"The stuff is what the stuff is, brother."
What a wonderfully poignant, provocative, engaging... just fucking fantastic talk (and slide design!). I do research in machine learning and this is exactly the kind of scepticism much of the community lacks.
I will watch this many times and share it everywhere.
14:03 "Just explore that studio space, okay?" Underrated.
This should be required watching for all AI hype-thusiasts.
James Mickens gives me hope for this world.
Can you please enable subtitle crowdsourcing for this video and your channel?
Really good, even for passers-by who know nothing about security. That said, you don't need so many gags; they started to get in the way of the very interesting content.
Why did I just find this now... This is pretty much a must-see for anyone in IT, especially as we see Bing's AI chat and ChatGPT being connected to the Internet of hate and being allowed Python code execution.
To be fair, GPT-3 is a static model when running in production, so at least they're not making the same mistake as Tay and allowing bigoted inputs to actually corrupt the neural network in real time lmfao
@@technoturnovers7072 it's been known to leak private information though, hasn't it? With billions of parameters building up these models, at best you're making good guesses about the answers you might get to any given question, right? It's amazing technology and it has to be used very cautiously.
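Since the exchange above hinges on the static-versus-online distinction, here is a minimal toy sketch of what it means in code. Nothing below is any real chatbot's implementation; the tiny logistic model and the "hostile input" are invented purely to show why frozen weights at inference time are safer than Tay-style online updates:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4)  # stand-in for weights learned offline

def predict(x):
    # Frozen-model inference (the GPT-3-in-production case): the weights
    # are only read, never written, so user inputs cannot permanently
    # corrupt the model.
    return 1.0 / (1.0 + np.exp(-weights @ x))

def online_update(x, label, lr=0.5):
    # Tay-style online learning: each user interaction applies a gradient
    # step, so a coordinated flood of poisoned examples steers the
    # production model in real time.
    global weights
    weights -= lr * (predict(x) - label) * x

x_hostile = np.array([1.0, -2.0, 3.0, 0.5])
print("before poisoning:", predict(x_hostile))
for _ in range(100):                 # trolls repeatedly assert label=1
    online_update(x_hostile, label=1.0)
print("after poisoning: ", predict(x_hostile))  # now near 1.0: the trolls won
```

Note this only illustrates the corruption point from the first reply; the private-information leak in the second reply is a different failure mode (memorization of training data) and isn't captured by a sketch this small.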
*James Mickens 2020*. I've never laughed and learned this much from any talk in my life. 😂
Great talk, so many people need to see that!
The problem with machine learning is probably that it is so easy to do; as you said, anyone can create an AI-based startup.
It's really embarrassing that I work in IoT security and it's even more embarrassing that what James said is true.
IoT security ... the ultimate target-enriched environment.
This is a guy I have to follow. I'm aware of how deafening my echo chamber is but, until now, I didn't have a way to step out of it. Bravo!
AI-generated timestamps:
00:00 - 00:20 Introductory Remarks and Definition
00:20 - 01:16 Keynote Theme: Computer Science in Trouble
01:16 - 02:14 A True Story: Encounter with a Magician
02:14 - 03:40 The Value of Skepticism in Technology
03:40 - 05:00 Machine Learning: The Buzzword Phenomenon
05:00 - 06:59 Challenges of AI from a Security Perspective
06:59 - 09:01 Understanding Machine Learning: Gradient Descent Explained
09:01 - 10:50 The Problem with Hyperparameters in ML
10:50 - 12:50 Inscrutability of Machine Learning Models
12:50 - 14:30 The Dangers of AI in Critical Systems
14:30 - 16:20 The Case of Tay: AI Gone Wrong
16:20 - 18:00 The Ethical Implications of AI Decisions
18:00 - 19:50 The Need for a Holistic View of Security
19:50 - 21:30 Rethinking Security in the Age of AI
21:30 - 23:15 Conclusion: The Call for Skepticism
This talk is the new standard to which I'll hold all future keynotes.
RIP Tay, you was the best.
44:41 "Your paper will get rejected if it sounds like it was written by someone who struggles with depression."
This guy brings balance to the force
"So, like, if you don't use firewalls and stuff like that, you're potatoes are gonna get compromised; don't be shocked!"
Campaign slogan for Mickens 2020. I'm thinking of volunteering
3:10 - hey, it could have been two full ping pong balls - en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
This is great but I would love to see usable solutions rather than criticisms. Still though, should be required watching for all infosec/dev/syseng people.
did you watch 42:28, 45:44, or 47:30? solutions already provided.
I love this presentation!
Excellent talk lol, I didn't fall asleep
I only wish these kinds of people had the influence necessary to actually change things, because the people in charge are implementing this technology that he hates on purpose.
You didn't link the papers and YouTube suggestions from the speech on the YouTube video?
He should be the mandatory keynote speaker at all cons
I'm skeptical of this. PS: you didn't have to bring Steve Holt into this, dude.
video should be called: machine learning unveiled and demystified
Wait till this man hears about politicians
Jokes aside, he's awesome.
Thanks, very nice presentation.
My response here is that there are already completely inscrutable parts of software that are deeply trusted; machine learning adds nothing new here.
"The Internet is a cauldron of evil" 6:38
It's "Par-tick thistle football club" not "Pat-rick" :)
James Mickens for President
This is John Oliver with a Harvard professorship. Respect!
46:52 I want that book.
Dang, that is one good orator. Funny af. Not sure what the message was, but I think that's okay.
He's just as cogent and entertaining in the Q&A when he isn't relying on so many gags. Unfortunately, his point does seem to get lost in his delivery, although it was incredibly entertaining.
"Don't"
THINK before applying technology to everything and anything. The application of technology by itself isn't enough, it needs to be the right technology with the appropriate minimal safeguards built in by design.
Fantastic!
I loved it. Thank you!
Very good speaker
Soooo… I can't use machine learning?
Excellent talk!
most hilarious keynote ever!
I guessed the one word summary!
engaging talk.
James Mickens is amazing and hilarious. I don't agree with everything he said here (some loaded assumptions imho), but damn this was a good talk.
Most issues raised about ML are in fact also true about humans.
Humans ARE inscrutable. You may ask them to explain their actions. And they will give you one. Likely a convincing one. Yet psychology has shown again and again that for most decisions, this explanation is totally made up after the fact.
I'm not saying we should trust ML as much as humans in every situation. But the reason not to do so is more complicated than "it's inscrutable".
You can hold humans to account for their actions (hypothetically, corporations and governments can be held accountable too, but in practice it is much more difficult).
Putting an inscrutable and unaccountable system in the position of making life-impacting decisions is likely to end up causing problems for somebody. Thus the call to carefully think through deploying these systems is important to heed.
@@z_t_k Interestingly, this raises the question of what accountability is. Philosophically, I mean. (I don't care about the obligation to gather proof that you did your job correctly; AIs can do that too.)
We, humans, like to have someone to blame when something goes wrong. But it's often more complicated than that, isn't it? Nobody does something bad on purpose: either they believe it's the good thing to do, or they made a mistake. Sometimes both: they make a mistake and rationalize it afterwards by convincing themselves it was the right thing to do.
We usually say that someone is responsible for an incident when they had the capacity to foresee the incident and had the capacity to make a decision that would have avoided it.
In that regard, AIs are *much more* scrutable than humans. We can take a model, replay a situation tons and tons of times in slightly different scenarios, and probe every step of the computation (see the sketch after this comment). We can't do that with humans. (Even if we could, it'd be highly unethical.)
What's missing with AIs (and with corporations to some extent) is the incentive not to take an action that can result in a bad outcome.
The way to do it is to hurt its goal and have it take that negative reward into account in its decisions.
The only missing piece of technology is that AIs are very bad at taking into account very high but very infrequent negative rewards.
I mean, humans are bad... But AIs are worse... For now.
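The "replay and probe" idea a few lines up is easy to make concrete. Below is a minimal sketch, assuming a toy linear decision model; the weights, the input, the threshold, and the perturbation scale are all invented for illustration, and a real audit would use the production model with domain-meaningful perturbations:

```python
import numpy as np

rng = np.random.default_rng(42)
weights = np.array([0.8, -1.2, 0.3])   # stand-in for a trained model

def decide(features):
    # The "AI decision" under audit: approve iff the score clears a threshold.
    return weights @ features > 0.0

base_case = np.array([1.0, 0.5, -0.2])  # the incident we want to replay
decisions = []
for _ in range(10_000):                 # replay under small perturbations
    perturbed = base_case + rng.normal(scale=0.05, size=3)
    decisions.append(decide(perturbed))

flip_rate = np.mean(np.array(decisions) != decide(base_case))
print(f"decision flipped in {flip_rate:.1%} of nearby scenarios")
# A high flip rate means the decision was fragile: tiny, plausibly
# irrelevant changes to the input would have changed the outcome.
```

This is exactly the kind of probing you cannot run on a human decision-maker, which is the point being made above.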
gigabit ethernet signals are not interpretable either lol
I don't agree with him much, but damn he's funny.
classic, i am impressed.
STEVE HOLT!
The example with sugar was lame; not very good data sets. Funny to watch, but I wonder if it would be better without it.
Hahahaha hilarious. This made my day :)
Q-Bert!
he sounds like the Boundary Break guy
This was so good, what the fuck
the internet must be destroyed
I think *Weapons of Math Destruction* said it better.
For any given thing, there will always be something else that's better. Take this talk for what it is; a < 1 hour keynote, not a deep dive into a specific topic.
still waiting for Fuck This IOT Shit This Shit Is Shit to drop
ahem - Partick Thistle not Patrick, unfortunately.
Laugh riot. And informative.
Hilarious but incredibly disappointing in the end... No clear message other than "stop acting like spergs", no elaboration or examples of why connecting ML systems to the net is bad, and his lesson is that everyone should change their behaviour by pure force of will. No discussion of profit motives or of how to change systems/law. Unbelievably talented guy, which is why the letdown was so huge; he could provide real leadership.
12:40 The second slide was in poor taste, because higher education is overrated and overvalued. The skill gap is in large part due to academia's detachment from the real world, a.k.a. business.
Most of the rest was okay, but his delivery was indeed too artistic. Still, great effort and a good underlying idea.
Honestly, I think academia fails more when it tries to attach itself to business; the lifecycles and goals of the two institutions are very different. "Spinout" startups are a nice idea, though.
I think building a startup might be more overrated and overvalued than higher education; most startups are unimaginative failures, from what I've seen first-hand anyway. Trying to make technology fit the goal of chasing rounds of investment is backwards thinking.
Peter, why are you afraid to use your own name on YouTube?
CS education will pay for itself quickly, unless you do it wrong.
...shit.
Meh. Only idiots think technology is neutral - technology is a hammer. Hammer a nail...hammer a finger.
OK, so it's not value-neutral, so values have to be imposed: but whose values? No consideration. This just adds up to the same old argument for censorship, central control, and forcing everyone to go along with a system that you decided was 'just' and that they are not *at all* convinced by, which dominates the rest of what SV is doing. You might want to do some self-reflection and consider whether using computers to impose *your* values from above is really moral or not.
Your belief that people's values should not be imposed on others is itself a value that you wish to impose upon the world. I hold similar values. Currently computers and their software are being used with no thought-out values whatsoever, or with values of pure profit for their creators. If we want computers to be used to resist censorship and oppression, then we need to make that happen.
Excellent talk!