This is an absolutely great talk that singles out the key open questions about artificial intelligence and its application. Handing key decisions over to specific tools without understanding the underlying phenomena is deeply troubling. Thank you Dr. Hany Farid for this deep talk!!
Algorithms that try to predict my behaviour and preferences have always been horribly wrong. That's one of the reasons YouTube keeps recommending channels and videos I'd rather see erased from existence.
It's like when you go shopping, right? All of the items you want/buy are shelved way way down there by your cankles, or way up there so you have to get on your tippy toes to reach, right? If you were more predictable, the items you wanted would be straight across from you, at about chest high. Next time you go shopping think about how horribly wrong the highly-paid marketers have been about you, mappyhappychappy.
You mean like, when I go shopping, all of those highly paid marketers have decided not to order in the things that I want and replace them with things I don't? Also, reorganize the kinds of businesses that exist in markets because they want to change the branding of said markets, so that the stores I want to visit have disappeared and the only things left are the kinds of stores that I'd never frequent, thus forcing me to have to find what I want online? Yeah, it's a bit like that, TAR ICO.
Or they keep you in a loop: the same algorithms over and over, the same stuff you watched popping up again and again. It's almost getting harder to find anything new other than the news.
@@tarico4436 Um, no. The items you want are at ankle level. The items they want to sell you, because they are higher profit with a shinier box, are at eye height. That is not a mistake; it is designed into the system. They do very nicely out of people in a hurry just grabbing what is in front of them.
14:50 “Big data, data analytics, AI, ML, are not inherently more accurate, more fair, less biased than humans.” True, but we should not then conclude that they can't be better. The speaker showed a method that can be used to grade the predictive power of an algorithm. We should use one that has an acceptable grade. Also, since government functions are inherently the public's business, only open source software should be used.
Lol. You write as if governments are representative of their tax payers. They aren't.
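The "grade the predictive power" idea a couple of comments up can be sketched with a rank-based AUC; the scores and outcomes below are made-up illustration data, not anything from the talk:

```python
# Rank-based AUC: the probability that a randomly chosen reoffender
# received a higher risk score than a randomly chosen non-reoffender.
# 0.5 is a coin flip; 1.0 is a perfect ranking.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcomes to grade a predictor")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # predicted risk
labels = [1,   1,   0,   1,   0,   1,   0,   0]    # actually reoffended?
print(auc(scores, labels))  # → 0.8125
```

Whatever the exact grading method, the point stands: before a tool is used in court, someone should compute a number like this on held-out cases and decide whether it clears an agreed threshold.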
Awesome speech about criminal justice. We hope a speech like this raises awareness, protects us, and discourages all kinds of crime, so I highly appreciate Mr. Farid's good speech. Thanks a lot to him.
The data he showed literally told us that humans and the AI made roughly the same calculation regarding the risk factor: 65% and 66% respectively. It is easier to adjust the algorithm to give more accurate results than it is to convince all human beings to take the given "hidden" data into account. The AI is infinitely superior to humans, given the correct data.
Given the correct data, yes. Unfortunately the programmers are still human and bringing human biases into it.
@@nbucwa6621 I am not denying that, just pointing something out.
👏👏👏👏 Masterclass in speech format!
Predictive algorithms can't account for how human beings change, for good or bad. They also can't understand intention.
This makes a lot of sense: if the data is created by purposeful or inadvertent racism, and the algorithm is trained on this data, then the algorithm will perpetuate the trend. I wonder what other factors are most likely to help improve the decisions made by judges?
Think Share There are extralegal factors judges ask about during arraignment, like jobs, education, and other things like that.
Jackie MAtos I wonder which of those increases accuracy the most, thanks for responding :)
THIS NEEDS MORE VIEWS!!!
One of the best tedx talk, thanks👍
This is so on point. I am a fan of Dr. Hany Farid. Thank you for bringing awareness to this critical issue.
How well this video aged from 5 years ago is incredible.
Very informative. Thank you for raising the awareness about AI
All the best! 💖🙌
George Orwell was ahead of his time when he wrote 1984 back in 1948... frightening.
We don't use algorithms because they're better. We use them because we're lazy and it's easier to let a machine do your job for you. That's the sad reality.
Also sad is the unaccountability when they cause a casualty.
I would honestly recommend watching PhilosophyTube's video on AI over this, it's far more informative and goes deeper than just "the algorithm could be wrong".
Predictive until Proven Guilty..
Which video?
It depends on the quality of the software and the data that supports it. It doesn't seem too complex to me; it's basic statistics. I'm not surprised to find that the software reflects developers' predispositions at all. It would be downright strange for my work to reflect yours.
But your main conclusions need to be said. Point #2, about how people's assumptions impact the outputs of programs, can't be overstated. Other presenters have made the same case in other areas. Face recognition is a big one. Now we see this.
It is complex. Once you put that data into those algorithms, it's very hard to know what's actually happening, like trying to know what happens with the transistors in a computer. And it's not the model that is wrong but the data they are feeding it.
I think it was Albert Einstein who said:
"There are now three kinds of lies: lies, damned lies, and statistics."
(no peace💘)
Hi Britt. I think Mark Twain was first with this. But no doubt Einstein would have said it lol
@@brendarua01
Thank you, Miss Rua. No doubt I would have gone on misattributing the quote many times. Much appreciated. I know I got it from a class called Physics & the Mind, in which Einstein figured prominently. But yeah, it does actually sound more like Twain, now that I think of it. Go figure. I was thinking maybe it was Whitehead, lol....
That's exactly the problem. It all comes down to who decides the authority and rules we live by.
Hopefully, the creators of artificial intelligence will study it extremely well before allowing it to take over everything.
Not eye-opening, but I'm glad this is being talked about on a platform like TEDx.
What a blunt instrument for measuring alleged risk / predicted conduct
There is one major flaw in this message. If they don't get caught, how do you know whether they reoffend? Also, what if a person has committed multiple crimes but was only arrested once? Classified as low risk?
It depends on how previous criminal history is defined. They can only work with the data that they have, so I assume that if there is no previous history of arrest, then that figure is low or zero and the person would be deemed low risk to reoffend.
How come they don't deduct points when you rehabilitate and don't offend, the way car insurance gives you better rates the better a driver you are? I'll tell you why: they don't have respect for humanity anymore. They create these drug problems, then punish the people for using them.
One important factor you could include is not being given equal opportunity, or financial instability due to false insecurities.
I have been harassed for over two years in Modesto, CA. Targeted by the police. Falsely arrested on a felony for a violent crime, with no evidence of a crime. Losing my home and all things inside. Disabled vet. I believe I am on a list because I exercise my First Amendment right to free speech.
There is enough data to implement two simple solutions: 1) balanced sub-sampling and 2) walk-forward validation.
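A minimal sketch of those two fixes, on made-up records shaped (year, id, reoffended_flag); nothing here reflects the real dataset:

```python
import random

def balanced_subsample(records, seed=0):
    """Undersample the majority outcome so both classes are equally
    represented before training."""
    rng = random.Random(seed)
    pos = [r for r in records if r[-1] == 1]
    neg = [r for r in records if r[-1] == 0]
    small, big = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    return small + rng.sample(big, len(small))

def walk_forward_splits(records, n_splits=3):
    """Yield (train, test) pairs where the model is always evaluated on
    records that come strictly after the ones it was fit on."""
    records = sorted(records, key=lambda r: r[0])  # order by year
    fold = len(records) // (n_splits + 1)
    for k in range(1, n_splits + 1):
        yield records[:k * fold], records[k * fold:(k + 1) * fold]

# Made-up records: 40 cases across years 2010-2017, 14 of them positive.
data = [(2010 + i % 8, i, 1 if i % 3 == 0 else 0) for i in range(40)]
balanced = balanced_subsample(data)
print(sum(r[-1] for r in balanced), len(balanced))  # → 14 28
```

Walk-forward splits matter here because recidivism records are time-ordered; training on future cases and testing on past ones would leak information.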
Surprised that I'm the only one coming back to this just after Florida (of course) started employing this exact method in pre-policing and started harassing random citizens.
It is striking that this channel has 19 million subscribers, but has been viewed 41,435 times and has generated 112 responses in 8 months.
There's something wrong.
Why in the world would they use an algorithm with that large a margin of error?
Good question
Well, they were getting all their data online from a bunch of people with nothing better to do. When the guy said the company won't tell him their secret ingredient, I'll give you a hint: it's people on the internet doing dollar surveys. They don't want you to know that it's all based on popular opinion and not facts.
And this is why I do not like Psycho-pass.
It's been a long time since I saw it, but I thought Psycho-Pass was arguing a similar thing: don't let computers do all the thinking for us.
@@memojr4444 I guess as a psychology major, I struggle with suspension of disbelief when it comes to watching Psycho-Pass. There are way more things to consider before determining whether a person will commit a crime. Heck, even people whose brains are predisposed to delinquency can still manage to live a normal life, provided they are in a nurturing environment.
But in Psycho-Pass, I find it hard to believe that intelligent people would allow such a technology to be utilized. That's why I dropped the show.
thought the same
@@maattthhhh I developed anxiety after watching Psycho-Pass and had multiple panic attacks for an entire year! The premise was terrifying and depressing. I had a horrible experience!
This system is ridiculous. Thank you.
1) How did you adjust for one of the demographic variables, namely race, in the sample you used? That makes your choice itself biased.
2) How did you collect juvenile offender records/history, as they are not public records?
3) If and when you did not disclose the race of the offender to those responding to the facts you disclosed, how did you conclude that their responses showed a racial bias?
I like his accent and voice
Dr shaym
Identical
Great video. Loved all of it!
Yes, and don't forget: computers don't lie. Unless the technician or software engineer did. But when does that ever happen?
A lot of people commenting on this don't quite seem to understand his point.
For those of you saying "of course it predicts badly because it only has two classifiers": he was showing that the algorithms the courts are using perform no better than a two-classifier algorithm would. Which is NOT good and should NOT be deciding whether or not someone is put in jail.
For those of you saying "the algorithm is proving that blacks commit more crimes": this is also simply not the case. He is saying that the algorithm predicted that more blacks would reoffend when they actually didn't, and that more whites would not reoffend when they actually did. It has nothing to do with who is committing more crimes. It has to do with how the algorithm is classifying REAL people and making decisions about people's lives, and race ends up playing a huge role in this.
This research could say a lot about our criminal justice system, but it should really worry you that these algorithms are being deployed without anyone actually KNOWING why they get the results they get. The courts that are using this probably had no idea this was happening, because the human mindset is "well, if a computer believes it will happen, then I agree with the computer." The research done here should push the community to challenge these AIs and their abilities.
I do believe that technology is powerful and can solve many things that we as humans cannot. But I also believe that if these algorithms are being deployed in the real world, then they need concrete evidence of their capabilities and proof that they are helping us and not ultimately hurting us.
Awesome talk.
This talk is a real gem 💎
One from India also watched. Very informative.
Jai Hind🇮🇳🇮🇳🇮🇳🇮🇳🇮🇳🇮🇳
Akash soni I'm also from India ✌✌✌
And what's so informative about it that we don't already know?!
AI and machine learning will be the future of the world. The video gives very nice examples of how we can use ML in different fields. The speaker also noted that the algorithms need to change somewhere; they are still inaccurate on some measures. And all of this should be understood by anyone aiming to be a data scientist or ML lover.
Stop petty crime felonies. No victim, no crime!
So Great
Very Good ❤
“Therefore, COME OUT FROM THEIR MIDST AND BE SEPARATE,” says the Lord. “AND DO NOT TOUCH WHAT IS UNCLEAN; And I will welcome you. “And I will be a father to you, And you shall be sons and daughters to Me,” Says the Lord Almighty.
Simply put, AI algorithms need to be updated.
Came here because I'm very much interested in criminal justice.
great vid
Just talked about this today
Have you ever thought that 2+2=4 is the end of the world?
That's an oversimplification of what ML can do. If you feed in just two classifiers, of course it will generate results that any Average Joe could guess.
They didn't feed in just two classifiers, they fed it many classifiers, but found that the algorithm only needed those two classifiers to get that rate of accuracy (which happens to be the same rate of accuracy of any average Joe on the internet), and was using those two most heavily in order to make its predictions.
The problem that he is outlining here is that the algorithm is being fed biased data, because it is data that comes from a biased society. That means the predictions will, of course, also be biased. But part of the problem is that it is difficult to really start solving these problems, because many people still believe that data can't be biased, that numbers can't be biased, and so obviously algorithms can't be biased. Those incredibly misguided views need to be addressed, because before you can fix a problem, you need to understand that there IS a problem.
I agree 👍
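To make the two-classifier point above concrete, here is a hedged sketch: a from-scratch logistic regression using only age and prior count. The data, scaling, and coefficients are all synthetic illustration, not the study's model or data:

```python
import math
import random

def train_logistic(rows, labels, lr=0.05, epochs=1000):
    """Plain stochastic-gradient logistic regression on two features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(rows, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Synthetic cases: younger defendants with more priors reoffend more
# often, plus plenty of noise so the problem isn't trivially separable.
rng = random.Random(1)
rows, labels = [], []
for _ in range(200):
    age = rng.uniform(18, 70)
    priors = rng.randint(0, 10)
    p_reoffend = 0.5 + 0.04 * priors - 0.02 * (age - 40)
    labels.append(1 if rng.random() < p_reoffend else 0)
    rows.append(((age - 40) / 20, priors / 5))  # crude feature scaling

w, b = train_logistic(rows, labels)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in rows]
acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(round(acc, 2))
```

On noisy data like this, two crude features plateau well short of certainty, which is the talk's finding: the commercial tool's roughly 65% accuracy is about what age and prior count alone deliver.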
Great talk! Quick note: the automatically generated subtitles have, among others, this huge inaccuracy - "GDP our" where it should read "GDPR".
You must have used SVMs or ensembles!! Not the classic linear models...
So with that being said, a jury of one's "peers" is just as biased.
This is about imprisonment, not rehabilitation. If they cover every angle with the law, that means everybody is always under government scrutiny, especially the minority groups. New boss, same as the old boss.
Like like like 🌷
I'm against using AI for crime, but I'm curious: if you remove the pot bust, what will you get? I'm always going to say the guy with 15 arrests will commit a crime before the guy who's never been arrested. Maybe he just never got caught, but I'd still have to go with it.
10:30. If you're 70 years old and have only 1 arrest, I'd say you are less likely to reoffend than the 25-year-old with 9 arrests.
5:20. You know no such thing; you know if they GOT CAUGHT reoffending.
1:45. You said it was to make bail decisions, not to predict whether you'd be arrested again.
I'm sorry, but this doesn't make sense to me! If, with neuroweaponry, the System can do far-out sci-fi things like implant false memories, a precrime algorithm should be soooo easy to set right! 🤷♀️
Unless a profitable and awfully convenient model is only being ...amplified 🙄 Corrupt systems now go high tech, huh?
Why wouldn't they?? Gotta justify a broken System. Scaled up to the world!
Random people on the internet may not know the race, but the judge will.
I thought it was cool. It would be even more so if someone were able to superimpose the detention teacher from The Breakfast Club as the speaker.
Very beautiful ...
🤔think
How much did they get paid to do the survey?
Artificial intelligence is a trip!!! One day soon, AI will be running everything. That should be very interesting...
Why would anyone let an AI do that?
arsenic sore, Perhaps AL will be used Much more by people. I think it could be helpful, provided that they are monitored closely.
Cindy Moore AL?
great sir
First Dawn, I meant AI.😊
I doubt these stats TOTALLY
I bet insurance companies use algorithms like this to some extent.
Daniel Kahnemans Co-authored book, Noise, explains it very well, and it varies very widely.
No, but they can get your driver's report, which also involves the police, which could also involve a cooperative release of criminal intent, making your insurance agent deny you or hike your rate up so far it's highway robbery.
How they know the color of their skin is by their names.
I hate how people with small brains have to quantify everything so that they cannot make a wrong decision. Come on, you're a human; that's why you're better than that. Don't be lazy, and think.
You can't tell me what to do, buddy!
15x more for weed will definitely skew it. What if you removed the marijuana?
Where my LDers at lol
ayeeeee
Hi, I am Turkish 🇹🇷
allllahu akbaaar *boom* ...sad...sad religion
@FOOD CHANNEL What's with the woman in the blue t-shirt 😂😂
@@SPS2684 what
Man, these guys thought we were a Middle Eastern country and started chanting takbir.
@@SPS2684 People like you are the reason we hate the West. You have no idea what's going on around you, but you believe whatever the media says.
Is the simplification presented here legit?
Dedsec likes this
Today I'm so early wow
LIFE TIME AWARD GOES TO YOU... LOL
Like for no reason 👍
I am here for class.
Corrupt data in, corrupt technology out.
Watch Dogs 2 already?
Is the race put into the data?
I hope the police can figure out a formula to find out who will commit crimes before it happens.
That can be potentially abused.
You should probably read 1984, Fahrenheit 451, Brave New World and Animal Farm. We don't want thought crime to become a thing, we really, *really* don't. It's already bad enough that intent to commit a crime is in itself a crime, the law need not go further than it already has.
Well, there are actually people trying to do this, so...
We try, but things change, and longitudinal studies are crazy expensive.
A documentary I watched showed a government agency that is trying to use parents' behavior patterns to predict children's behavior and then force an abortion. Be careful what you wish for.
Hi
Watch_Dogs?
LOL! You HIDE information (race) and get an algorithm with worse predictions. What's the problem?! Just unhide it :)
ok fed
1st to comment
Translate it into Turkish.
Exactly.
These are so boring