The danger of predictive algorithms in criminal justice | Hany Farid | TEDxAmoskeagMillyard

  • Published: 30 Nov 2024

Comments • 166

  • @ninaphilippe
    @ninaphilippe 6 years ago +19

    This is an absolutely great talk that singles out the key open points about artificial intelligence and its application. Our handing over some key decisions to specific tools without understanding the underlying phenomenon is more than critical. Thank you Dr. Hany Farid for this deep talk!!

  • @BlueSkyBS
    @BlueSkyBS 6 years ago +34

    Algorithms that try to predict my behaviour and preferences in things have always been horribly wrong. One of the reasons YouTube keeps recommending me channels and videos I'd rather see erased from existence.

    • @tarico4436
      @tarico4436 6 years ago

      It's like when you go shopping, right? All of the items you want/buy are shelved way way down there by your cankles, or way up there so you have to get on your tippy toes to reach, right? If you were more predictable, the items you wanted would be straight across from you, at about chest high. Next time you go shopping think about how horribly wrong the highly-paid marketers have been about you, mappyhappychappy.

    • @BlueSkyBS
      @BlueSkyBS 6 years ago

      You mean like, when I go shopping, all of those highly paid marketers have decided not to order in the things that I want and replace them with things I don't? Also, reorganize the kinds of businesses that exist in markets because they want to change the branding of said markets, so that the stores I want to visit have disappeared and the only things left are the kinds of stores that I'd never frequent, thus forcing me to have to find what I want online? Yeah, it's a bit like that, TAR ICO.

    • @optimusprimer4392
      @optimusprimer4392 2 years ago +2

      Or they keep you in a loop: the same algorithms over and over, the same stuff you watched keeps popping up. It's almost getting harder to find anything new other than the news.

    • @djb3500
      @djb3500 1 year ago +1

      @@tarico4436 Um, no. The items you want are at ankle level. The items they want to sell you, because they are higher profit with a shinier box, are at eye height. That is not a mistake, it is designed into the system. They do very nicely out of people in a hurry just grabbing what is in front of them.

  • @mikeg9b
    @mikeg9b 6 years ago +26

    14:50 “Big data, data analytics, AI, ML, are not inherently more accurate, more fair, less biased than humans." True, but we should not then conclude that they can't be better. The speaker showed a method that can be used to grade the predictive power of an algorithm. We should use one that has an acceptable grade. Also, since government functions are inherently the public's business, only open source software should be used.
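The comment above suggests "grading" an algorithm's predictive power and only deploying tools with an acceptable grade. A minimal sketch of how such a grade could be computed follows; the scores and outcomes are invented for illustration and are not from the talk or any real tool.

```python
# Hypothetical sketch of "grading" a risk tool's predictive power.
# Scores and outcomes below are made up, not from the talk.

def accuracy(scores, outcomes, threshold=0.5):
    """Fraction of cases where thresholding the score matches the outcome."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, outcomes)) / len(outcomes)

def auc(scores, outcomes):
    """Chance a random reoffender is scored above a random non-reoffender."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk_scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # hypothetical tool output
reoffended  = [1,   1,   0,   1,   0,   1,   0,   0]    # hypothetical ground truth

print("accuracy =", accuracy(risk_scores, reoffended))
print("AUC      =", auc(risk_scores, reoffended))
```

Any open-source tool could publish exactly this kind of audit alongside its predictions, which is the commenter's point about government software.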

    • @sebastianphizone4808
      @sebastianphizone4808 2 years ago +3

      Lol. You write as if governments are representative of their tax payers. They aren't.

  • @hasansorkar9223
    @hasansorkar9223 6 years ago +2

    Awesome speech about criminal justice. We hope a speech like this raises awareness, helps protect us, and discourages committing any kind of crime, so I highly appreciate Mr. Farid's good speech. Thanks a lot to him.

  • @hamade7997
    @hamade7997 6 years ago +22

    The data he showed literally told us that humans and the AI made roughly the same calculation regarding the risk factor, respectively 65 and 66%. It is easier to adjust the algorithm to give more accurate results than it is to convince all human beings to take the given "hidden" data into account. The AI is infinitely superior to humans, given the correct data.

    • @nbucwa6621
      @nbucwa6621 6 years ago +6

      Given the correct data, yes. Unfortunately the programmers are still human and bringing human biases into it.

    • @hamade7997
      @hamade7997 6 years ago +1

      @@nbucwa6621 I am not denying that, just pointing something out.

  • @3eyup
    @3eyup 1 year ago +1

    👏👏👏👏 Masterclass in speech format!

  • @HongFeiBai
    @HongFeiBai 2 years ago +2

    Predictive algorithms can't account for how human beings change, for good or bad. It also can't understand intention.

  • @zachmentalloadcoach
    @zachmentalloadcoach 6 years ago +5

    This makes a lot of sense: if the data is created by purposeful or inadvertent racism, and the algorithm uses this data to form its model, then the algorithm will perpetuate the trend. I wonder what other factors are most likely to help improve the decisions made by judges?

    • @jackiesifuentes1359
      @jackiesifuentes1359 6 years ago

      Think Share there are extralegal factors judges ask about during arraignment, like jobs or education or other stuff like that

    • @zachmentalloadcoach
      @zachmentalloadcoach 6 years ago

      Jackie MAtos I wonder which of those increases accuracy the most, thanks for responding :)

  • @GauravSingh-ku5xy
    @GauravSingh-ku5xy 6 years ago +3

    THIS NEEDS MORE VIEWS!!!

  • @thejabalpurdialogues
    @thejabalpurdialogues 6 years ago +3

    One of the best TEDx talks, thanks 👍

  • @Silious950
    @Silious950 6 years ago +6

    This is so on point. I am a fan of Dr. Hany Farid. Thank you for bringing awareness to this critical issue.

  • @Bosshawg764
    @Bosshawg764 5 months ago

    How well this video aged from 5 years ago is incredible.

  • @dataai514
    @dataai514 6 years ago +7

    Very informative. Thank you for raising the awareness about AI

  • @realser9
    @realser9 6 years ago +3

    All the best! 💖🙌

  • @infomineco7111
    @infomineco7111 1 year ago +1

    George Orwell was ahead of his time when he wrote 1984 back in 1948... frightening.

  • @ModernGentleman
    @ModernGentleman 6 years ago +6

    We don't use algorithms because they're better. We use them because we're lazy and it's easier to let a machine do your job for you. That's the sad reality.

    • @kristinmeyer489
      @kristinmeyer489 1 year ago

      Also sad is the unaccountability when they create a casualty.

  • @Hakasedess
    @Hakasedess 6 years ago +10

    I would honestly recommend watching PhilosophyTube's video on AI over this, it's far more informative and goes deeper than just "the algorithm could be wrong".

  • @brendarua01
    @brendarua01 6 years ago +9

    It depends on the quality of the software and the data that supports it. It doesn't seem too complex to me; it's basic statistics. I'm not surprised to find that the software reflects the developers' predispositions at all. It would be downright strange for my work to reflect yours.
    But your main conclusions need to be said. Point #2, regarding how people's assumptions impact the outputs of programs, can't be overstated. Other presenters have made the same case in other areas. Face recognition is a big one. Now we see this.

    • @abcdxx1059
      @abcdxx1059 6 years ago

      It is complex. Once you put that data into those algorithms, it's very hard to know what's actually happening; it's like trying to know what happens with the transistors in a computer. And it's not the model that is wrong but the data they are feeding it.

    • @fernanddierkens2998
      @fernanddierkens2998 6 years ago

      0

    • @britthill-noprodonwant6789
      @britthill-noprodonwant6789 6 years ago

      I think it was albert einstein who said:
      "There are now three kinds of lies: lies, damned lies, and statistics."
      (no peace💘)

    • @brendarua01
      @brendarua01 6 years ago +1

      Hi Britt. I think Mark Twain was first with this. But no doubt Einstein would have said it lol

    • @britthill-noprodonwant6789
      @britthill-noprodonwant6789 6 years ago

      @@brendarua01
      Thank you, Miss Rua. No doubt I would have gone on attributing the quote incorrectly many times. Much appreciated. I know I got it from a class called Physics & the Mind, in which Einstein figured prominently. But yeah, it does actually sound more like Twain, now that I think of it. Go figure. I was thinking maybe it was Whitehead, lol....

  • @sebastianphizone4808
    @sebastianphizone4808 2 years ago

    That's exactly the problem. It all comes down to who decides the authority and rules we live by.

  • @CMoore8539
    @CMoore8539 6 years ago +2

    Hopefully, The Creators Of Artificial Intelligence will study it, extremely well, before allowing it to take over Everything.

  • @WealthbuilderzTV
    @WealthbuilderzTV 6 years ago +9

    Not eye-opening, but I'm glad this is being talked about on a platform like TEDx

  • @onelife7247
    @onelife7247 1 year ago

    What a blunt instrument for measuring alleged risk / predicted conduct

  • @srvbluesmsn
    @srvbluesmsn 5 years ago +2

    There is one major flaw in this message: if they don't get caught, how do you know whether they reoffend? Also, what if a person has committed multiple crimes but was only arrested once? Classified as low risk?

    • @nadarjag
      @nadarjag 4 years ago +1

      It depends on how previous criminal history is defined. They can only work with the data they have, so I assume that if there is no previous history of arrest, that figure is low or zero and they would be deemed low risk to reoffend

    • @optimusprimer4392
      @optimusprimer4392 2 years ago

      How come they don't deduct points when you rehabilitate and don't offend, like car insurance gives you better rates the better you are as a driver? I'll tell you why: they don't have respect for humanity anymore. They create these drug problems, then punish the people for using them.

  • @AnumV-jd9vy
    @AnumV-jd9vy 1 year ago

    One important factor you can include is not being given equal opportunity, or financial instability due to false insecurities

  • @WorldwideWorley
    @WorldwideWorley 1 year ago

    I have been harassed for over two years in Modesto, CA, being targeted by the police: arrested falsely on a felony for a violent crime, with no evidence of a crime. I lost my home and all things inside. Disabled vet. I believe I am on a list because I exercised my First Amendment right to free speech.

  • @andreamarkos
    @andreamarkos 2 years ago

    there is enough data to implement two simple solutions: 1) balanced sub-sampling and 2) walk-forward validation
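For readers unfamiliar with the two remedies the comment names, here is a minimal sketch of each on a synthetic, time-ordered dataset. The function names, data, and split sizes are all invented for illustration.

```python
import random

# 1) Balanced sub-sampling: downsample so every label appears equally often.
# 2) Walk-forward validation: always train on the past, test on the future.

def balanced_subsample(rows, label_key, seed=0):
    """Randomly downsample the majority class(es) to the minority-class size."""
    rng = random.Random(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_key], []).append(row)
    n = min(len(group) for group in by_label.values())
    sample = [row for group in by_label.values() for row in rng.sample(group, n)]
    rng.shuffle(sample)
    return sample

def walk_forward_splits(rows, n_folds=3):
    """Yield (train, test) pairs where all training data precedes the test data."""
    chunk = len(rows) // (n_folds + 1)
    for i in range(1, n_folds + 1):
        yield rows[: i * chunk], rows[i * chunk : (i + 1) * chunk]

# 40 time-ordered records; roughly 1 in 3 is a positive label (imbalanced).
data = [{"t": i, "reoffended": (i % 3 == 0)} for i in range(40)]

balanced = balanced_subsample(data, "reoffended")
for train, test in walk_forward_splits(data):
    # Every training record precedes every test record in time.
    assert max(r["t"] for r in train) < min(r["t"] for r in test)
```

Walk-forward splitting matters here because recidivism data is temporal: validating on the past while training on the future would leak information.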

  • @rudeboyjohn3483
    @rudeboyjohn3483 3 years ago

    Surprised that I'm the only one coming back to this just after Florida (of course) started employing this exact method in pre-policing and started harassing random citizens

  • @a.v.k.2852
    @a.v.k.2852 5 years ago +1

    It is striking that this channel has 19 million subscribers, but this video has been viewed 41,435 times and has generated 112 responses in 8 months.
    There's something wrong.

  • @bypnozero3847
    @bypnozero3847 4 years ago +3

    Why in the world would they use an algorithm with that large a margin of error?

    • @ThePathOfLeastResistanc
      @ThePathOfLeastResistanc 3 years ago

      Good question

    • @optimusprimer4392
      @optimusprimer4392 2 years ago

      Well, if only they weren't getting all their data online from a bunch of people with nothing better to do. When the guy said the company won't tell him their secret ingredient, I'll give you a hint: it's people on the internet taking dollar surveys. They don't want you to know that it's all based on popular opinion and not facts.

  • @maattthhhh
    @maattthhhh 6 years ago +8

    And this is why I do not like Psycho-pass.

    • @memojr4444
      @memojr4444 6 years ago +1

      It's been a long time since I saw it, but I thought Psycho-Pass was arguing a similar thing, which is to not let computers do all the thinking for us

    • @maattthhhh
      @maattthhhh 6 years ago +2

      @@memojr4444 I guess as a psychology major, I struggle with suspension of disbelief when it comes to watching Psycho-Pass. There are way more things to consider before determining whether a person will commit a crime. Heck, even people whose brains are predisposed to delinquency can still manage to live a normal life provided they are in a nurturing environment.
      But in Psycho-Pass, I find it hard to believe that intelligent people would allow such a technology to be utilized. That's why I dropped the show.

    • @GauravSingh-ku5xy
      @GauravSingh-ku5xy 6 years ago

      thought the same

    • @Voidapparate
      @Voidapparate 3 years ago

      @@maattthhhh I developed anxiety after watching Psycho-Pass and had multiple panic attacks for an entire year! The premise was terrifying and depressing; I had a horrible experience!

  • @EuDouArteHipHopArtCulture21
    @EuDouArteHipHopArtCulture21 6 years ago

    this system is ridiculous. Thank you

  • @rgflint
    @rgflint 2 months ago

    1) How did you adjust for one of the demographic variables, namely race, in the sample you used? That makes your choice itself biased.
    2) How did you collect juvenile offender records/history, as they are not public records?
    3) If and when you did not disclose the race of the offender to those responding to the facts you disclosed, how did you conclude that their responses showed a racial bias?

  • @afinoxi
    @afinoxi 6 years ago +15

    I like his accent and voice

  • @danny_thabandit3322
    @danny_thabandit3322 6 years ago

    Great video. Loved all of it !

  • @HuskyRuski
    @HuskyRuski 3 years ago

    Yes, and don't forget: computers don't lie. Unless the technician or software engineer did. But when does that ever happen?

  • @gracemalcom7358
    @gracemalcom7358 5 years ago +2

    A lot of people commenting on this don't quite seem to understand his point.
    For those of you saying "of course it predicts badly because it only has two classifiers": he was showing that the algorithms the courts are using perform comparably to what a two-classifier algorithm would give you. Which is NOT good and should NOT be deciding whether or not someone is put in jail.
    For those of you saying "the algorithm is proving that blacks commit more crimes", this is also simply not the case. He is saying that the algorithm predicted that more blacks would reoffend when they actually didn't, and that more whites would not reoffend when they actually did. It has nothing to do with who is committing more crimes. It has to do with how the algorithm is classifying REAL-life people and making decisions about people's lives, and race ends up playing a huge role in this.
    This research could say a lot about our criminal justice system, but it should really worry you that these algorithms are being deployed without actually KNOWING why they are getting the results they are getting. The courts that are using this probably had no idea that this was actually happening, because the human mindset is "well, if a computer believes it will happen, then I agree with the computer." The research done here should open up the community to challenge these AIs and their abilities.
    I do believe that technology is powerful and can solve many, many things that we as humans cannot. But I also believe that if these algorithms are being deployed in the real world, then they need concrete evidence of their capabilities and proof that they are helping us and not ultimately hurting us.
    Awesome talk.
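The error asymmetry described in the comment above can be made concrete with a toy calculation: two groups can show identical overall accuracy while the false-positive and false-negative rates are mirrored. All records below are fabricated for illustration.

```python
# Toy illustration: same overall accuracy, very different error rates by group.
# Every record here is made up; none of this is the talk's actual data.

def rates(records):
    """Return (false-positive rate, false-negative rate) for a set of records."""
    fp = sum(r["pred"] and not r["reoffended"] for r in records)
    fn = sum(not r["pred"] and r["reoffended"] for r in records)
    negatives = sum(not r["reoffended"] for r in records)
    positives = sum(r["reoffended"] for r in records)
    return fp / negatives, fn / positives

def make(group, n_fp, n_tn, n_fn, n_tp):
    """Build fabricated records with the given confusion-matrix counts."""
    recs  = [{"group": group, "pred": True,  "reoffended": False}] * n_fp
    recs += [{"group": group, "pred": False, "reoffended": False}] * n_tn
    recs += [{"group": group, "pred": False, "reoffended": True}]  * n_fn
    recs += [{"group": group, "pred": True,  "reoffended": True}]  * n_tp
    return recs

group_a = make("A", n_fp=3, n_tn=2, n_fn=1, n_tp=4)   # flagged too often
group_b = make("B", n_fp=1, n_tn=4, n_fn=3, n_tp=2)   # cleared too often

fpr_a, fnr_a = rates(group_a)
fpr_b, fnr_b = rates(group_b)
acc_a = sum(r["pred"] == r["reoffended"] for r in group_a) / len(group_a)
acc_b = sum(r["pred"] == r["reoffended"] for r in group_b) / len(group_b)
print(f"A: FPR={fpr_a:.1f} FNR={fnr_a:.1f} acc={acc_a:.1f}")
print(f"B: FPR={fpr_b:.1f} FNR={fnr_b:.1f} acc={acc_b:.1f}")
```

Both groups score 0.6 accuracy here, yet group A is wrongly flagged three times as often as group B, which is the shape of the disparity the comment describes.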

  • @midhileshbaburajan8510
    @midhileshbaburajan8510 1 year ago

    This talk is a real gem 💎

  • @akashsoni4418
    @akashsoni4418 6 years ago +7

    One from India also watched. Very informative.

    • @kushwantsingh3330
      @kushwantsingh3330 6 years ago +2

      Jai Hind🇮🇳🇮🇳🇮🇳🇮🇳🇮🇳🇮🇳

    • @alnashraansari8484
      @alnashraansari8484 6 years ago +3

      Akash soni me also from INDIA ✌✌✌

    • @S0S0ant1
      @S0S0ant1 6 years ago +1

      And what’s so informative about it that we don’t know already!!??

    • @akashsoni4418
      @akashsoni4418 6 years ago

      AI and machine learning will be the future of the world. The video gives very nice examples of how we can use ML in different fields; the guy also pointed out that the algorithms need to change somewhere, since they are still inaccurate by some measures. All of this should be understood to be a data scientist or an ML lover.

  • @johncooper9727
    @johncooper9727 2 years ago

    Stop petty crime felonies. No victim, no crime!

  • @er.piyushpandeytrainerb.te4661
    @er.piyushpandeytrainerb.te4661 6 years ago

    So Great

  • @reyngirl6404
    @reyngirl6404 6 years ago +1

    Very Good ❤

  • @JP-uk9uc
    @JP-uk9uc 2 years ago

    “Therefore, COME OUT FROM THEIR MIDST AND BE SEPARATE,” says the Lord. “AND DO NOT TOUCH WHAT IS UNCLEAN; And I will welcome you. “And I will be a father to you, And you shall be sons and daughters to Me,” Says the Lord Almighty.

  • @milesbrown1889
    @milesbrown1889 4 years ago +1

    Simply put, AI algorithms need to be updated.

  • @karenktq
    @karenktq 6 years ago

    Came here because I'm very much interested in criminal justice.

  • @josh-richards
    @josh-richards 3 years ago

    great vid

  • @jackiesifuentes1359
    @jackiesifuentes1359 6 years ago

    Just talked about this today

  • @alphastrength3402
    @alphastrength3402 6 years ago

    Have you ever thought that :2+2=4 is the end of the world

  • @uybabayun
    @uybabayun 6 years ago +1

    That's an oversimplification of what ML can do. If you feed in just two classifiers, of course it will generate the result that any Average Joe can guess.

    • @tiffanysanchez3181
      @tiffanysanchez3181 4 years ago +1

      They didn't feed in just two classifiers, they fed it many classifiers, but found that the algorithm only needed those two classifiers to get that rate of accuracy (which happens to be the same rate of accuracy of any average Joe on the internet), and was using those two most heavily in order to make its predictions.
      The problem that he is outlining here is that the algorithm is being fed biased data, because it is data that is coming from a biased society. That means that the predictions will, of course, also be biased. But part of the problem is that it is difficult to really start solving these problems because many people still believe that data can't be biased, that numbers can't be biased, and so obviously algorithms can't be biased. And those incredibly misguided views need to be addressed; because before you can fix a problem, you need to understand that there IS a problem
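To make the "only two classifiers" point concrete: a rule built on just age and number of priors can be written in a few lines. The cutoffs and case records below are invented for illustration; the study the talk describes fit a classifier to real data rather than hand-picking thresholds like this.

```python
# Sketch of a two-feature risk rule (age, prior count). Thresholds and
# records are hypothetical, not from the talk's study.

def predict_reoffend(age, priors, age_cut=30, priors_cut=3):
    """Flag as high risk if many priors, or young with at least one prior."""
    return priors >= priors_cut or (age < age_cut and priors >= 1)

# Hypothetical (age, priors, actually_reoffended) records.
cases = [
    (22, 2, True), (45, 0, False), (27, 1, True), (60, 4, True),
    (35, 1, False), (19, 0, False), (51, 5, False), (29, 3, True),
]
hits = sum(predict_reoffend(a, p) == y for a, p, y in cases)
print(f"accuracy = {hits}/{len(cases)}")
```

That something this simple can rival a commercial black-box tool is exactly why the opacity of the real systems is hard to justify.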

    • @uybabayun
      @uybabayun 4 years ago +1

      I agree 👍

  • @512upload
    @512upload 6 years ago

    Great talk! Quick note: the automatically generated subtitles have, among others, this huge inaccuracy - "GDP our" where it should read "GDPR".

  • @Rahul-co3ie
    @Rahul-co3ie 6 years ago +2

    You must have used SVM or Ensembles!! Not the classic linear models...

  • @mindset2billions663
    @mindset2billions663 5 years ago +2

    So with that being said, then a jury of one's "peers" is just as biased

  • @optimusprimer4392
    @optimusprimer4392 2 years ago

    This is about imprisonment, not rehabilitation. If they cover every angle with the law, that means everybody is always under government scrutiny, especially minority groups. New boss, same as the old boss.

  • @reyngirl6404
    @reyngirl6404 6 years ago +1

    Like like like 🌷

  • @shivasirons6159
    @shivasirons6159 3 years ago

    I'm against using A.I. for crime, but I'm curious: if you remove the pot bust, what will you get? I'm always going to say the guy with 15 arrests will commit a crime before the guy who's never been arrested. Maybe he never got caught, but still, I'd have to go with it.

  • @shivasirons6159
    @shivasirons6159 3 years ago

    10:30. If you're 70 years old and only have 1 arrest, I'd say you are less likely to reoffend than the 25-year-old with 9 arrests.

  • @shivasirons6159
    @shivasirons6159 3 years ago

    5:20. You know no such thing; you know if they GOT CAUGHT reoffending.

  • @shivasirons6159
    @shivasirons6159 3 years ago

    1:45. You said it was to make bail decisions, not whether you'd be arrested again.

  • @hannahhavana1477
    @hannahhavana1477 3 years ago +1

    I'm sorry, but this doesn't make sense to me! With neuroweaponry, the System can do far-out sci-fi things like implant false memories; a precrime algorithm should be soooo easy to set right! 🤷‍♀️
    Unless a profitable and awfully convenient model is only being ...amplified 🙄 Corrupt systems now go high tech, huh?
    Why wouldn't they?? Gotta justify a broken System. Scaled up to the world!

  • @Thefrankhere
    @Thefrankhere 3 years ago

    Random people on the internet may not know the race, but the judge will.

  • @arlanbergoust
    @arlanbergoust 2 years ago

    I thought it was cool. It would be even more so if someone were able to superimpose the detention teacher from The Breakfast Club as the speaker.

  • @zeynabstars2117
    @zeynabstars2117 6 years ago

    Very beautiful ...

  • @AMEZINTECHNICAL
    @AMEZINTECHNICAL 6 years ago +1

    🤔think

  • @harrisonlowe300
    @harrisonlowe300 3 years ago

    How much did they get paid to do the survey?

  • @CMoore8539
    @CMoore8539 6 years ago +6

    Artificial Intelligence is a Trip!!! One day, Soon, AI will be running Everything. That should be very interesting...

    • @chisathot750
      @chisathot750 6 years ago +2

      Why would anyone let an AI do that?

    • @CMoore8539
      @CMoore8539 6 years ago

      arsenic sore, Perhaps AL will be used much more by people. I think it could be helpful, provided that it is monitored closely.

    • @S0S0ant1
      @S0S0ant1 6 years ago +1

      Cindy Moore AL?

    • @supportnepal2310
      @supportnepal2310 6 years ago +1

      great sir

    • @CMoore8539
      @CMoore8539 6 years ago +1

      First Dawn, I meant AI.😊

  • @agustasister5624
    @agustasister5624 6 years ago +1

    I doubt these stats TOTALLY

  • @jameskonzek8892
    @jameskonzek8892 3 years ago

    I bet insurance companies use algorithms like this to some extent.

    • @fatos9832
      @fatos9832 3 years ago +1

      Daniel Kahneman's co-authored book, Noise, explains it very well, and it varies very widely.

    • @optimusprimer4392
      @optimusprimer4392 2 years ago +1

      No, but they can get your driver's report, which also involves the police, which could also involve a cooperative release of criminal intent, making your insurance agent deny you or hike your rate up so far. Highway robbery.

  • @rainieday9474
    @rainieday9474 1 year ago

    The way they know the color of their skin is by their names

  • @seancloser
    @seancloser 6 years ago

    I hate how people with small brains have to quantify everything so that they cannot make a wrong decision. Come on, you're a human; that's why you're better than that. Don't be lazy, and think.

  • @shivasirons6159
    @shivasirons6159 3 years ago

    15x more arrests for weed will definitely skew it; what if you removed the marijuana?

  • @bxnjxmxn2942
    @bxnjxmxn2942 4 years ago +5

    Where my LDers at lol

  • @reyngirl6404
    @reyngirl6404 6 years ago +7

    Hi, I am Turkish 🇹🇷

    • @SPS2684
      @SPS2684 6 years ago

      allllahu akbaaar *boom* ...sad...sad religion

    • @reyngirl6404
      @reyngirl6404 6 years ago

      @FOOD CHANNEL what's with the woman in the blue t-shirt 😂😂

    • @reyngirl6404
      @reyngirl6404 6 years ago

      @@SPS2684 what

    • @huseyinabac9651
      @huseyinabac9651 6 years ago

      Man, these guys thought we were a Middle Eastern country and recited the takbir

    • @faiqcreates
      @faiqcreates 6 years ago

      @@SPS2684 People like you are the reason we hate the West. You have no idea what's going on around you, but you believe whatever the media says.

  • @rubenpartono
    @rubenpartono 3 years ago

    Is the simplification presented here legit?

  • @catmeow9362
    @catmeow9362 6 years ago

    Dedsec likes this

  • @najmaabdalle5895
    @najmaabdalle5895 6 years ago

    Today I'm so early wow

  • @reyngirl6404
    @reyngirl6404 6 years ago +1

    Like for no reason 👍

  • @shariecebrewster5962
    @shariecebrewster5962 1 year ago

    I am here for class

  • @larrywilliams2386
    @larrywilliams2386 6 years ago +1

    Corrupt data in, corrupt technology out.

  • @alptekinserdenak2263
    @alptekinserdenak2263 6 years ago +1

    Watch Dogs 2 already?

  • @shivasirons6159
    @shivasirons6159 3 years ago

    Is the race put into the data?

  • @unleashingpotential-psycho9433
    @unleashingpotential-psycho9433 6 years ago +3

    I hope the police can figure out a formula to find out who will commit crimes before it happens.

    • @jacklonghearse9821
      @jacklonghearse9821 6 years ago +2

      That can be potentially abused.

    • @somethingweirds3375
      @somethingweirds3375 6 years ago +7

      You should probably read 1984, Fahrenheit 451, Brave New World and Animal Farm. We don't want thought crime to become a thing, we really, *really* don't. It's already bad enough that intent to commit a crime is in itself a crime, the law need not go further than it already has.

    • @DarthErdmaennchen23
      @DarthErdmaennchen23 6 years ago

      Well, there are actually people trying to do this, so...

    • @jackiesifuentes1359
      @jackiesifuentes1359 6 years ago

      We try but things change and longitudinal studies are crazy expensive

    • @achilles1373
      @achilles1373 6 years ago +2

      A documentary I watched showed a govt agency that is trying to use parents' behavior patterns to predict children's behavior and then will force an abortion. Be careful what you wish for.

  • @lynxions7687
    @lynxions7687 6 years ago

    Hi

  • @ConnorEllisMusic
    @ConnorEllisMusic 6 years ago +1

    Watch_Dogs?

  • @zbanch123
    @zbanch123 6 years ago

    LOL! You HIDE information (race) and get an algorithm with worse prediction. What's the problem?! Just unhide it :)

  • @stefanmironov6405
    @stefanmironov6405 2 years ago

    ok fed

  • @talha1579
    @talha1579 6 years ago

    First to comment

  • @hilaland7
    @hilaland7 6 years ago

    Translate it into Turkish

  • @weeyumbyron4027
    @weeyumbyron4027 4 years ago

    These are so boring

    • @arlanbergoust
      @arlanbergoust 2 years ago

      I thought it was cool. It would be even more so if someone were able to superimpose the detention teacher from The Breakfast Club as the speaker.