How To Make Algorithms Fairer | Algorithmic Bias and Fairness

  • Published: 11 Sep 2024

Comments • 50

  • @JordanHarrod
    @JordanHarrod  4 years ago +10

    Sorry, some of the links got cut off because I hit the character limit for the description box 😅. In particular, here’s the link to the letter against the automated racial profiling paper: medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16

    • @CriminalJusticeExpert
      @CriminalJusticeExpert 4 years ago

      Excuse me, but you're soooo gorgeous!! 💔 I wish I met you in person. 😞

    • @valmach1
      @valmach1 3 years ago

      The first & colossal error by the scientific community is the use of 'race'; no such thing exists. It is clearly evident that nurture, not nature, is the premier reason that 'racism' exists in algorithmic biases. Cultural & complex natural weights should be applied as the differentiating factors, not one's hue. If the fundamentals are all wrong, it is all a house of cards. Hate in algorithms is a feature, not a bug. Your assessment is frankly facile, and surprisingly naive. You are merely participating in the 'Big Lie'.

  • @nacoran
    @nacoran 4 years ago +7

    Finally, a use for philosophy majors! :)
    It's funny, as I watched this I was thinking of Cardinal Newman's letters on liberal education on how to make humans more human... and it still comes down to reading more and from a more diverse set of sources.
    Before she retired, my mother worked for the NYS Dept. of Education. She was one of the people in charge of trying to remove bias from the Regents exams. When you were talking about AIs having a hard time recognizing objects from different cultures, it reminded me a lot of her talking about how tests could be racially biased by, for example, using objects that people of one culture were more likely to be familiar with than people from another culture. Both people might be able to work out the math, but if one group also has to try to remember names for things they aren't familiar with, they end up at a disadvantage.
    One of the weird things that fascinates me about all of this... I was talking with her once about deconstructionism, and she was not a fan of literary-theory approaches to analysing things. But when I made the connection to meta-analysis, stepping outside of the data set and looking at the context of the data set in deconstruction, and how that related to some of her concepts, all of a sudden it became apparent that the same sets of problems were being dealt with in all sorts of fields, from English Lit to Education theory to philosophy to Sociology (to computer AI learning). But each field can get so caught up in its own jargon that they don't realize they are talking about the same thing as the people at the next table over.
    I'm really enjoying these. My math is way too rusty to dive into the deep end, but you do a great job explaining it. One of my friends was interviewed for an archeology site recently. He is very much into computer modeling; so, for instance, they put variables into a model and see how changing a single variable, like how far people could reasonably travel for water (better buckets, pipes, or whatever), changed the shape of their communities, and then test that against the data, even using it to get ideas of where they might find undiscovered archeology sites. I wish I'd had better math and computer programming teachers. They discouraged me so much that I never got into the advanced stuff, but it's fascinating what can be done with it.
    Edit- After watching this video I clicked on a Steve Lehto video. He's a lawyer who covers a mix of issues, particularly focusing on his home state of Michigan. His video today was about a story out of Detroit about a man who was arrested because his face came up in a facial recognition sweep for shoplifting. It was, predictably, not him.
    ruclips.net/video/gGFb4y2-g5c/видео.html

  • @teganthompson2811
    @teganthompson2811 4 years ago +16

    Congrats on getting sponsored!

    • @JordanHarrod
      @JordanHarrod  4 years ago +2

      Thank you!

    • @SupaCam1
      @SupaCam1 4 years ago +2

      @@JordanHarrod Here because of Vsauce and am loving it!

  • @robertcomeau7099
    @robertcomeau7099 4 years ago +6

    Great video! Now is absolutely the right time to address bias in AI! I will share this with high school seniors in the coming year. In my tech class we experiment with facial recognition, and students see firsthand how recognition failures are biased by race and gender. Your video explains the context and the technical problems, and offers hope for solutions.

  • @dilettagogliaunipi6517
    @dilettagogliaunipi6517 4 years ago +5

    Great video! I'm actually writing a paper about algorithmic bias and this was SO helpful!

  • @legionnre2181
    @legionnre2181 4 years ago +5

    Rockin’ the Marques Brownlee merch

    • @JordanHarrod
      @JordanHarrod  4 years ago +2

      One of the two pieces of YouTuber merch that I own. 😅

  • @bh-td1bg
    @bh-td1bg 4 years ago +5

    Dope shirt! You should've started off by saying "Hey, what's up, JBHHD here?........" Sorry, didn't realize how corny that sounded until I finished typing haha. Congrats on the sponsorship! I hope you're eventually as big as he is!

  • @ThaFedejp
    @ThaFedejp 4 years ago +5

    Here to congratulate you on your partnership with the Vsauce guys!

  • @teo7822
    @teo7822 4 years ago +6

    Very interesting video. I caught your channel a while ago, and all the videos I've seen have felt fairly informative. In this case, the part about what should and shouldn't be pursued is quite fascinating and quite scary (both in the sense of "we should stop the unethical research" but also "if we do start doing so, where will we stop"; perilous times, but quite exciting). Thank you for the case you presented; I'm going to read it ASAP. It really felt like Minority Report territory.
    One "personal question" (I'm not sure whether you have addressed this on your channel): what made you pursue this path? I'm currently deciding whether to go for my Master's in CS, and there are too many exciting things going on.
    Did you have a concrete idea of the field before jumping in? My greatest fear is that the warped reality of a bachelor's (Europe, very theoretical) doesn't really give you a concrete idea of what each field has to offer, and we are stuck with buzzwords like "Quantum seems super", "DL is the big thing", and "IoT sounds exciting", which aren't really descriptive of what is out there.
    Thanks a ton for your content, best of luck!

    • @JordanHarrod
      @JordanHarrod  4 years ago +1

      I've talked about this in an older video (ruclips.net/video/BWBea5J-ZJk/видео.html) but the short version is that I ended up in ML somewhat by accident - I'd planned to do tissue engineering and it turned out that I liked the data analysis better. I definitely started on the applied side of CS and have more recently moved into the theoretical side since starting graduate school, so I'd say I'm still learning a lot about the scope/realities of the field as I go (and this channel actually helps with that - I have to make sure I understand something before I can make a video about it).

  • @DustinGunnells
    @DustinGunnells 4 years ago +3

    OMFG! You are unbelievably thorough! Geez!

  • @twothreebravo
    @twothreebravo 4 years ago +3

    An algorithm to determine whether someone is a criminal by how their face looks. If that doesn't sound like a shortcut to a miscarriage of justice, I don't know what does.

  • @analisamelojete1966
    @analisamelojete1966 3 years ago +1

    I think this is an issue with many people getting into AI knowing nothing, or very little, about inference models. For example, before the neural-net hype, researchers (at least some of them) were very careful regarding model bias and sample selection.
    Now I believe people just focus on getting data (whatever it is) and putting it into a model to get some results. They often forget that statistical models are GIGO models:
    "garbage in, garbage out".
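
    As a minimal sketch of that garbage-in-garbage-out point (entirely synthetic data; the two groups, the 95/5 sampling split, and the logistic model are illustrative assumptions, not anything from the video or this thread): a model trained on a sample that under-represents one group can score well on the majority while quietly failing the minority.

    ```python
    # Synthetic illustration: sample-selection bias in, biased predictions out.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two groups whose feature/label relationship differs by `shift`.
        X = rng.normal(shift, 1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
        return X, y

    Xa, ya = make_group(5000, shift=0.0)  # group A
    Xb, yb = make_group(5000, shift=2.0)  # group B

    # "Garbage in": the training sample is ~95% group A, ~5% group B.
    idx_b = rng.choice(len(Xb), size=250, replace=False)
    X_train = np.vstack([Xa, Xb[idx_b]])
    y_train = np.concatenate([ya, yb[idx_b]])

    model = LogisticRegression().fit(X_train, y_train)

    # "Garbage out": the aggregate score hides a large per-group gap.
    print("accuracy on group A:", model.score(Xa, ya))
    print("accuracy on group B:", model.score(Xb, yb))
    ```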

  • @saadatmahmud9370
    @saadatmahmud9370 3 years ago

    Very well articulated!

  • @hemalshah1410
    @hemalshah1410 2 years ago

    I am watching this video considerably late, but I sincerely appreciate your content, Jordan. Concise yet thorough!!

  • @somashreechakraborty2582
    @somashreechakraborty2582 3 months ago

    Hi. Can you explain in depth, on a technical basis, how an algorithm is made bias-, lateral-, and rotation-invariant?

  • @alanjenkins1508
    @alanjenkins1508 3 years ago +1

    Of course there is both real and learned bias, and you don't want to get rid of the former. For instance, men and women on average do not have the same interests, but biasing your model to believe this could harm both groups.

  • @lawrencebarras1655
    @lawrencebarras1655 4 years ago +3

    This is a very good topic, and data science has to be prepared to live up to its ethical responsibility. One thing that has to be considered is that the world is not a fair process; it generates unfair data. That means even the most honest collection of data and careful model derivation could still lead to unjust or illegal practices if not tested for and corrected. It isn't just error or bias at root. Sometimes the ground-truth data reflects a condition that has to be corrected through other intervention. Part of the ethical duty is to be vigilant not just about error, but about the just application of data-driven decisions.

    • @JordanHarrod
      @JordanHarrod  4 years ago +1

      Absolutely! One of the authors of some of the papers I cite has a great lecture on this topic as it relates to model development, here: sites.google.com/view/fatecv-tutorial/schedule?authuser=0

    • @MrMithfin
      @MrMithfin 4 years ago

      But is it fair to humanity as a whole to delay and withhold progress just because the dataset you have is imbalanced, when collecting a balanced one would take years and millions? The logical way would be to just openly label your product as 'for white people only'. If you develop, let's say, a skin disease screening app, and you have data from some Eastern European hospitals, and the app works great on white people but does not work at all on other skin colours, is it fair to not release your app and deny a possibly lifesaving tool to millions of human beings around the world for years, until you collect a diverse dataset?

  • @CarlNeal
    @CarlNeal 4 years ago

    This was very informative! Thank you for the analysis and clear explanation of the subject matter.

  • @stefanwezel2744
    @stefanwezel2744 4 years ago +2

    Thank you for the great video! What do you think about the role of interpretability/explainability of models for fairer machine learning? In order for a model to be really fair, does it have to be interpretable/explainable?

    • @JordanHarrod
      @JordanHarrod  4 years ago +2

      Not necessarily - if you can rigorously test the model enough to prove that it has achieved a permissible level of fairness (by whatever definition you're using, ideally something that encompasses both the effects on the target task as well as off-target effects) and accuracy, I don't think it necessarily needs to be interpretable. However, testing at that level of rigor is often challenging because you might not know to ask a particular question that would reveal some bias.
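
      As a hedged sketch of what "rigorously testing against a fairness definition" can look like, here is one possible check using demographic parity as the definition (my illustrative choice; the predictions, group labels, and the 0.3 tolerance are hypothetical):

      ```python
      # Illustrative fairness check: demographic parity gap on held-out data.
      import numpy as np

      def demographic_parity_gap(y_pred, group):
          """Absolute difference in positive-prediction rates between groups."""
          return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

      # Hypothetical 0/1 model predictions and group memberships (held-out set).
      y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])
      group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

      gap = demographic_parity_gap(y_pred, group)
      print(f"demographic parity gap: {gap:.2f}")  # 0.25 for this toy data
      assert gap <= 0.3, "model exceeds the permitted unfairness budget"
      ```

      As the reply notes, a check like this only catches the biases you thought to measure; passing one metric is not a proof of fairness.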

  • @vipulkaushal1597
    @vipulkaushal1597 4 years ago

    So, this means I am right to be in doubt while using simple ML models... are the likes of SVM or random forest fair? Data can be balanced using algorithms that balance it, but then it no longer represents the actual scene the data is supposed to capture.
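
    For what it's worth, one common form of the algorithmic balancing mentioned above is reweighting rather than resampling, which leaves the data itself untouched. A minimal sketch with scikit-learn (the synthetic dataset and the random-forest choice are illustrative assumptions):

    ```python
    # Sketch: handle class imbalance by reweighting the loss, not the data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # 95/5 class imbalance, as might arise from a skewed collection process.
    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # class_weight='balanced' upweights the minority class during training,
    # so the evaluation data still reflects the real class proportions.
    clf = RandomForestClassifier(class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    ```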

  • @susanne5803
    @susanne5803 4 years ago +1

    This was very interesting!
    I just wonder why my comment on your last video was erased. I asked exactly this: whether and how AI could help solve skin colour bias.
    (my comment on your last video:)
    "Fascinating though depressing content excellently delivered. Thank you very much!
    Doesn't AI also help in uncovering biases?
    I remember, when I was young, a lot of (at that time still mostly white, mostly male) scientists tried to prove that women were indeed the way the prejudices assumed them to be.
    The fun thing was that they often ended up debunking themselves. For example, trying to prove that women were less intelligent as a whole than men, they ended up confirming that individual differences are far more relevant, and that sex-related differences in intelligence only showed sex bias during education.
    Isn't that the fun part of the present time: there exist so many people with scientific knowledge and scientific thinking. It's harder to get by with biased studies. The internet spreads the research around the globe, and some people will always do the math again and expose shoddy research!
    I would be very interested in how AI is used to fight bias, injustice and prejudice."

    • @JordanHarrod
      @JordanHarrod  4 years ago +4

      I don't see a comment from you on my last video - might be a YT glitch. And I'd say that it can help in uncovering bias if that is your intention - more often than not, people aren't looking for bias in the first place, so they don't see it.

    • @susanne5803
      @susanne5803 4 years ago

      @@JordanHarrod Thank you! I was worried I had said something wrong or had misunderstood something.
      I will look into your earlier videos, since I find this very interesting. See you soon!

  • @bigsarge2085
    @bigsarge2085 2 years ago +1

    👍👍

  • @amoghkulkarni2239
    @amoghkulkarni2239 4 years ago

    Epic!

  • @aigen-journey
    @aigen-journey 4 years ago +2

    While I don't find a single positive use case for the 'criminal faces' model/paper, I would prefer it being criticised openly on merits, not retracted. Because people will keep trying ...

    • @JordanHarrod
      @JordanHarrod  4 years ago +2

      It’s definitely one of those papers that I wish had come out in pre-print and been criticized (or stopped) via the external auditing approach, but I also think it was important that the ML community took a solid stance against the paper and what it stands for.

    • @aigen-journey
      @aigen-journey 4 years ago +1

      @@JordanHarrod It's definitely better that the community is proactive, not reactive. Still, I think the scientific method, peer review, etc. should be kept in place. A good example of similarly problematic research is Holocaust denial, as there are some fringe historians trying to at least put into question the scale of the atrocities. I personally think that all of those claims should always be rebutted and put to shame with facts, not silenced, as silencing will always end up producing a mix of conspiracy theories and martyrdom.

  • @SuperLLL
    @SuperLLL 4 years ago

    You got yourself a new subscriber! #InquisitiveFellowship

  • @gepisar
    @gepisar 4 years ago

    Nice vid. Raises very important questions and spreads awareness. *cough cough, Clearview* I caught that cheeky reference!! And I saw this video in the AIDL FB group... the comments!! From members of a group focused on LEARNING(!)... ho hum. But that gave rise to a thought... and an experiment... How about re-recording this video in a "staged" setting, with a blue-eyed white man in a lab coat saying exactly the same thing, then posting in AIDL to see if the responses change. Hmm...

  • @olliegrant5375
    @olliegrant5375 4 years ago

    Nice shirt

  • @radcow
    @radcow 4 years ago +2

    Love you, love your channel. However, no social justice in AI please; maybe in the future, but now isn't the time.

    • @JordanHarrod
      @JordanHarrod  4 years ago +17

      I'd say now is actually the perfect time, as well as any other time. Algorithms affect people; there's no way to disentangle the two topics.

    • @xyldkefyi
      @xyldkefyi 4 years ago +2

      If there is no social justice in AI, there is social injustice.
      There are a lot of cases where a biased AI produces unjust results (Jordan provides many examples in this and past videos).
      If you want AI that is devoid of politics and just concerned with science and accurate observations, you need to examine your biases.
      If I asked a bunch of Democrats who they'll vote for in November and told you, based on their answers, that Trump won't take a single state in the 2020 election, you (rightfully) wouldn't believe me.
      In the same vein, we shouldn't believe an AI that has only looked at a bunch of white men when it tells us what a face looks like.

    • @xyldkefyi
      @xyldkefyi 4 years ago

      Also @radcow, when do you think the time for social justice in AI will be?

    • @sandwich2473
      @sandwich2473 4 years ago +2

      @@JordanHarrod The perfect time to discuss anything is always now.
      If you refrain from talking about an issue, the masses will remain ignorant of those affected by it until it is discussed. Regardless, who decides when the time has come to discuss a topic? The world is a fast-moving place; so much goes on day to day that if no one makes the time to discuss something, it won't get discussed.

    • @gepisar
      @gepisar 4 years ago +2

      @@JordanHarrod yeah, in reference to the original comment: the best time to stop a dictator, history has shown, is before they get to power. NOW is the time.

  • @dosomething3
    @dosomething3 4 years ago

    As a white person who keeps getting discriminated against, I can certainly understand racism.