How AI Preserves Systemic Racism

  • Published: 21 Aug 2024
  • ➡️ Wanna watch this video without ads and see exclusive content? Go to nebula.tv/jord... 👀
    Systemic racism, and the institutions built by it, have existed for far longer than AI has. As the newest tool available to build systems, how has AI preserved (and chipped away at) systemic racism?
    Recommend Reading on Algorithms and Racism (Affiliate Links)
    Algorithms of Oppression - bookshop.org/a...
    Race After Technology - bookshop.org/a...
    Resources for Black History - bookshop.org/l...
    Twitter - / jordanbharrod
    Instagram - / jordanbharrod
    Sources:
    en.wikipedia.o...
    faculty.haas.be...
    www.technology...
    www.pnas.org/c...
    www.nature.com...
    www.acponline....
    www.ncbi.nlm.n...
    www.brookings....
    onlinelibrary....
    pdfs.semantics...
    ir.lawnet.ford...
    www.businessin...
    hbr.org/2019/1...
    www.nature.com...
    www.nature.com...
    www.nap.edu/ca...

Comments • 115

  • @3picHamster · 4 years ago · +32

    My professor showed us an example of machine learning gone wrong, where the network was supposed to differentiate between wolves and dogs. In some cases it was very accurate, and in others it failed badly. As it turned out, the network wasn't looking at the animal to decide if it was a wolf or a dog, but at the background, checking for snow. If there was snow in the background, even a pug got classified as a wolf.
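
    The failure mode described here (often called shortcut learning; this anecdote resembles the husky/wolf snow experiment from the LIME interpretability paper) is easy to reproduce. Below is a minimal sketch with synthetic "images" — all data and numbers are made up for illustration — showing a classifier that scores well in training yet calls a dog in snow a wolf, because it learned the background rather than the animal:

```python
# Synthetic demonstration of shortcut learning: labels correlate perfectly
# with the background during training, so the classifier can score well
# without ever looking at the animal. All data and names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_image(is_wolf, snow):
    bg = (1.0 if snow else 0.2) + rng.normal(0, 0.1, 8)         # clean background cue
    animal = (0.8 if is_wolf else 0.5) + rng.normal(0, 0.5, 8)  # noisy animal cue
    return np.concatenate([bg, animal])

# Training set: every wolf is photographed on snow, every dog on grass.
labels = [True] * 200 + [False] * 200
X = np.array([make_image(w, snow=w) for w in labels])
y = np.array(labels, dtype=int)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# A dog standing in snow: the background cue wins.
pug_in_snow = make_image(is_wolf=False, snow=True)
print(clf.predict([pug_in_snow]))  # -> [1], i.e. "wolf"
# The learned background weights dwarf the animal weights:
print(abs(clf.coef_[0][:8]).mean(), abs(clf.coef_[0][8:]).mean())
```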

  • @NinjaMomtastic · 4 years ago · +36

    I would love to see a video deep-diving into health care bias as it currently stands, and how AI could change it or make it worse. Your stuff is always awesome.

  • @zhubajie6940 · 4 years ago · +29

    Another great example of: no matter how fancy the math, Garbage In, Garbage Out.

  • @PureBeloved91 · 4 years ago · +18

    This was a good intro. I'll be following you on Twitter as well. I'm currently on my PhD journey also.

  • @nacoran · 4 years ago · +14

    So, when an algorithm isn't doing things fairly, how do you correct for that? Do you program it so it can't access certain fields of data, weight the data until it comes out with a balanced result, start over from scratch, feed it more data?
    I'm trying to remember the details... my mother was telling me that when she first applied for a car loan, they asked really invasive questions like "Are you on birth control?" She wasn't married at the time, and they were worried that if she got pregnant she wouldn't be able to pay back the loan, making her a bad risk. Sort of the female version of redlining.
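
    On the "weight data until it comes out balanced" idea: one published version of exactly that is reweighing (Kamiran & Calders, 2012), which weights each (group, label) combination so the protected attribute and the outcome become statistically independent in the training data. A minimal sketch, assuming a pandas DataFrame with hypothetical group and label columns:

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that group and label
    are independent in the weighted data (Kamiran & Calders, 2012)."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def w(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(w, axis=1)

# Hypothetical usage: pass the weights to any estimator that accepts them.
# df["w"] = reweighing_weights(df, "group", "label")
# model.fit(X, y, sample_weight=df["w"])
```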

  • @TyroneLT · 4 years ago · +11

    New subscriber here! This was excellent, well put together and timely. Thank you especially for putting all those sources in the description. 👍

  • @MiraPijselman · 4 years ago · +6

    A great overview. Love your channel! I’m pursuing similar studies at Oxford in the fall and I always find your videos insightful, informative, and thought provoking - keep up the great work 🎉

  • @cortezforever · 4 years ago · +4

    Jordan, 3. and 4.a. apply to me. I have been in a dispute with my credit card company since December 2019 over an issue that started in 2016. I pay in full every month with low utilization of around 30%, yet the credit card company sometimes lowers my credit score by 200 points or more, which prevents me from obtaining a better card.
    I began documenting the credit report patterns every month, and I noticed that whenever my score is around 790 and due to reach 850 to 999, with offers of credit cards at 2.2% APR, my credit score is misreported for no reason, lowering it to between 488 and 630, which is extremely poor.
    I wrote the first complaint letter; the credit card company investigated the issue and did not conclude anything or correspond appropriately. I went to the second step, which is the financial ombudsman, and it has taken them 6 months to begin an investigation; I am waiting for a final response.
    If I am not fully compensated by the credit card company, the third step will be legal action. I determined that an A.I. algorithm is being used as a form of discrimination. The scary thing is that the staff are complete sociopaths, because even with all the evidence of direct debits, evidence from all credit report agencies, screenshots of my credit score every month, and low utilization, they are still not acknowledging the problem in its totality. Last month they finally admitted there has been an issue, but they have tried to isolate it to the most recent report error I complained about, offering me $75 in compensation while ignoring the other 4 years of errors, which are estimated at $2250, not including interest and damages, which would come to at least $5500. It is evident they are trying to cover up the situation hoping I will just go away, but I have no choice other than to see things through.
    Please speak on the subjects you mentioned, as they are almost always neglected or hidden from the mainstream within a report.

  • @wtaylorjr2001 · 4 years ago · +3

    Thank you for this. I am a machine learning entrepreneur, and one of the most boring and tedious jobs is dataset preparation. You list where we got it wrong, but dataset cleaning must be done with respect to racial biases. This is important to me.
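
    A first concrete step toward the dataset cleaning described above is a representation audit before any training happens. A minimal sketch (the column names are hypothetical):

```python
import pandas as pd

def audit(df, group_col, label_col):
    """Per-group sample counts and positive-label rates. Large gaps here
    are a red flag that a model trained on this data will inherit them."""
    out = df.groupby(group_col)[label_col].agg(n="size", positive_rate="mean")
    out["share_of_data"] = out["n"] / out["n"].sum()
    return out

# Hypothetical usage, with made-up column names:
# print(audit(df, group_col="race", label_col="approved"))
```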

  • @TaliaOutwrong · 4 years ago · +5

    This is such great coverage of this topic. I really appreciate your presentation style, very accessible!
    I would love to know more about studies and recommendations attempting to correct or address bias in algorithms trained on prejudiced data. Are there any groups that specialise in that sort of work? Are they purely based in computer science, or do they collaborate with sociologists, Ebonics scholars, accessibility researchers, etc.?

  • @gracez6663 · 4 years ago · +5

    Excellent video!! Thank you for those links in the description!

  • @gever · 4 years ago · +7

    Eye-opening and riveting, please make more.

  • @djohnjimmy · 4 years ago · +1

    I've been trying to find a good document/video online to explain this better to my friends without sounding too technical. This is a problem that most people don't even know exists, and it could potentially impact future generations in ways that are going to be very, very difficult to reverse once these biases are set in the systems. These biases would not be transparent to non-technical people, and they would be really difficult to show/prove in court.
    Thank you for a good introduction to this very important issue. You've earned my trust and subscription. Looking forward to more such amazing videos. Cheers!

  • @JaneTheBrain. · 4 years ago · +7

    I would be interested in hearing about biases against Black women, since they have to deal with both racism and sexism.

    • @JordanHarrod · 4 years ago · +3

      That would definitely be interesting. I did a video on bias against women (ruclips.net/video/x385ZDaGb34/видео.html) a few months ago but haven't focused on Black women specifically.

  • @vmp916 · 4 years ago · +2

    Thankfully, a lot more has been written about this issue lately. I have often recommended Weapons of Math Destruction by Cathy O’Neil. In it, O’Neil talks about what she calls “algorithmic WMDs,” which reinforce the (in)visible systemic biases in hiring, education, law enforcement, etc. The general conclusion was that algorithms are not immune to socioeconomic (often racial) bias but are specially equipped to exacerbate these issues. It just goes to show that all of us in the industry need to be aware of this, and there should be lots of discussion on how to prevent putting bias in your code. Anyways, great video!

  • @SophsNotes · 4 years ago · +4

    Really interesting! I’d very much like to see the rest of the iceberg in future videos. Also, great jumper hahah.

    • @JordanHarrod · 4 years ago · +1

      Thanks! And I think I got it from Stitch Fix a while back? The brand is Harper Lane.

  • @Letsnotforget · 2 years ago · +1

    Computers/robots will outsmart their creators at some point, since they are constantly learning to think for themselves at a rate humans cannot fathom, so maybe they might even solve the racism issue in their own way.
    It's early days, but at some point AI will think for itself without us; that's the whole concept of AI.

  • @agustinpichardo3515 · 4 years ago · +3

    Loved your insight. Please keep making your videos.

  • @AstroSquid · 3 years ago · +3

    Does everything need to be looked at through the lens of race? Gap data does not mean racism; a culture that is different is not racist. "Systemic racism" is an abusive perspective on human behavior, or anti-human.

    • @JR-iu8yl · 3 years ago

      It would be easier to say you don't like black people because of your racial bias, rather than to sugarcoat your bigotry by playing devil's advocate.

    • @AstroSquid · 3 years ago · +2

      @JR-iu8yl I could say the exact same thing about you.

    • @AstroSquid · 3 years ago · +4

      @JR-iu8yl Light bulbs were designed to add "whiteness" (light) to a room; cameras were designed to capture "whiteness" (light). Are you going to extract a victim narrative from that? And if you do, are you not racist?

    • @JR-iu8yl · 3 years ago

      @AstroSquid Take your bigotry elsewhere, clown.

    • @AstroSquid · 3 years ago · +4

      @JR-iu8yl I know that's the best comment you've got; that's the best attack the victim narrative you've been indoctrinated with can offer.
      If the lady in the video were to explain the history of the chemistry of photography, and explain how photosensitive crystals work and how light works, then her saying "cameras were designed for white skin" is a racist statement designed to cause people to hate each other for no real reason. Your comments are proof of it: you don't even know me and you're making assumptions. That's called prejudice.

  • @LeonardoMarquesdeSouza · 4 years ago · +4

    Some scientists are worried about "black swan theory," but simple questions about the data universe get left out when we do AI. We need to study more statistics and math before using AI tools. At the least, we need to know the possible problems.
    Sometimes I use a hypothetical example: suppose your enterprise needs a study of what each gender prefers to eat at lunch, an apple or a donut. Your enterprise has 200 people, 50% men and 50% women, but you don't know or don't care. You ask a lot of men at random, but you ask only one woman, and that woman answers "apple."
    You make your beautiful charts, which show that 20% of men prefer apples and 80% of men prefer donuts. And 100% of women prefer apples.
    Curiously, from a statistics viewpoint, it's not wrong. You lied to the formulas that this one woman represents all the women in the enterprise, when in reality the sample is extremely incomplete. In a data analysis with more information, like the total number of people, we still have "100% of women eating apples," but with enormous error and low confidence, etc., etc.
    So you always need to question the data you have: how well does it represent the whole possible data universe of your study, is it relevant, is it possible to obtain? All this BEFORE starting the data extrapolations, classifications, and all the other stuff your AI tools can do.
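
    The commenter's point can be made precise with confidence intervals: a "100%" result from one respondent carries almost no information. A quick sketch using the Wilson score interval, plugged with the hypothetical survey numbers from the comment:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# 80 of 100 men prefer donuts: a fairly tight interval.
print(wilson_interval(80, 100))  # ~ (0.71, 0.87)
# 1 of 1 women prefers apples: "100%", but the interval spans most of [0, 1].
print(wilson_interval(1, 1))     # ~ (0.21, 1.00)
```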

  • @annp322 · 4 years ago · +9

    Nice summary. When I was at Amazon, a group (not mine) created a system to evaluate resumes, and trained it with resumes of employees who had been successful at Amazon, who happened to be predominantly white and Asian males. So the system picked... white and Asian men. IIRC, the group spent a year trying to wring the bias out of that system before they finally gave up and abandoned the project.

  • @stevec7081 · 2 years ago · +2

    No offence, but numbers do not lie.

  • @hindigente · 4 years ago · +6

    First time viewer here. I'm impressed by how thorough you managed to be in only 10min.
    Time to binge-watch the rest of your videos. :)

  • @TravisGilbert · 4 years ago · +2

    Yesss! Thank you for raising the big questions. I'm a pretty curious person, but I hadn't thought about this.

  • @dawn8293 · 4 years ago · +6

    I really appreciate this video. If you feel inclined, please do more race + algorithm videos

  • @AlexanderTersakian · 4 years ago · +6

    Great video on this topic, and I appreciate the inclusion of recommended reading in the description.

    • @JordanHarrod · 4 years ago · +1

      Thanks for watching! Hoping to do a video on other AI books in the near future, just need to find time to read the books :)

  • @AstroSquid · 3 years ago · +1

    The human eye can perceive about 22 magnitudes of light, so details in lit areas and shadowed areas can be noticed across extreme value ranges, like being outside in the sunlight and looking at something in shadow. Cameras, especially old ones, can maybe deal with 5 or 6 magnitudes of light, so the range of values before the colors burn out runs out a lot faster, and values outside the range turn either all white or all black. I.e., you can't take a picture outside and have both the light and shadow areas turn out clear with lots of detail, because cameras aren't nearly as good as the human eye across value ranges. To phrase it as "cameras were designed for white skin" is wrong, and contextually is a racist statement. Dark skin can get lost to an all-black value quickly; you can argue that the default settings on a camera, or the lighting setups for photography, can be better for light-to-medium skin tones, but cameras in general lose detail in values outside the magnitude range of the camera. When dealing with an absence of light, how that gets represented on a medium such as film poses the same difficulty and a technical challenge even today. There are cameras now that get close to the human eye, but camera design back in the day could never have been better for dark colors; that's just how it is.
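
    For scale, if "magnitudes" here is read as photographic stops — each stop a doubling of light, which is an assumption about the commenter's usage — the gap being described is a contrast-ratio gap:

```latex
% Contrast ratio doubles with each stop of dynamic range:
\underbrace{2^{22} \approx 4{,}200{,}000:1}_{\text{human eye (as claimed above)}}
\qquad \text{vs.} \qquad
\underbrace{2^{6} = 64:1}_{\text{old camera}}
```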

  • @qwerty_and_azerty · 4 years ago · +5

    As an AI researcher myself, I can attest to how difficult it is to deal with these issues. Say I have an idea for improving how facial recognition works. I know ahead of time that all datasets I could use are biased towards white men. So what can I do about it? I can try to set out to gather a brand new dataset that isn’t biased. But this is very difficult because there are so many hurdles. For example, it’s very expensive, it’s very time consuming, it’s hard to capture every combination of demographics (e.g. black trans woman with facial vitiligo aged 35-40 who has Down’s Syndrome), etc. Even if I managed to get the perfect dataset assembled, with millions of images of all types of people, I now have more problems. First, I’ve run out of grant funding and I’ve used up all the years of my PhD, so there is neither time nor money left for me to implement my original idea. Second, data gathering is not inherently an AI problem, so I won’t be able to publish my work in an AI conference. Data gathering by itself is generally not regarded as a scientific contribution, so it might not get published at all. Third, the AI field moves so fast that whatever idea I initially had is likely passé after all these years. Fourth, some demographic categories will have very few samples because they are rare combinations, so most existing AI algorithms will tend to ignore these categories, even if they are (marginally) present in the data. So even though I made this contribution, it won’t be recognized and will significantly hinder my career prospects. On top of that, many minority categories will still be ignored and biased against. So there’s not a great incentive to do this. One solution here would be to change scientific institutions to create the incentive, but I’ve seen no push for this or other similar issues (like lack of reproducibility).
    So what do people end up doing instead? They stick with the biased datasets and say “my algorithm could work with non-white non-male faces if I had the data but I don’t.” As a small-time researcher, I just hope that some large institution, like the government or Google or somebody, will eventually make a better dataset. But because of structural racism, that never happens. Ultimately, this would be the ideal solution, but requires significant political will that isn’t here (yet!). It also requires the next point:
    Algorithmic fairness. The other thing I could do (regardless of dataset quality) is try to use fairness constraints when training the algorithm. But fairness is an open problem that no one knows the answer to yet. Maybe eventually this could be a real possibility, but not for the time being.
    In the end, both representative datasets and fairness to protect very small minority classes are needed. One is a structural problem and one is an open research problem.
    Just goes to show how many layers there are that disincentivize algorithmic inclusivity.
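
    For readers wondering what a "fairness constraint" looks like concretely: one simple (and deliberately naive) formulation adds a demographic-parity penalty to the training loss, pushing mean predicted scores to be equal across groups. A minimal sketch, not a solved recipe — as the comment says, fairness remains an open problem:

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a naive demographic-parity penalty:
    loss = cross-entropy + lam * |mean score, group 0 - mean score, group 1|."""
    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.01, X.shape[1])
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # predicted probabilities
        grad_ce = X.T @ (p - y) / len(y)   # cross-entropy gradient
        gap = p[g0].mean() - p[g1].mean()  # demographic-parity gap
        s = p * (1 - p)                    # sigmoid derivative
        grad_gap = (X[g0] * s[g0][:, None]).mean(axis=0) \
                 - (X[g1] * s[g1][:, None]).mean(axis=0)
        w -= lr * (grad_ce + lam * np.sign(gap) * grad_gap)
    return w
```

    The penalty term is one of several competing fairness definitions (equalized odds, calibration, etc.), and they cannot all be satisfied at once, which is part of why this remains open.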

  • @cabbking · 4 years ago · +1

    Thank goodness we have young people like you coming of age. Gives me some hope.

  • @teganthompson2811 · 4 years ago · +5

    Well done, Jordan! If I understand correctly, because of systemic racism in society, our data sets are biased, and because our data sets are biased, our algorithms are biased.
    Interesting stuff! Hopefully all those programmers out there will learn about this and try to keep their own algorithms from falling victim to the same flaws. Thx for the video :)

  • @mfbe73 · 4 years ago · +1

    Hey Jordan, I've been thinking about how to describe the behavior of things like institutions by considering them to be physical embodiments of a way of thinking (like an algorithm), in the way that a zoo is a kind of living hieroglyph (representing that which it can never be).

  • @Mozart2024 · 1 month ago

    I think there is universally accepted bias against Black people, from my experience growing up in Detroit and having traveled to 30 countries. I recall my dog's reaction when he saw his first Black person: he immediately started barking! AI is wisdom.

  • @georgestavroulakis9407 · 4 years ago · +1

    AI's racism is exactly the same as people's: it grows from biased data. So, how easy is it to remove the bias from the data, when in some cases it is so subtly embedded? Very nice video and points.

  • @Jezpigott · 4 years ago · +2

    When I first discovered your channel while doing research I immediately wondered to myself what your position was on this issue considering your vocation and location. You're a very intelligent young woman and I'm quite certain I don't need to explain to you what I mean by that in light of today's social/moral landscape combined with the attention economy.
    I will be honest, I didn't think that you would hold this position but I still enjoyed your content. It is a monumental relief that you do have your intellectual and moral compass at the ready to see what things are really around you and know why they are there and how they can affect you.
    I say this because some people will lie to themselves and deny reality in order to get along and get ahead. They will sacrifice truth and morality, courage and justice for acceptance. It pleases me beyond words that you are not among them. Please continue creating great A.I. content, I hope to see you grow and do great things in your field in the future.

  • @hindigente · 4 years ago · +3

    I'm curious to know whether and how, in your opinion, AI could be implemented to actually help diminish systemic racism (as opposed to the example you cited).

    • @JordanHarrod · 4 years ago · +4

      Definitely something on the list of topics to cover, since one of the big challenges of AI fairness research is determining when something is “fair”

  • @rrrosecarbinela · 4 years ago · +1

    Thanks, Jordan. And yes, you are right. It should be American history, including Black, Native, Hispanic, and Asian contributions to the development of the country, as well as White abuse and oppression of these sectors of society. Hopefully things will change.

  • @nerdlarge4691 · 4 years ago · +6

    I would like to see the same tech companies who are publicly promoting Black Lives Matter and Juneteenth spend time and resources making sure their products don't promote systemic racism.

    • @JordanHarrod · 4 years ago · +3

      Yeah, even the tech companies that are dedicating money towards BLM issues are often still selling services/forming partnerships that promote racist institutions. The money is nice but divesting from those partnerships/services would be better.

  • @ionelmiu921 · 4 years ago · +2

    I guess it has an easy fix; just remove the race feature and re-train! But I get your point... Good luck with your PhD!
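
    A caveat on the "easy fix": removing the protected attribute often isn't enough, because other features act as proxies for it. A toy sketch with entirely synthetic data — the model never sees race, yet its scores still track it through a correlated stand-in feature (here a made-up zip_code variable):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
race = rng.integers(0, 2, n)                   # protected attribute (never shown to model)
zip_code = race * 0.9 + rng.normal(0, 0.3, n)  # proxy: strongly correlated with race
income = rng.normal(0, 1, n)
# Historical labels carry the bias: group 1 was approved less often at equal income.
approved = (income - 0.8 * race + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([zip_code, income])        # race itself is excluded
clf = LogisticRegression(max_iter=1000).fit(X, approved)
scores = clf.predict_proba(X)[:, 1]
# The score gap between groups persists via the proxy:
print(scores[race == 0].mean(), scores[race == 1].mean())
```

    This approach is sometimes called "fairness through unawareness," and it is known to fail for exactly this reason.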

  • @michaelkees7303 · 2 years ago · +1

    YouTube will work overtime oppressing this video

    • @GiganticWeen · 1 year ago

      YouTube loves the victim mentality.

  • @pingnick · 4 years ago

    Hey, I wouldn't necessarily recommend a standardized intro of a number of seconds with visuals, of the type Anton Petrov and Smart Primates etc. use, but if you like particular music and art and you think it could help viewers get in a different mindset, maybe consider it!? 🎬🎬🎬🎬🎬🎼🗽☮️💟🌈🙏🏿🙏🏽🙏🏻🙏🏼🙏🙏🏾🌏🌎🌍🚀🪐

  • @mlyle1000mobile · 2 years ago

    100%. P.S. You can't ask the algorithm, but you can "talk back".

  • @MarkisCouch_1WhatJustHappened · 2 years ago

    Great video! Thanks.

  • @basmam.2058 · 2 years ago

    Thank you so much for this video. Can you do another video on gender bias?

  • @JonathanEBoyd · 4 years ago

    Wow, very interesting video as usual. A very relevant video these days. Learning lots.

  • @ashleebritton1234 · 2 years ago

    Smart black woman. I love it

  • @nathanlopes1327 · 3 years ago

    Came for your video on Tom Scott's channel, stayed for the content.

  • @twizzlerfree · 4 years ago · +4

    The facial recognition makes so much sense now. We always see so many false convictions in the Black community and among those with darker skin tones. The facial rec AI needs to be fixed!!!

  • @ARUchannel1 · 3 years ago · +1

    It's so sad to see minorities being oppressed. I am Hispanic, but I feel like I'm always gonna be a weird person according to artificial intelligence.

    • @alexturner2116 · 3 years ago · +1

      You won't, don't worry. The first step is being aware, because then we can work to fix it.

  • @africanqueenmo · 2 years ago

    Wow great content

  • @alwayslookingnorth · 4 years ago · +2

    You're creating AMAZING content, and you're a woman of color in tech! I'm so glad I found your channel!

  • @GiganticWeen · 1 year ago · +1

    the level of delusion in this video is breathtaking.

    • @romanski5811 · 1 year ago

      What do you mean?

    • @GiganticWeen · 1 year ago

      @romanski5811 Is it that hard to decipher? This person has a deluded and heavily biased opinion on AI based on zero evidence.

  • @Grimcookie_ww · 4 years ago · +1

    yikes

  • @lovingyou2220 · 3 years ago

    Excellent. Thank you!

  • @mrfincher · 4 years ago

    It would be nice if I were more interested in AI; then I would watch more of these videos, because they are really good ;D

  • @myhamismad · 4 years ago · +2

    Yes! Please more videos on race and AI, thank you!

  • @pastorelizabethmosley3483 · 3 years ago

    I agree. Also, the credit score system is racist.

  • @BuRningUniique · 3 years ago

    That's horrible, how could a machine say these things :((((((

  • @comradecat3678 · 2 years ago

    Well, Ghana is working on some AI, same with the Congo... oh wait, lol. Whites (Indians are white) and Asians have done all of the work in this industry, all the innovation. Make your own; stop squawking.

  • @mfbe73 · 4 years ago · +2

    Excellent.

  • @DerpyLaron · 4 years ago · +1

    Great video. I'd love more in this series about AI

  • @Pretaxorpheus00 · 4 years ago · +1

    Omg, thank you, Vsauce Jordan, you're amazing, keep up the work plssss!!!

  • @monkey3964 · 2 years ago

    My life matters

  • @israelkouakou179 · 2 years ago

    👏👏👏

  • @Nachtom · 4 years ago · +2

    The "problem" is that AI sees real-world data (and the real world doesn't match your biased view of the world). If someone "claims" minority communities are high risk do you even consider it being an objective statement or you are instantly trying to find racism?
    Even in that definition of Racism - "is the primary determinant of human traits". I'm sure a wealthy black woman will have a much better score than a poor white man. The race (or gender) might be determinant (based on data and of course some generalization), but it's definitely not the primary one. And it's definitely not because of direct racism. Might be indirect - like historic racism somehow causing BY AVERAGE people in minorities to handle money worse, but it's just how it is - a fact. Learn them financial literacy instead of bashing AI for adjusting according to real data.
    Also, correlation is not causation.

  • @mfbe73 · 4 years ago

    This is because every time I hear someone try to describe these types of systems, common language or even traditional logic fails to fully describe them. It seems similar to understanding how AI systems function. Which got me thinking: maybe a new logic is needed. Have you studied any type theory?

    • @DouwedeJong · 4 years ago

      That is why Solomonoff created a mathematical formula for Occam's razor. Unfortunately, though it can be written down in mathematics, it is not computable.
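
      For the curious: the formula being referred to appears to be Solomonoff's universal prior, which formalizes Occam's razor by weighting every program that reproduces an observed string by its length:

```latex
% Solomonoff's universal prior: for a prefix universal Turing machine U,
% sum over all programs p whose output begins with the string x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Shorter programs dominate the sum -- a formal Occam's razor. M is not
% computable (only lower semi-computable): evaluating it exactly would
% require knowing which programs halt.
```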

  • @Uthyrexe · 3 years ago

    ✨SUBSCRIBED✨

  • @amoo3505 · 3 years ago

    Just followed you on IG..

    • @Zaptosis · 3 years ago

      wow you're so cool

  • @LOKI0186 · 3 years ago

    What's the definition of race? Race does not exist; it is a human idea, brought into thought, that you are different based on your features. Big point here: we are all the same.

  • @jurajkundrik538 · 4 years ago · +2

    You cannot blame racism if you don't prepare your own dataset for training.

  • @jcnwillemsen · 1 year ago

    Yes, it's all racism... pfff

  • @briankrupski6242 · 3 years ago · +6

    this is cringe