Are We Automating Racism?

  • Published: Jan 9, 2025

Comments • 474

  • @Vox
    @Vox  3 years ago +1297

    [UPDATE May 20, 2021] CNN reports: "Twitter has largely abandoned an image-cropping algorithm after determining the automated system was biased." www.cnn.com/2021/05/19/tech/twitter-image-cropping-algorithm-bias/index.html

    • @Neyobe
      @Neyobe 2 years ago

      That’s amazing!

  • @AtiqurRahman-uk6vj
    @AtiqurRahman-uk6vj 3 years ago +15210

    Machines aren't racist. The outcome feels racist due to bias in training data. The model needs to be retrained.

    • @Ana-im4dz
      @Ana-im4dz 2 years ago +7

      Lol 15K likes, no comments

    • @onee1594
      @onee1594 2 years ago +6

      Well, now I would like to see stats on the distribution between black and white software engineers and ML specialists.
      And no, I'm not saying there should be quotas. I just wonder whether it was tested at all.

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago +1

      @@onee1594 Feel free to look for that in your nation's government DB or draw a conclusion from a credible sample size. I am not obligated to provide that for you.

    • @onee1594
      @onee1594 2 years ago +4

      @@AtiqurRahman-uk6vj You are not obligated, and I didn't ask you to provide it.
      There's no need to be so uncivilized, unless you think the world revolves around your comment and around you personally.

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago

      @@onee1594 Since you replied under my comment instead of opening a separate comment, it is a logical assumption that you were directing the request at me, and I declined.
      You and your colonial mindset of xenophobia are a few centuries too late to call others "uncivilized" simply for declining to do your bidding. Good day

  • @Fahme001
    @Fahme001 3 years ago +10576

    Let's not forget how light works in a camera. I am a dark-skinned person, and I can confirm that lighter skin physically reflects a higher number of photons, which results in a higher probability of the camera capturing that picture better than that of a black counterpart. The same goes for computational photography and the basic algorithms that are trained on the photos we upload. It only makes sense that they would be biased toward white skin (see the contrast sketch at the end of this thread). Why does everything have to be taken as an offensive scenario? We are going too far with this political correctness bullshit. Again, I am a person of dark skin and even I think this is bullshit. Now, if the issue is identifying a person's face for security reasons or such, then yes, I am all for making it recognize all faces better. But please, please make this political correctness bullshit stop.

    • @jezuconz7299
      @jezuconz7299 2 years ago +16

      This is all indeed getting to a point where everything has to be taken to the correctness debate instead of factual and objective responses or solutions

    • @daniae7568
      @daniae7568 2 years ago +5

      this is how barcodes work

    • @faithm9284
      @faithm9284 2 years ago +5

      AMEN! There is no such thing as racism, there is only one race, the human race! Let's stop speaking the negatives! Words are very powerful. When you speak it, you give even fantasy power to 'become'!

    • @airdrummond241
      @airdrummond241 2 years ago +20

      Light is racist.

    • @theoaglaganian1448
      @theoaglaganian1448 2 years ago +1

      Amen+2
      This video is the definition of greed
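
A toy sketch in Python of the physics the top comment in this thread describes, using made-up luminance values (the numbers are assumptions for illustration, not measurements from the video): less reflected light means less face-to-background contrast for a detector to latch onto.

```python
def michelson_contrast(a: float, b: float) -> float:
    """Michelson contrast between two mean luminance values (0-255 scale)."""
    return (max(a, b) - min(a, b)) / (a + b)

background = 60.0   # assumed dim background luminance
light_skin = 180.0  # assumed mean luminance of a lighter-skinned face
dark_skin = 80.0    # assumed mean luminance of a darker-skinned face

print(michelson_contrast(light_skin, background))  # ~0.50: easy to segment
print(michelson_contrast(dark_skin, background))   # ~0.14: far less contrast
```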

  • @aribantala
    @aribantala 3 years ago +5400

    Yes, as a Computer Engineering bachelor and someone who's been working with cameras for almost 4 years now, it's good to address how the camera's apparent weakness at capturing darker objects can mess up AI detection.
    My own bachelor thesis was about implementing pedestrian detection, and it's really hard to make sure the camera is taking a favourable image... And since I am from Indonesia... which, you guessed it, has a smaller white-skinned population... it's really hard to find a good experiment location, especially when I use an already-developed algorithm as a backbone. There are a lot of false positives... ranging from missed counts because the person is darker, to double counts when a fairer-skinned person passes while there are human-shaped shadows.
    We need to improve AI with better diversity in its training datasets. It's better to address that weakness and create a better technology than to point fingers... Learn from our mistakes and improve on them... If a hideous person like Edison could do that with his electric light bulb, why aren't we doing the same while developing even more advanced tech than his?
    The title is very nuanced... But hey, it got me to click it... And hopefully others can see past the headline.

  • @NANI-ys5pc
    @NANI-ys5pc 3 years ago +1344

    This video also seems a bit biased; I don't believe "racism" is the most appropriate reality to associate this phenomenon with.

  • @kumareth
    @kumareth 3 years ago +4457

    As a machine learning enthusiast, I can confirm there isn't much diverse data available out there. It's just sad, but it's alarmingly true.

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +1

      @Sigmaxon in the case of training datasets, because they can be so expensive to produce, the demand is actually constrained by supply rather than the other way around. Changing the demographics in the dataset is a slow process.

    • @randybobandy9828
      @randybobandy9828 2 years ago

      Why is it sad?

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +1

      @@randybobandy9828 because it leads to less optimal outcomes for everyone, duh

    • @randybobandy9828
      @randybobandy9828 2 years ago

      @@kaitlyn__L It's an issue not worth addressing.

  • @rodneykelly8768
    @rodneykelly8768 3 years ago +2306

    At work, I have an IR camera that automatically measures your temperature as you walk into my facility. How it is supposed to do this is by locking on to the face, then measuring the person’s temperature. Needless to say, I want to take a sledgehammer to it. When it actually works, it’s with a dark face. The type of face it has the most problem with is a light face. If you also have a bald head, it will never see you.

  • @fabrizio483
    @fabrizio483 3 years ago +502

    It's all about contrast and how cameras perceive darker subjects. The same thing happens when you try to photograph a black cat, it's very difficult.

  • @gleep23
    @gleep23 1 year ago +27

    This is the first video I've seen with "Audio Description" to assist the vision-impaired. I'd like to commend Vox for putting in the effort to help differently abled people, especially considering this video's subject matter. Well done for being proactive with assistive technology.

  • @the_void_screams_back7514
    @the_void_screams_back7514 3 years ago +1176

    the level of production on this show is just
    * chef's kiss *

  • @HopeRock425
    @HopeRock425 3 years ago +928

    While I do think that machines are biased, I think that saying they're racist is an overstatement.

    • @faychel8383
      @faychel8383 2 years ago +5

      IQs under 83

    • @randybobandy9828
      @randybobandy9828 2 years ago

      You're a simpleton

    • @Unaveragetrainguy
      @Unaveragetrainguy 2 years ago +12

      The piece was careful not to say the technology itself was 'racist'; rather, that the technology seemingly had 'racist outcomes'.

    • @jordanreeseyre
      @jordanreeseyre 1 year ago +6

      Depends if you define "racism" as requiring malicious intent.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 8 months ago +1

      They were using "racist" as a description of the AI outcomes, which they were. They even said in this video that it doesn't mean there has to be malicious intent. Though there probably is some, because the AI just learns from the prejudiced stereotypes and beliefs in what people post online.
      I saw a video where someone asked an AI for pics of hardworking people and got Caucasian men in suits in an office. So there were prejudiced stereotypes excluding other kinds of activities or jobs from being hardworking too.

  • @veryblocky
    @veryblocky 3 years ago +425

    I feel like a lot of these things aren't the result of anything racist, but of other external factors that end up contributing to that. The example of the hospital algorithm looking at expensive patients, for instance, isn't inherently racist. The issue there should be with the factors that cause minority groups to cost less (i.e. worse access to insurance), not with the software.

    • @Zaptosis
      @Zaptosis 8 months ago

      Could also be due to non-racist factors, such as cultural preferences like opting for at-home care, or even subsidies for low-income areas/households which reduce the recorded expenditure of a patient.
      But of course, as a media organization they need to jump to the most rage- and offence-inducing headline that gets them the most clicks. This is why I never trust Vox and other companies like this.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 8 months ago +3

      @@Zaptosis This Vox video did say that there could be factors that didn't have to do with active racists. So what are you talking about? You were the one who jumped to rage while accusing this Vox video of it.
      Also, you jumped to the conclusion that racism doesn't exist, when it always does and there's evidence.
      You also shouldn't just assume most African Americans want home care, when the African woman in this video said otherwise. Same with some other African YouTubers I've watched. You should see different perspectives.
      It just seems like you don't want to care that there are people negatively impacted by this or by racism.
      It's a double standard, because if there were prejudice against you or your group, you would want the injustice to be amended.
      So far I think Vox is pretty educational.
      There are also conservatives who falsely cry about prejudice hoaxes towards them or Caucasians.
      There are people who received results from AI art that were racist or sexist stereotypes.
      I saw a video where someone asked for pics of hardworking people and got Caucasian men in suits in an office. So there were prejudiced stereotypes saying other kinds of activities or jobs were less hardworking too.

  • @Lightningflamingice
    @Lightningflamingice 3 years ago +1840

    Just curious, but was it randomized which of the faces (darker/lighter) was on top, and which was on the bottom? It wasn't immediately apparent with the tests that were run after, but in both the Obama/McConnell and the two-cohosts tests, the darker face was on top, which may be why there was an implicit bias towards the lighter face.
    If not that, the "racist" face detection can largely be boiled down to the algorithm being fed more training data of white people than of black people, a consequence of darker skin tones comprising a minority of the population. As such, the ML cropper will choose the face it has higher confidence is a face (see the sketch at the end of this thread). That could be the source of a racial skew.

    • @kjh23gk
      @kjh23gk 11 months ago +1

      The Obama/McConnell test was done with two versions of the image, one with Obama at the top and one with Obama at the bottom. The face detection chose McConnell both times.
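
A hedged sketch (not Twitter's actual code) of the mechanism this thread describes: a cropper that keeps whichever detected face scores highest. `detect_faces` is a hypothetical stand-in returning dummy confidences; if a real detector's confidences skew toward lighter faces, the argmax below turns that score gap into a systematic cropping gap.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

def detect_faces(image) -> List[Tuple[Box, float]]:
    """Hypothetical detector: returns (bounding_box, confidence) pairs."""
    # Dummy values standing in for two faces in one tall image.
    return [((10, 10, 80, 80), 0.72), ((10, 200, 80, 80), 0.91)]

def choose_crop(image) -> Box:
    """Center the crop on the single most confident detection."""
    detections = detect_faces(image)
    best_box, _ = max(detections, key=lambda d: d[1])
    return best_box

print(choose_crop("two_faces.jpg"))  # -> (10, 200, 80, 80)
```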

  • @bersl2
    @bersl2 3 years ago +874

    There's also the possible issue of "white balance" of the cameras themselves. My understanding is that it's difficult to set this parameter in such a way that it gives acceptable/optimal contrast to both light and dark skin at the same time.

    • @unh0lys0da16
      @unh0lys0da16 1 year ago +1

      That's why you use multiple models: one to detect whether there is a black or white person in view, and then one detection model for each.
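
A minimal sketch of the two-stage idea in the reply above, with hypothetical function names; this just makes the proposal concrete, it is not an endorsed or production approach.

```python
def route_detection(image, tone_classifier, detectors_by_tone):
    """Classify the dominant skin tone, then use the matching detector."""
    tone = tone_classifier(image)          # e.g. "darker" / "lighter"
    return detectors_by_tone[tone](image)  # hand off to the specialist model

# Toy usage with stand-in callables:
faces = route_detection(
    image="example.jpg",  # placeholder; a real pipeline passes pixel data
    tone_classifier=lambda img: "darker",
    detectors_by_tone={
        "darker": lambda img: ["boxes from the darker-tone model"],
        "lighter": lambda img: ["boxes from the lighter-tone model"],
    },
)
print(faces)
```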

  • @syazwan2762
    @syazwan2762 3 years ago +1597

    16:15 that got me trippin' for a second, until I realized they probably just mirrored the video so that the writing comes out right, and she's not actually writing backwards.

  • @theguardian3431
    @theguardian3431 3 years ago +560

    I just started the episode but I would think that this has something to do with the basic principles of photography. When you take a photo, the subject is usually in the light and the background is darker for obvious reasons. So, the algorithm simply sees the darker faces as part of the background.

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +6

      Indeed - but why didn't the researchers train it not to do that? Because insufficient testing was done, which comes back to a blind spot re: race in humans. These algorithms are amazingly good at fitting the curves we ask them to fit, so these problems aren't inherent to the technology. The issue is with the scope of the problem researchers are asking it to solve.

    • @randybobandy9828
      @randybobandy9828 2 years ago +1

      @Kaitlyn L ya because light just happens to reflect off of lighter skin... oh no.. how dare light behave this way!!

    • @rye419
      @rye419 1 year ago +3

      Do you think this issue would occur in a society of majority dark faces? Think about that

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 8 months ago

      3:37 What about how they used pics of people with all-white backgrounds? Why isn't the AI mistaking the light faces for the light background then? And why is the AI able to pick up the dark hair of Caucasians as not part of the background?

  • @andysalcedo1067
    @andysalcedo1067 3 years ago +311

    I'm sorry Joss, but how did the only two people in this video who actively work in the tech industry, who are building these automated systems, get only a combined 5 minutes on screen? You don't talk to the computer scientists about solutions or even the future of this tech, yet you talk to Dr. Benjamin and Dr. Noble (who don't code) about "implications" and examples in which tech was biased. Very frustrating, as a rising minority data scientist myself, to see this video focused on opinion instead of actually finding out how to fix these algorithms (as the description says).
    You missed an excellent opportunity to highlight minority data scientists and how they feel building these algorithms.

    • @jezuconz7299
      @jezuconz7299 2 years ago +7

      These people don't seek facts or objectivity, only points to blame others for not being politically correct...

    • @kaitlyn__L
      @kaitlyn__L 2 years ago

      I would’ve certainly liked to have seen input from Jordan B Harrod, as she’s done a number of great videos on this subject, but with Vox’s traditional print journalism background I can understand gravitating toward book authors.

    • @anonymousperson1771
      @anonymousperson1771 2 years ago

      That's because the intent of the video is to impart the outcome of racism regardless of how it actually works. Emotional perception is what they're after.

  • @NDAGR-
    @NDAGR- 3 years ago +2905

    The hand soap dispenser is a real thing. Straight up

    • @TheSands83
      @TheSands83 2 years ago

      The black guy didn’t have his hand underneath it correctly clearly…. N I’ve had soap dispensers not work…. U r so oppressed

  • @robina.9402
    @robina.9402 3 years ago +806

    Can you please put the names of the people you are interviewing in the description and links to their work/social media? Especially if they have a book we could support!

  • @SpaceWithSam
    @SpaceWithSam 3 years ago +905

    Fact: Almost everyone goes straight to the comments section!

  • @divinebitter1638
    @divinebitter1638 3 years ago +928

    I would like to see the pic of the two men that Joss took at the beginning and uploaded to Twitter repeated, but with the black man on the bottom. The background at the top of the pic they took was quite dark, and the lack of contrast might have contributed, along with Twitter's weighting bias, to the white face being featured. I don't think Twitter would switch to picking the black face, but it would have helped control for an extra variable.

  • @elokarl04
    @elokarl04 3 years ago +663

    It's been forever since I last saw Joss in a video. I'd almost forgotten how good and well-constructed her videos are.

  • @Vox
    @Vox  3 years ago +164

    On this season of Glad You Asked, we explore the impact of systemic racism on our communities and in our daily lives. Watch the full season here: bit.ly/3fCd6lt
    Want updates on our new projects and series? Sign up for the Vox video newsletter: www.vox.com/video-newsletter
    For more reading about bias in AI, which we covered in this episode, visit our post on Vox.com: bit.ly/3mcZD4J

  • @WMDistraction
    @WMDistraction 3 years ago +662

    I didn’t realize I missed having Joss videos this much. She’s so good!

  • @Oxideacid
    @Oxideacid 3 years ago +269

    11:50
    We're just gonna gloss over how she writes backwards so perfectly?

  • @bellatam_
    @bellatam_ 3 years ago +602

    Google photos thinks that all my Asian family and friends are the same person

  • @jasonpce
    @jasonpce 3 years ago +162

    Ya know what? Good for black people. We don't need facial recognition in today's society, and I genuinely perceive it as a slippery slope when it comes to surveillance. If computers are having trouble recognizing black people, all that means to me is that corporations and the government will have a harder time collecting data on them. I swear to God, we should be having conversations about whether or not facial recognition software should exist, not whether or not it's racist, because imo the former conversation is of much more importance.

    • @CleverGirlAAH
      @CleverGirlAAH 2 years ago

      Yeah, we can certainly agree on this without even bringing the possible racial incongruencies into the conversation. The militarized police state is evil. Period.

    • @CarlTelama
      @CarlTelama 2 years ago

      They literally discuss whether it should exist at all if you watch the video the whole way through

  • @booksandocha
    @booksandocha 3 years ago +846

    Funnily enough, this reminds me of an episode in Better Off Ted (S1:E4), where the central parody was on automated recognition systems being "racist" and how the corporation tried to deal with it. Well, that was in 2009...

    • @MsTifalicious
      @MsTifalicious 2 years ago

      That sounds like a funny show. Sadly it won't fly today, but I'll be looking for it online now.

  • @civ20
    @civ20 3 years ago +437

    The most important thing when it comes to training A.I. is the raw data you feed it. Give the A.I. 51% images of white people and 49% images of black people, and the A.I. will have a ~1% bias towards white people.
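
The linear "51% data -> 1% bias" arithmetic above is an oversimplification, but the direction of the effect is easy to demonstrate. A toy simulation under stated assumptions (synthetic 2-D clusters standing in for image features, scikit-learn assumed available; none of this comes from the video): a detector trained on far fewer examples of one group recalls that group worse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(center, n):
    """Draw n 2-D points around a cluster center (stand-ins for face features)."""
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

group_a = sample([3.0, 0.0], 900)     # over-represented group of faces
group_b = sample([0.0, 3.0], 100)     # under-represented group of faces
negatives = sample([0.0, 0.0], 1000)  # non-face examples

X = np.vstack([group_a, group_b, negatives])
y = np.array([1] * 1000 + [0] * 1000)  # 1 = face, 0 = not a face

clf = LogisticRegression().fit(X, y)

# Recall on fresh samples: group B, with 9x less training data, fares worse.
print("group A recall:", clf.predict(sample([3.0, 0.0], 500)).mean())
print("group B recall:", clf.predict(sample([0.0, 3.0], 500)).mean())
```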

  • @vijayabhaskar-j
    @vijayabhaskar-j 3 years ago +281

    This is exactly why AI-powered software is not for 100% automation; it should always be used as a support tool for the human who is responsible for the job. For example, in your health-risk prediction task, the threshold for flagging a high-risk patient should be lowered from 90%+ to 70%+, and a human should verify whether they are indeed a high-risk patient or not. This will save both time (as humans are looking only at mid-to-high-risk patients) and resources, and reduce the bias.
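
A sketch of the human-in-the-loop triage described above, using the comment's own 90%/70% thresholds; the function name and action labels are illustrative, not from any real system.

```python
def triage(risk_score: float, auto: float = 0.90, review: float = 0.70) -> str:
    """Map a model's risk score in [0, 1] to an action."""
    if risk_score >= auto:
        return "auto-flag as high risk"  # model alone is confident enough
    if risk_score >= review:
        return "send to human reviewer"  # lowered threshold, human verifies
    return "no action"

for score in (0.95, 0.80, 0.50):
    print(score, "->", triage(score))
```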

  • @aguBert90
    @aguBert90 3 years ago +176

    "The human decisions in the design of something (technology or knowledge)" is what academics actually mean when they say "facts are a social construction". It doesn't mean it is fake (which is the most common and wrong reading); it means that there are human externalities and unintended outcomes in the process of making a technology or knowledge. Tech and knowledge are presented to the public as a finished, factual black box; not many people know how they were designed, investigated, etc.

  • @user-vn7ce5ig1z
    @user-vn7ce5ig1z 3 years ago +55

    2:58 - Lee was on the right track, it's about machine-vision and facial-detection. One test is to try light and dark faces on light and dark backgrounds. It's a matter of contrast and edge- and feature-detection. Machines are limited in what they can do for now. Some things might never be improved, like the soap-dispenser; if they increase the sensitivity, then it will be leaking soap.
    8:13 - And what did the search results of "white girls" return? What about "chinese girls"? 🤨 A partial test is useless. ¬_¬
    9:00 - This is just regular confirmation bias; there aren't many articles about Muslims who… sculpted a statue or made a film.
    12:34 - Yikes! Hard to deny raw numbers. 🤦
    12:41 - A.I.s are black-boxes, you _can't_ know why they make the "decisions" they make.
    13:33 - Most of the people who worked on developing technologies were white (and mostly American). They may or may not have had an inherent bias, but at the very least, they used their own data to test stuff at the beginning while they were still just tinkering around on their own, before they were moved up to labs with teams and bigger datasets. And cats built the Internet.🤷
    14:44 - I can't believe you guys built this thing just for this video. What did you do afterwards? 🤔

    • @kaitlyn__L
      @kaitlyn__L 2 years ago +2

      Re 9:00, and what is the underlying societal reason that the majority of English language newspaper reports about Muslims have that negative tilt…? The implications extracted from training data merely reflect society.

  • @I_am_Theresa
    @I_am_Theresa 3 years ago +1656

    I swear I know some of those AI people! Imagine seeing your face pop out of that face randomiser!

  • @mequellekeeling5029
    @mequellekeeling5029 3 years ago +358

    At the beginning of the video I thought this was dumb, but by midway through I'm like, this is what we need.

  • @arturchagas7253
    @arturchagas7253 2 years ago +2

    love the fact that this video has audio description! this is so important

  • @Dan007UT
    @Dan007UT 3 years ago +97

    I wish they had rerun the picture test from the beginning but put a bright white background behind both guys.

  • @michaelfadzai
    @michaelfadzai 3 years ago +199

    So Twitter said they didn't find evidence of racial bias when testing the tool. My opinion is that they were not looking for it in the first place.

  • @atticusbrown8210
    @atticusbrown8210 2 years ago +3

    In the hand video the black person was tilting his hand so that it would go around the area that the sensor could detect easily. The white hand was directly under it. That would most likely cause a difference.

  • @Bthepig
    @Bthepig 3 years ago +143

    Wow, another amazing video. I love the high-minds meet middle-school science fair feel of these videos. They're so accessible but also tackling really massive questions. Each one is so well put together and so thought provoking.

  • @danzmachinz2269
    @danzmachinz2269 3 years ago +246

    Joss!!! Why did you print all those photos!!!!!?

  • @virlives1
    @virlives1 2 years ago +7

    A clear example starts with a RUclips channel that publishes content internationally and receives comments in several languages. The thing is that Yankees, or Americans, don't tolerate people speaking another language, so they devalue any comment in another language. We have been experiencing it.

  • @TheAstronomyDude
    @TheAstronomyDude 3 years ago +64

    Not enough black people in China. Most of the datasets every algorithm uses were built from CCTV data from Chinese streets and Chinese ID cards.

  • @terrab1ter4
    @terrab1ter4 3 years ago +358

    This reminds me of that book, "Weapons of Math Destruction"
    Great read for anyone interested, it's about these large algorithms which take on a life of their own

  • @d_as_fel
    @d_as_fel 3 years ago +138

    15:50 how can she write in mirror image so effortlessly??

  • @chafacorpTV
    @chafacorpTV 3 years ago +363

    Me at first: "who even asks these questions, seriously?"
    Me after finishing the video: "Aight, fair point."

  • @venusathena3560
    @venusathena3560 3 years ago +204

    Thank you so much for making this free to watch

  • @wj35651
    @wj35651 3 years ago +69

    18:36 why are they pretending they're talking on a video chat when they had a crystal-clear picture from another camera? Reality and perception, subtle differences.

    • @Vox
      @Vox  3 years ago +78

      We had a camera crew on each end of our zoom call, since we couldn't travel due to Covid. - Joss

  • @jordanjj6996
    @jordanjj6996 3 years ago +172

    What a thought-provoking episode! That young woman Inioluwa not only knew the underlying problem, she even formed a solution, when she said that it should be devs' responsibility to proactively be conscious of those who could be targeted or singled out in a social situation, and to do their best to prevent it in advance. She's intelligent and understands just what needs to be done and stated in a conflict: a solution. Hats off to her.

  • @Bethan.C
    @Bethan.C 2 years ago +1

    Haven’t seen any new videos come from Joss, miss her so much~

  • @luizmpx833
    @luizmpx833 3 years ago +216

    Very good information; it reminded me of your video from 5 years ago...
    "Color film was built for white people. Here's what it did to dark skin"

  • @marqkistoomqk5985
    @marqkistoomqk5985 7 months ago

    I just took a C1 English exam, and the third listening was literally a clip from this video. It was nice to see a YouTube video as part of such an exam.

    • @ivanromerogomez4049
      @ivanromerogomez4049 6 months ago +1

      I'm in the same situation as you. The problem was the questions. I really don't know how I did on the exam. I hope we get lucky…

  • @TubOfSun
    @TubOfSun 3 years ago +223

    Waited for Joss for what feels like years

  • @greyowul
    @greyowul 3 years ago +152

    People seem to be noticing how nicely the professor can write backwards...
    Fun fact: that's a camera trick!
    ruclips.net/video/eVOPDQ5KYso/видео.html
    She is actually writing normally (so the original video shows the text backwards), but in editing the video was flipped, making the text appear normal. Notice that she is writing with her left hand, which should only be about a 10% chance.
    Great video btw! I thought the visualization of the machine learning process was extremely clever.

  • @emiliojurado5069
    @emiliojurado5069 3 years ago +260

    It will be funny when machines start to prefer machines and AI to humans.

  • @danielvonbose557
    @danielvonbose557 2 years ago +3

    There should be an analog to the precautionary principle used in environmental politics, a similar principle applied to social issues. That is, if there is a significant or reasonable risk in doing something, then that thing should not be done.

  • @meredithwhite5790
    @meredithwhite5790 3 years ago +62

    Algorithms of Oppression is a really good book if you want to learn about racial and gender bias in big tech algorithms, like Google searches. It shows that machines and algorithms are only as objective as we are. It seems like machine learning and algorithms are more like groupthink and are not objective.

  • @caioarcanjo2806
    @caioarcanjo2806 2 years ago +4

    Why not just post the same picture with the two positions swapped, so we can already get a good estimate :)

  • @Dallas_AWG
    @Dallas_AWG 3 years ago +55

    Joss is so good. She has the perfect voice

  • @rizkypratama807
    @rizkypratama807 3 years ago +40

    Stop with the Joss Fong comments, I can't stop liking them

  • @daylight1nsomniac536
    @daylight1nsomniac536 3 years ago +39

    as always the editing is absolutely superior.
    keeps me hooked.

  • @alvakellstrom9109
    @alvakellstrom9109 1 year ago

    Great video! Really informative and very important. Thanks for a great watch

  • @lorentianelite63
    @lorentianelite63 3 years ago +48

    I'm a simple man. I see Joss, I click.

  • @ChristianTheodorus909
    @ChristianTheodorus909 3 years ago +102

    long time no see Joss!

    • @torressr3
      @torressr3 3 years ago +5

      Right? I missed her too. She's a great journalist and freaking cute as all hell!

  • @RealCosmosry
    @RealCosmosry 3 years ago +47

    She is one of my favorite Vox hosts. Her videos are structured very well and have something interesting that hooks me for the entire video. Nice initiative by Vox with 'Glad You Asked S2'.

  • @MR.CLEAN777
    @MR.CLEAN777 3 years ago +154

    Next thing u know, toasters are gonna be racist

  • @pranavkakkar7637
    @pranavkakkar7637 3 years ago +51

    I missed seeing Joss in videos. Glad she's back.

  • @AnthonyVasquezEndZz
    @AnthonyVasquezEndZz 2 years ago +2

    Could it be contrast? What if you photoshopped the skin tones to green, yellow, or red, and inverted the hair color? Then use people with dark skin and light-colored hair, and light skin and light hair, to see if the contrast difference is what's causing this.
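
A sketch of the controlled experiment proposed above: recolor an image while leaving its brightness pattern intact, then re-run the cropper. If the crop choice tracks luminance/contrast rather than color, it should not change. The file name and `run_cropper` are hypothetical placeholders; Pillow and NumPy are assumed available.

```python
import numpy as np
from PIL import Image

def hue_shift(image: Image.Image, degrees: float) -> Image.Image:
    """Rotate hue while leaving saturation and value (brightness) untouched."""
    hsv = np.array(image.convert("HSV"))
    shift = int(degrees / 360.0 * 255)
    hsv[..., 0] = (hsv[..., 0].astype(int) + shift) % 256  # hue channel only
    return Image.fromarray(hsv, mode="HSV").convert("RGB")

original = Image.open("two_faces.jpg")  # hypothetical test image
recolored = hue_shift(original, 120)    # e.g. shift skin tones toward green
# run_cropper(original); run_cropper(recolored)  # compare the crop choices
```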

  • @vinceb8041
    @vinceb8041 3 years ago +89

    Thought-provoking video! However, using AI-generated faces is probably the worst thing to do in this case, since whichever model generated the faces would presumably suffer from the same systemic bias. There is a reason we bother collecting actual real-world data instead of using simulated data.

  • @noahjohnson5845
    @noahjohnson5845 3 years ago +22

    These videos are so solid! I'm about to graduate college with a degree in sociology and so far these videos are hitting a ton of the main points that I've learned about over my four years of education.

  • @testos2701
    @testos2701 1 year ago +1

    This is happening everywhere right now, when you go shopping, to restaurants, to buy a house, to buy a boat, to get a job. It is designed that way, from start to finish, and there are always excuses of why this is happening and promises that it will change but I have yet to see any changes! As a matter of fact the more you dig the more you will find! 😅🤣😂

  • @RushilKasetty
    @RushilKasetty 3 years ago +48

    So great to see Joss again! One of my favorite hosts

  • @bananaluvs111
    @bananaluvs111 3 years ago +42

    I am amazed by the look of the studio. I would love to work there; the atmosphere is just different, unique, and everyone has a place there 😍

  • @MissSarcasticBunny
    @MissSarcasticBunny 3 years ago +57

    This is a really interesting look into machine learning - great job Glad You Asked team! It stands to reason that there would be bias no matter what because even if the machine doesn't have any inherent bias or self-interest in focusing on one face over another, people are still feeding information into the machine and the machine is basing its results on that information. And humans are still flawed beings who bring with them their own personalities, thought patterns, biases, childhood backgrounds, class backgrounds, et cetera. The only solution is to focus on what information we're feeding machines.

  • @santif90
    @santif90 3 years ago +59

    I'll take this video as it has a good intention of creating an important conversation. But your data is kind of funky

  • @HerrZenki
    @HerrZenki 3 years ago +214

    Software's only as good as the guy who programmed it, I say.

    • @Suavocado602
      @Suavocado602 3 years ago +96

      Confirmed. I’m a programmer and both me and my software suck.

  • @augustlions
    @augustlions 3 years ago +84

    I see Joss Fong I click

  • @gabrielposso179
    @gabrielposso179 3 years ago +39

    I subscribed a while ago, and now most videos seem to be related to racism. I respect the choice, but it’s getting monothematic.
    Anyway, hope the US gets better. Good luck over there

  • @LouisEdouardJacques
    @LouisEdouardJacques 3 years ago +28

    Good video! It is nice that you are taking the time to point out where it manifests concretely in the present world. Many other discussions on the subject revolve around automatic guilt. This video seems to start with a different approach and hopefully will appeal to a wider audience, not simply preaching to the choir. The counter-bias briefly mentioned at the end is what worries me. It should be part of the solution but we have to be careful. First, is it being done at all or just talked about when the public is listening? And most importantly to me, are we careful with which framework is used to design this counter bias? Some are worried that letting people think for themselves is amplifying racism. Others think that we are made blind to other and future kinds of diversities when we don't let people bring their own analysis and solutions, even if they can be contradictory.

  • @MikosMiko
    @MikosMiko 2 years ago +1

    I am black and I build models. The theory is: bad data in, bad data out. Whatever data and rules that these algorithms were built on is what should be in question. Machines are not racist, the people (in tech companies, vendors, agencies) who build them are.

  • @cheesecakelasagna
    @cheesecakelasagna 3 years ago +30

    Love the production especially on the set!

  • @killianbecker1164
    @killianbecker1164 3 years ago +19

    This feels like a PBS Kids show, with the set and all!

  • @mnengwa
    @mnengwa 3 years ago +69

    Aren't we the creators of the machines, passing our own image (strengths & shortcomings) to what we create? If they were to become sentient and seek to learn from the human race, won't the machines pick up bias and hatred??

  • @ronxaviersantos3184
    @ronxaviersantos3184 3 years ago +51

    Joss talking about Twitter at 10:09, then it went straight to an ad, and you guessed it: Twitter

  • @Sayora77
    @Sayora77 3 years ago +111

    "A program is only as smart as the one who coded it."

  • @pavanyaragudi
    @pavanyaragudi 3 years ago +42

    Joss Fong!❤️🔥

  • @kuldeepaurya9178
    @kuldeepaurya9178 3 years ago +8

    Wait.. how did the soap dispenser differentiate between the two hands???

  • @andrewnicorn
    @andrewnicorn 3 years ago +100

    Anything by Joss Fong is legit.

  • @vivekburuj414
    @vivekburuj414 2 years ago +4

    Or you could be less insecure about such negligible differences and be happy. Just make yourself stronger from the inside and let the machine do its work.

    • @CleverGirlAAH
      @CleverGirlAAH 2 years ago

      My man.

    • @johnmaris1582
      @johnmaris1582 2 years ago +3

      Unintended consequences or outliers are themselves interesting things to discover. Not to mention this experiment will have stronger implications down the line, when algorithms play a much bigger role in society's decision-making for jobs, criminal background checks, recruitment, remuneration, etc. You are right that this example is not a big problem. What it can or will become is worth understanding.

  • @hotroxy240
    @hotroxy240 3 years ago +42

    Is this why the automatic sinks in public restrooms barely work for me? Because they're designed to read lighter skin 🧐🥲🧐🥲

  • @ShionChosa
    @ShionChosa 3 years ago +29

    There was a Better Off Ted episode about this. The corporate head office decided to discontinue use of the energy-saving technology to save money.

  • @mrush8057
    @mrush8057 3 years ago +35

    The camera thing is not racist; it's just that black colors blend into the background, while whites are stronger and brighter, so white is hard for a computer not to see.

  • @dr_jj
    @dr_jj 3 years ago +64

    Weapons of Math Destruction by Cathy O'Neil really touches hard on this subject, where the biases of the designers or the customers of the algorithm have big negative impacts on society. There seriously needs to be some kind of ethical standard for designing algorithms, but it's so damn hard... :/

  • @kingjulian420
    @kingjulian420 3 years ago +10

    4:30. Why are you filming and driving!! No don’t read a quote!! JOSS NOOO. *boomp*
    *beep beep beep*

  • @mariofrancisco6717
    @mariofrancisco6717 2 years ago +1

    The machines are not racist; they are poorly programmed or misconfigured by people who did not take proper care during the project.

    • @bluebutterfly4594
      @bluebutterfly4594 2 years ago +1

      And why do the creators still not bother to take care? This is not the first time these issues have been raised, and it's not getting better.
      So why do you think they choose to disregard part of the population?

  • @Intriguing_stuff
    @Intriguing_stuff 3 years ago +41

    Wait, the cameraman is in both rooms but they are face timing each other???

  • @dEcmircEd
    @dEcmircEd 3 years ago +7

    Maybe it was more tech-focused, but it was way more interesting to me than the one about assessing one's own racism, which seemed a bit more frivolous in its sourcing and its overall process.
    Joss does really great stuff

  • @bellesasmr
    @bellesasmr 2 years ago

    why didn’t we try the two guys switching places in that picccc

  • @aronjohnreginaldo1913
    @aronjohnreginaldo1913 3 years ago +45

    When you see Joss's face on the thumbnail, for sure every topic is interesting 😅

  • @JanetCanning-p1s
    @JanetCanning-p1s 1 year ago

    We found in class that this video cuts off at about 5 minutes from the end.

  • @rosstully5960
    @rosstully5960 2 years ago +1

    In the year the video has been up, you have only allowed 295 comments? Do you just delete everything? Or does nobody care about Vox?