How I'm fighting bias in algorithms | Joy Buolamwini

  • Published: 21 Aug 2024
  • MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn't detect her face -- because the people who coded the algorithm hadn't taught it to identify a broad range of skin tones and facial structures. Now she's on a mission to fight bias in machine learning, a phenomenon she calls the "coded gaze." It's an eye-opening talk about the need for accountability in coding ... as algorithms take over more and more aspects of our lives.
    TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
    Find closed captions and translated subtitles in many languages at www.ted.com/tra...
    Follow TED news on Twitter: / tednews
    Like TED on Facebook: / ted
    Subscribe to our channel: / tedtalksdirector

Comments • 654

  • @MarkArandjus
    @MarkArandjus 7 years ago +248

    Okay, let me break it down for you folks real simple-like: if a webcam finds it difficult to detect, for example, dark skin tones, then the webcam's functionality is biased against dark-skinned users, because it performs poorly on their appearance. She's not saying this is a result of racism on the part of programmers, or that webcams are racist; it's just an unfortunate by-product of the technology, and she's working to correct it. Facial recognition is a powerful tool with wide application, from privacy to security to entertainment; this isn't some SJW nonsense. Jeez.

    • @Samzillah
      @Samzillah 7 years ago +15

      Seriously. Imagine cops trying to find a criminal with this technology but failing because of this flaw. There are billions of non-white people, so it needs to work on them too.

    • @DenGuleBalje
      @DenGuleBalje 4 years ago +6

      @gbmpyzochwfdisurjklvanetxq You obviously have no idea how a camera works.

    • @DenGuleBalje
      @DenGuleBalje 4 years ago +17

      @gbmpyzochwfdisurjklvanetxq Are you unaware that a camera relies on light hitting the sensor? Darker skin reflects less light. The less light, the longer the exposure needs to be to give a good visual representation of what you're looking at. A webcam sets the auto exposure to get a set amount of light to the sensor. If the background is a lot lighter than the person's skin, then the face will look even darker, because the camera shortens the exposure to reduce the overall brightness. (A rough numeric sketch of this effect follows this thread.)
      Another factor is that face recognition relies on contrast to make out what an eyebrow, mouth or nose is. Dark brown on black is just harder for a computer to define than, for example, brown on "white".

    • @amychittenden8993
      @amychittenden8993 4 years ago +8

      This would not happen if the coders had dark skin. It depends on who the coders are. So, yes, it really is racism, albeit a more passive form, but with the same results.
      ruclips.net/video/gV0_raKR2UQ/видео.html

    • @MarkArandjus
      @MarkArandjus 4 years ago +5

      @@amychittenden8993 Okay, sure, if we look at it by outcome then it is racism, the same way systemic racism is racism even if that was not the intended design.
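The exposure effect described in the reply above can be put into rough numbers. The following is a toy sketch with invented reflectance values, not any real camera pipeline: a single global auto-exposure gain is chosen for the whole frame, so a darker face in front of a bright background loses much of its already small internal contrast.

```python
import numpy as np

def auto_expose(scene, target_mean=0.45):
    """Scale the whole frame so its mean brightness hits the target, then clip."""
    gain = target_mean / scene.mean()
    return np.clip(scene * gain, 0.0, 1.0)

def face_contrast(frame, face_mask):
    """Brightness spread between facial features and surrounding skin."""
    face = frame[face_mask]
    return face.max() - face.min()

# Toy scene: bright background (0.9), darker skin (0.25), facial features (0.15).
scene = np.full((100, 100), 0.9)
face_mask = np.zeros_like(scene, dtype=bool)
face_mask[30:70, 30:70] = True
scene[face_mask] = 0.25
scene[45:55, 40:60] = 0.15        # eyes/brows region inside the face

frame = auto_expose(scene)
print("face contrast before:", round(face_contrast(scene, face_mask), 3))
print("face contrast after auto-exposure:", round(face_contrast(frame, face_mask), 3))
```

The bright background pulls the frame mean up, the camera dials the exposure down, and the small light/dark difference inside the face shrinks further, which is exactly what a contrast-based detector then struggles with.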

  • @Nick-kb2jc
    @Nick-kb2jc 7 years ago +70

    The real reason some people are triggered in the comments: they hate seeing a smart, educated African American MIT student.

    • @IronLungProductionsOfficial
      @IronLungProductionsOfficial 3 years ago +12

      Haha nope, she's got nothing productive to bring to the table in the field of AI, constantly touting lol, first world problemos

    • @mtngirl4944
      @mtngirl4944 2 years ago

      😂😂😂

  • @Nick-kb2jc
    @Nick-kb2jc 7 years ago +166

    I can see that lots of the people in the comments have no clue how machine learning works...

    • @HarikrishnanR95
      @HarikrishnanR95 7 years ago +26

      Nick, just like the lady in the video, who has no clue how facial recognition works

    • @thebrochu1983
      @thebrochu1983 6 years ago

      Red

    • @fakeapplestore4710
      @fakeapplestore4710 6 years ago +20

      "MIT grad student Joy Buolamwini was working with facial analysis software "

    • @ckilr01
      @ckilr01 4 years ago +9

      Do you understand how bias works? Bias is an unconscious preference. It's related to the mirror principle, where we are unconsciously attracted to those like us and repelled by things we unconsciously dislike. She is saying she is fighting your unconscious choices; in other words, choices that are not based in racism or hate.

    • @SuperMsmystery
      @SuperMsmystery 3 years ago +18

      @@HarikrishnanR95 She has a PhD, but continue mansplaining.
      How about you publish your reasoning?

  • @whipshaw
    @whipshaw 7 years ago +118

    I liked her introduction, "a poet of code"; as a programmer, I'm feeling flattered.

  • @ShonTolliverMusic
    @ShonTolliverMusic 7 years ago +39

    Reminds me of that video explaining how Kodak film couldn't replicate brown skin tones properly until the mid-'80s.

  • @brendarua01
    @brendarua01 7 years ago +25

    Unless there is a wide variance in the delivery of email notices, at least 17 people disliked this before they saw more than the first 5 minutes. That says a lot about some very foolish people.
    This is a great presentation! She's a wonderful presenter who is dynamic and entertaining on what is a technically and socially complex topic. It's exciting to have an example of discrimination that, while unintended or even unconscious, is very real and has very concrete results. Thank you for sharing, TED.

    • @ArtInMotionStudios
      @ArtInMotionStudios 7 years ago +3

      It is the title more than anything, and the way she starts off the video. The issue is more complicated than just "it can't detect black faces", which is simply not true; it does, just not as reliably.
      I met someone who has been trying to solve this for years, because I live in a black-majority country, and it is easier said than done.

    • @Wegnerrobert2
      @Wegnerrobert2 7 years ago +3

      Brenda Rua tell me, don't you ever like a video the moment you click on it?

    • @brendarua01
      @brendarua01 7 years ago

      Golo, I don't click like or dislike until I've listened to at least half of a clip. Sure, I have plenty of topics and presenters that I'm attracted to. But whether I agree or not, I try to listen objectively and critically. I can't recall ever disliking something because of the subject matter. I will do so, and I'll post a comment, if I find "alt facts" or fallacious arguments.

    • @brendarua01
      @brendarua01 7 years ago +1

      Ok, Golo. I can see how using "bias" in the title would be a trigger. One would have to listen for several minutes to realize she wasn't talking about social issues but about skewed data used in the training. Even then that might not get through to some listeners.

    • @Wegnerrobert2
      @Wegnerrobert2 7 years ago +1

      My point is that most people on YouTube frequently like videos immediately because they simply expect that they will be good. Disliking a video right away is just as normal as liking it.
      But since that doesn't apply to you, I will give you some other arguments for why immediately liking a video is no problem.
      You mentioned it already in the other comment; I think that a title can be enough for a video to get a dislike, because it's a simple method of feedback.
      And a rating is only temporary anyway. I don't pretend that my brain doesn't immediately form an opinion when it sees the video in a feed. But you can always change the rating.

  • @ijousha
    @ijousha 7 years ago +31

    Personally, I wish my face were less detectable by surveillance cameras.

    • @AjahSharee
      @AjahSharee 4 years ago +25

      Yeah until you get misidentified and arrested because of an algorithm.

    • @SuperMsmystery
      @SuperMsmystery 3 years ago +1

      The problem that you don't see is: why surveillance in the first place?

    • @insearchof9903
      @insearchof9903 3 years ago

      @@AjahSharee but if you're innocent why worry? You will just be like bye when they see they have the wrong person.

    • @aelf_ears9119
      @aelf_ears9119 2 years ago +2

      @@insearchof9903 but if the system that determines right/wrong is flawed then it doesn't matter if you view yourself to be innocent or not...

  • @BERNARDO712
    @BERNARDO712 7 years ago +59

    Nice resume:
    Joy Buolamwini is a poet of code on a mission to show compassion through computation. She is a Rhodes Scholar, Fulbright Fellow, Google Anita Borg Scholar, Astronaut Scholar, A Stamps President's Scholar and Carter Center technical consultant recognized as a distinguished volunteer. She holds a master's degree in Learning and Technology from Oxford University and a bachelor's degree in Computer Science from the Georgia Institute of Technology.

    • @austinjohn8713
      @austinjohn8713 2 years ago +3

      And she does not understand how AI works. The problem she is calling bias isn't bias; it was not coded. An AI algorithm is given a data set to learn features, and this learning is then used to make predictions given new data. If the AI struggled with her face, it was because it wasn't trained with a data set that contained dark skin tones. To fix the problem, feed more black faces to it. It has nothing to do with bias. If the AI was trained only on black faces, it would not recognize white faces unless the face was covered with a black mask. (A toy demonstration of this training-set effect is sketched after this thread.)

    • @abdullahb4453
      @abdullahb4453 2 years ago +4

      @@austinjohn8713 That is what she said: the algorithm is biased because of that. :)

    • @austinjohn8713
      @austinjohn8713 2 years ago +3

      @@abdullahb4453 Algorithms are not biased. What a machine learning algorithm does is not explicitly programmed, so it makes no sense to accuse it of bias. If she is an expert in the field, she could have retrained it using black faces only and seen that it would behave with white faces the same way it behaved with black faces. She is looking for racism where it does not exist. I say this as a black person.

    • @randomguy_069
      @randomguy_069 2 years ago +8

      @@austinjohn8713 Correct. Algorithms learn from what we teach them. They are not biased; we are biased in feeding the training data. It feels as if humans are masking an unethical aspect of their society by calling it the fault of AI.
      And in recent years I have actually seen many leaders in this algorithmic-bias movement moving ahead and designing properly ethical AIs, instead of crying about bias and blaming everything on an evil AI god which they, ironically, trained themselves.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      @@abdullahb4453 It is not algorithmic bias. If any bias exists at all, it is in the data fed to the AI.
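The training-set effect argued about in this thread can be shown with a small synthetic sketch. Everything here is invented for illustration (one made-up "contrast" feature, made-up group sizes); it is not the speaker's system or any real face dataset, but it shows how a detector fit to data dominated by one group ends up with a threshold tuned to that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n_faces, n_background, face_contrast):
    """One crude feature per image: edge contrast around the face region."""
    contrast = np.concatenate([
        rng.normal(face_contrast, 0.08, n_faces),   # images containing a face
        rng.normal(0.15, 0.08, n_background),       # images with no face
    ])
    labels = np.concatenate([np.ones(n_faces), np.zeros(n_background)])
    return contrast.reshape(-1, 1), labels

# Training set dominated by high-contrast (lighter-skinned) face images.
X_a, y_a = sample(950, 950, face_contrast=0.60)
X_b, y_b = sample(50,  50,  face_contrast=0.30)
detector = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Balanced evaluation: the detection rate drops sharply for the group the
# detector rarely saw during training.
for name, c in [("high-contrast faces", 0.60), ("low-contrast faces", 0.30)]:
    X_test, _ = sample(1000, 1000, face_contrast=c)
    found = detector.predict(X_test[:1000])   # the first 1000 rows are the faces
    print(name, "detection rate:", found.mean())
```

Retraining the same model on data where both kinds of faces are well represented closes most of the gap, which is the point being argued on both sides here: the behaviour comes from the data, and the fix is better data.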

  • @chinweogedegbe5449
    @chinweogedegbe5449 1 year ago +20

    This is really great... even 7 years later, this information is very relevant. Thank you, Joy!

  • @robbieh6182
    @robbieh6182 7 years ago +6

    You can tell most of the people who disliked the video don't have a science background. It IS a bias if the algorithm only recognizes a certain type of face. The word bias has no negative connotation by itself; it simply means a preference or "works better with". She isn't saying the algorithms are "racist".

  • @laurencross6240
    @laurencross6240 5 years ago +20

    This is so interesting! Joy Buolamwini rocks.

  • @Ashberrysoda
    @Ashberrysoda 7 years ago +26

    🙄 I am always confused by people who comment on and dislike things they haven't seen. Clearly shows a bias of the viewers. Good job Joy.

  • @TheEndofThis
    @TheEndofThis 7 years ago +9

    how interesting that a video on algorithmic bias has equal parts likes to dislikes.

  • @DeoMachina
    @DeoMachina 7 years ago +20

    >nonpolitical video about software
    >mass dislikes
    Tell me again how there isn't a problem with racism in this channel's audience

    • @DeoMachina
      @DeoMachina 7 years ago +4

      What's debatable about it? Why doesn't the same thing happen when white guys talk about software?

  • @daniels7568
    @daniels7568 7 years ago +5

    1,092 ppl didn't watch the video before rating

  • @MedEighty
    @MedEighty 7 years ago +9

    The process by which the majority of viewers of this video decided to rate the video:
    1. Notice that the person presenting is black.
    2. Notice that the person presenting is a woman.
    3. Notice that the title of the video has "fighting" and "bias" in it.
    4. Switch on racist and sexist brain circuitry.
    5. Click thumbs down before the video begins.
    6. Move on.

    • @JamesJeude
      @JamesJeude 6 years ago

      As of May 2018 the like/dislike ratio is about 51/49, so "majority" might be an overstatement ... but the problem of inadequate training data is worth discussing, and it is the obligation of everyone in the AI field to discuss it. Look at the Google image search for something as basic as 'grandpa' and you'll see almost entirely white people in the top few dozen results. This is not a result of bias at Google Image Search, but of the preponderance of examples that have the word 'grandpa' on (EXIF) or near the photograph, and the link and click history as processed by an algorithm that is proprietary to Google or Bing or Yahoo or whatever. The softer, less technical side of the question is whether a company like Google has an obligation to undo the bias it picks up from the actual click, link, and web design behaviors of its billions of websites and users. So, to the point of her video: does an image-recognition engineer have an obligation to look beyond the mass of evidence and check for bias in the less common cases? It's analogous to a statistician designing a survey to 'oversample' certain segments with low representation (see the sketch after this thread). ("Oversampling" doesn't mean "over-representing", contrary to the misunderstanding some politicians had during the 2016 election; the numbers are normalized before the survey is published.)
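The "oversampling" idea mentioned in the reply above looks roughly like this in code. The data, labels, and group names below are placeholders; the point is only the resampling step that gives an under-represented group equal weight during training.

```python
import numpy as np

rng = np.random.default_rng(1)

def balance_by_group(X, y, groups):
    """Resample each group (with replacement) up to the size of the largest group."""
    largest = max(np.sum(groups == g) for g in np.unique(groups))
    picks = []
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        picks.append(rng.choice(idx, size=largest, replace=True))  # oversample minority
    take = np.concatenate(picks)
    return X[take], y[take], groups[take]

# 950 examples from group "A", only 50 from group "B".
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)
groups = np.array(["A"] * 950 + ["B"] * 50)

Xb, yb, gb = balance_by_group(X, y, groups)
print({g: int(np.sum(gb == g)) for g in np.unique(gb)})   # {'A': 950, 'B': 950}
```

As the comment notes, the equivalent move in survey statistics is to oversample and then reweight, so the minority segment's estimate is less noisy without being over-represented in the published totals.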

  • @gg-wk2ww
    @gg-wk2ww 2 years ago +2

    Keen curiosity brings things to light, good and bad, good job

  • @tbpp6553
    @tbpp6553 7 years ago +6

    More Dislikes than Likes ?? MY GOD ! This is a Real issue. My racist coolpad camera doesn't recognize my face when I select face-detection mode. It is so embarrassing !!

  • @skdooman
    @skdooman 5 years ago +16

    I appreciate the content of the video, but I wish she would have included more statistical examples. It's one thing to claim your face wasn't recognized. It's another thing to display data on many people whose faces were scanned and were or were not recognized.

  • @moneysittintall3611
    @moneysittintall3611 3 years ago +3

    Why does this have so many dislikes? She makes a valid point.

    • @impanthering
      @impanthering 3 years ago +4

      Willful ignorance

    • @GhostMillionairesTV
      @GhostMillionairesTV 3 years ago +4

      Because she doesn't make a valid point and you can't think well enough to even figure out why.

  • @maximilianpohlmann9106
    @maximilianpohlmann9106 7 years ago +1

    Thank you TED for not disabling comments!

  • @tigerlilly9038
    @tigerlilly9038 2 years ago +2

    Humans forget that computers are only as smart as you make them; there is no secret to be unfolded. This was a wonderful talk.

  • @miss118jess
    @miss118jess 3 years ago +11

    This is such a powerful talk! Let's imagine that the reason facial recognition works 'better' on light-skinned people is that cameras are good at picking up light (and not biased datasets). If facial recognition technology is not working on darker skin, then it's not working at all. It's not recognising faces! Facial recognition is still unreliable on high-quality photos of members of Congress, and honestly, surveillance cameras using this tech would be low quality anyway. AI, years later, still excludes a large proportion of the population and needs to be a big topic of discussion as society increasingly relies on its decision making.

    • @mouath_14
      @mouath_14 2 years ago +1

      Raising concern over this technology's ability, or better phrased, inability to detect and identify faces is only scratching the surface, because facial analysis is physiognomy, and we all know that idea is horrible. Yet facial analysis is also becoming super popular. Some propose complete bans from certain domains, and they are not wrong for proposing that either...

    • @karthickkrish5098
      @karthickkrish5098 1 year ago

      It isn't because of the brightness/contrast/camera quality; it's because of the datasets we have had until now. These algorithms have been trained on a certain age group and a certain colour. If the subject matches, it recognises the face in a fraction of a second; if it's quite the opposite, there's a problem. Lack of brightness/contrast/camera quality just makes the problem worse!
      I'm an MSc Artificial Intelligence student, so I know what we use to train the systems!
      It's hard to digest, but it's the truth!

    • @PatrickInCayman
      @PatrickInCayman 1 year ago

      @@karthickkrish5098 Right, so I guess Microsoft failed at this as well because their entire team didn't think to train their facial recognition on people of color... I think MIT and other AI students should go back and learn basic physics.

  • @letsgoiowa
    @letsgoiowa 7 years ago +177

    Making machines discriminate is probably _a very bad idea._

    • @ArtArtisian
      @ArtArtisian 7 years ago

      +

    • @cybercat1531
      @cybercat1531 7 years ago +6

      Computing Machines ONLY discriminate, they're binary after all. So are you a 1 or a 0?

    • @agamergirl9801
      @agamergirl9801 7 years ago

      letsgoiowa +

    • @kevinscales
      @kevinscales 7 years ago +6

      Discrimination is not a bad thing, it's what computers and minds are supposed to do. It's being unfair and not inclusive that is the problem.

    • @letsgoiowa
      @letsgoiowa 7 years ago +1

      THIS IS HOW YOU GET SKYNET

  • @erricomalatesta2557
    @erricomalatesta2557 7 years ago +21

    You could use this to your advantage instead of fixing it.
    Being anonymous these days is a gift

    • @Melusi47
      @Melusi47 3 years ago

      Snitch to the whole race. Now they will pay attention 😂

  • @hvbris_
    @hvbris_ 4 years ago +11

    She's great

  • @IshtarNike
    @IshtarNike 7 years ago +52

    This always annoys me. Taking selfies with my mates, I never get facial recognition. It's a small peeve, but it's quite annoying.

    • @ArtArtisian
      @ArtArtisian 7 years ago

      +

    • @dansadler
      @dansadler 7 years ago +1

      But it also means you have kinda natural visual surveillance protection because your face is less contrastive.

    • @premier69
      @premier69 7 years ago

      +Dan Sadler rofl

  • @missachol24
    @missachol24 7 years ago +51

    Oh my goodness, did people even watch the whole video? People are crazy 🙄

    • @jacob5208
      @jacob5208 7 years ago +4

      missachol24 the algorithm she is talking about is outdated and will soon be replaced by pattern tracking software

  • @NoahWillCrow
    @NoahWillCrow 7 years ago +123

    Aside from perhaps a poor training sample, this did not make sense. I am a software engineer interested in AI and machine learning, and the solutions this woman sets forth do not make sense. "Who codes" doesn't matter - the programmers aren't telling the computer how to recognize faces directly, but rather to analyze images and compare them with past data.
    If anything needs to be improved, it is that cameras need better contrast.

    • @dabidoe
      @dabidoe 7 years ago +3

      Perhaps not "who codes" but how computers interpret the data they have? People can complain about cameras, lighting, shadows, contrast as the reasons that they don't work but the best algorithms find new ways to get desired results with inferior input/existing bugs.

    • @brendarua01
      @brendarua01 7 years ago +20

      Noah, you miss the point of the presentation slightly. The point is that even good people with no ill will can make choices that have differential effects. If the coders and scanners had been aware of the lighting issue, they would have made adjustments. But they didn't think to check. If you're in the biz then you know these kinds of oversights are rife in development. It's why we do phased releases and controlled testing.

    • @arthurdent6256
      @arthurdent6256 7 years ago +7

      Noah Crowley Well, who's your first test sample? You and your co-workers. If you have people with darker faces working with you then you aren't going to overlook the problem. It's just common sense.

    • @NoahWillCrow
      @NoahWillCrow 7 years ago +3

      Barry Manilowa I just don't like how she is giving her argument as if it is a people problem. She made it an emotional appeal when it is purely technical.

    • @ArtArtisian
      @ArtArtisian 7 years ago +5

      She's not wrong - there's this really interesting paper from a few years back about a particular dataset used in ML research for a few decades. The improvement over time should have followed the state of the art on other tasks - but it didn't; it went up exponentially as the *researchers* got to know the data better. They started removing samples that they knew intuitively were hard to train with, or shuffling the test sets differently. Presumably all unconsciously.
      Plus, more diverse coders means more diverse test cases. Who doesn't run their handwriting recognizer on their own handwriting?

  • @tunjilegba
    @tunjilegba 7 years ago +30

    Hopefully when RoboCop comes to fruition it will mistake me for a tree 😊

    • @dividedperceptions6626
      @dividedperceptions6626 7 years ago +3

      Tunji Legba That is the kind of positive thinking we all should learn from :)

    • @jillgaumet8416
      @jillgaumet8416 4 years ago +1

      I hope we don't have Robocops. I want smart humans, not smart machines.

    • @hannahl1387
      @hannahl1387 3 years ago

      @tunji legba you can but dream.

  • @Deathmachine513
    @Deathmachine513 7 years ago +191

    Perhaps because the shadows that the software requires to detect your face are less visible on your already dark skin. Occam's razor, people: perhaps explore more likely sources before instantly jumping to racism.

    • @Dielawn69
      @Dielawn69 7 years ago +24

      Nope, it's obviously because the programmers are racist lolz. On the real though, these people don't care about the real reasons. They just care that they can misconstrue it as racism. They think instead of addressing the real problem, we should just change the laws of physics lol.

    • @tunjilegba
      @tunjilegba 7 years ago +11

      Deathmachine513 Not racism but an oversight, excuse the pun.

    • @belakthrillby
      @belakthrillby 7 years ago +22

      Deathmachine513 when did she say the coders were racist?

    • @dabidoe
      @dabidoe 7 years ago +27

      Can you dial down your white victimhood? This is an issue of a technical glitch: the technology should work, and they didn't bother to fix it.

    • @tunjilegba
      @tunjilegba 7 years ago +2

      blehTM It's not racist unless it was intentional. The word racist isn't offensive to people who are not racist. It's like calling a tall person short or a thin person fat. The people who take umbrage are usually the people the shoe fits.

  • @giordanoparisotto5617
    @giordanoparisotto5617 1 year ago +1

    Excellent!!! Loved her! She's awesome!

  • @aizenplay
    @aizenplay 3 years ago +2

    What's the name of the facial tracking system? Thanks.

  • @onnoderkman3760
    @onnoderkman3760 7 years ago +103

    I love how she is not playing the racism card, as we have seen in other scenarios. She is trying to solve the problem, instead of whining about it

    • @anthonycoleman4631
      @anthonycoleman4631 7 years ago +22

      I love that aspect of it also, but I think a large share of the viewers didn't watch the whole video, or based their rating on the caption. An overwhelming number of dislikes. Bias, perhaps?

    • @godofthisshit
      @godofthisshit 7 years ago +3

      +Onno Derkman Yet you people are still crying about the video.

    • @msms47
      @msms47 7 years ago +6

      Still not working: racist dipshits get triggered just by her being on the stage.

    • @NathanGatten
      @NathanGatten 7 years ago

      godofthisshit msm47 Hate is still hate no matter who you throw it at. No need to take a page from their book.

    • @godofthisshit
      @godofthisshit 7 years ago

      @Nathan Gatten What?

  • @jacobcromer7192
    @jacobcromer7192 7 years ago +1

    Nobody's fighting anything; no one is trying to stop you from fixing this. Why is she framing this like a civil rights cause?

  • @viperxiiii
    @viperxiiii 7 years ago +7

    Love how her example was, as she called it, a cheap webcam and not a more complex one.

  • @inachu
    @inachu 2 years ago +1

    In years to come this will be an issue. Companies will need test subjects or images of all races to make sure the technology truly works for all races.
    What if the next super-smart techie nerd is born in India and the camera only works with people from India? It can and will happen.

  • @socdemigod
    @socdemigod 6 years ago +1

    Brilliant. But I don't think anyone should want to have their face recognized by software. Doesn't that seem a bit intrusive?

  • @israelip
    @israelip 4 years ago +7

    For those who don't know how machine learning works and won't even hear her out: try reading about training sets.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      If she knew it was due to the training set and not bias, she would not have made this talk calling it bias. The AI would do the same to a white face if it was trained only on black faces.

  • @stew_baby7942
    @stew_baby7942 6 months ago +1

    Not good for some computer to judge by facial features...or to judge anyone

  • @SergioLongoni
    @SergioLongoni 7 years ago +1

    I agree with the content of the video about the potential bias of algorithms, but I have a problem with the example of face recognition. My phone has no problem tracking the speaker's face, so I think this is not a real problem for marketable applications that aren't trained on an outdated and small data sample.

  • @coulorfully
    @coulorfully 6 years ago +10

    The people programming/coding the algorithms that impact us all have implicit (internal) biases that become embedded in their code.

  • @nerdtuts797
    @nerdtuts797 7 years ago

    All those who are saying that she doesn't make sense don't know anything about machine learning. If the training data doesn't have enough images of black people, the algorithm will have a hard time detecting them. It's not about the lighting or the camera. I am surprised by the number of dislikes on this video!

  • @thierrybock3847
    @thierrybock3847 7 years ago +2

    That ain't no bug, it's a privacy feature. Don't break features.

  • @theaminswey9733
    @theaminswey9733 7 years ago +2

    Great talk. I'll leave without checking the comments section now. Thank you, Joy ❤

  • @Zoza15
    @Zoza15 7 years ago +3

    Well, I once had to put my face on a cam and the cam didn't recognize my face either.
    She actually does something about it, so why the dislikes for this video?
    I support her actions, as long as it doesn't result in consequences that leave other groups out for the sake of the main group.

    •  5 years ago

      Because she is displacing science with ideology.

  • @sundar72
    @sundar72 5 months ago

    A few years ago I was in Frankfurt airport in transit. The restrooms have automatic soap dispensers... for some reason they could only detect light-skinned hands. I am from India. There were two people in that restroom trying every dispenser, a black person and myself... we were telling each other that the damn things were broken. In walks a white guy and uses one. We asked him to try the other soap dispensers and they all worked for him! We laughed, shook our heads and moved on, saying "someone did not design and test their product right!" At the end of the day, design and test should always consider the spectrum of end users. Always remember this mantra when designing things with AI: "You are not your user"!

  • @pinegulf
    @pinegulf 7 years ago +1

    I'd like to see the code she writes.

  • @theegreatestever2420
    @theegreatestever2420 4 years ago +6

    This was such an important TED Talk. I can't believe I only recently found out about it, when diving deep into AI and using it in my apps, but I am glad I didn't find out later.
    It's unfortunate the domain name is now for sale and no longer operated by them, but I loved this.

  • @jyotiswamy6305
    @jyotiswamy6305 6 years ago +3

    Thank you Joy! This world needs you, because programmers (as evidenced by the comments) have no understanding of the SOCIAL IMPACT of their work. Of course there may be other solutions, but it is the SOCIAL structure of your field that matters. This would not be a TECHNICAL issue if every programmer were black; but because RACIAL MINORITIES are highly disadvantaged due to unethical practices and historical processes (that have kept them from learning about such software relative to others, depending on race and gender), it cannot be dismissed as a "glitch" in the system. WAKE UP PEOPLE!!! Racial paradigmatic bias exists in computer science as well. Also, this is a BLACK WOMAN talking about facial analysis software, which is changing the SOCIAL STRUCTURE of the field and is needed to prevent issues like this in the future. You can most definitely argue that there are other ways to fix this, but you can't argue with the fact that the elite of this field does not look like Joy. I swear this world needs to be more reflexive........UGH. JOY YOU ARE A QUEEN! Thank you so much for speaking up and being a voice for the voiceless in a very underrepresented field. (Simply look at the representation of the audience.) WAKE UP YALL.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      No, she is mischaracterizing the problem. The AI would behave the same way toward a white person if it was trained only on black faces. It is not bias; the dataset was skewed.

  • @BERNARDO712
    @BERNARDO712 7 years ago +1

    Great accomplishments, Joy:
    Joy Buolamwini is a poet of code on a mission to show compassion through computation. She is a Rhodes Scholar, Fulbright Fellow, Google Anita Borg Scholar, Astronaut Scholar, A Stamps President's Scholar and Carter Center technical consultant recognized as a distinguished volunteer. She holds a master's degree in Learning and Technology from Oxford University and a bachelor's degree in Computer Science from the Georgia Institute of Technology.

  • @Dee-jp7ek
    @Dee-jp7ek 7 years ago +5

    This isn't her screaming racism, kids. This is her saying that we should probably test facial recognition on a more diverse group of faces to make sure it works on more than just one face type.
    It's like attempting to make a one-size-fits-all sweater, trying it on a bunch of S-M people and saying "it fits!" without trying it on L or XL people.

    • @kiernan3148
      @kiernan3148 7 years ago

      people shouldn't be L or XL though

  • @luisfernandez7426
    @luisfernandez7426 2 years ago +4

    Great talk, Joy! It’s important that this topic is getting visibility. Great work you’re doing toward combatting these biases

  • @mr_lrb6879
    @mr_lrb6879 3 years ago +4

    I'm a bit embarrassed to admit that every time she said "Coded Gaze", I heard it as "coded gays" and got really confused about what that had to do with coding until she showed us the Coded Gaze webpage. Still a good and eye-opening talk though.

    • @tammieknuth6020
      @tammieknuth6020 3 years ago

      That would mean she's biased against gays and the LGBTQ+ community, and she's literally a different race.

  • @jddebr
    @jddebr 7 years ago

    Awful lot of folks in this comments section who don't understand what the word bias means in the scientific community. Bias is a real thing in machine learning. All algorithms have inductive bias. She is not saying that algorithms are racist...

  • @dud3man6969
    @dud3man6969 3 years ago +8

    In my opinion the ability to defeat AI facial recognition is an advantage.

  • @bunkertons
    @bunkertons 3 years ago +4

    This is so informative, thank you for sharing.

  • @Dataanti
    @Dataanti 7 years ago +4

    I bet it has more to do with the camera having a hard time picking up darker skin tones... because cameras in general have a harder time with darker colours. I don't see how you will be able to fix this without upgrading all cameras to have direct light sources or IR depth sensors. This has nothing to do with algorithms.

  • @tenacious645
    @tenacious645 7 years ago

    This is actually a legitimate video. Actually watch the fucking thing before disliking.

  • @shell9918
    @shell9918 3 years ago +1

    She is so well-spoken and likeable.

  • @matthewfanous8468
    @matthewfanous8468 7 years ago

    I was waiting for her to say "if you can't tell, this is because I am black" BUT SHE DIDN'T AND NOW I DON'T KNOW WHY THEY COULDN'T DETECT HER

  • @barneypotts9868
    @barneypotts9868 7 years ago +4

    It turns out that if you get a cheap webcam with cheap face recognition software, you don't get very good face recognition.

  • @stephenclement3349
    @stephenclement3349 7 years ago +9

    Cool initiative! I am sure coders would love you helping them identify their bugs and providing them with free data. Just make sure you remember they probably didn't do it intentionally, and approach them kindly. Otherwise you will end up as a crusader fighting someone who isn't really your enemy.

    • @Skinny97214
      @Skinny97214 3 years ago +5

      Might want to google "tone policing."

  • @timojissink4715
    @timojissink4715 7 years ago +6

    I've actually studied everything to do with 3D printing, and so also 3D scanning. I've learned that there are 3 things that are difficult for a 3D scanner to scan: the first is shiny objects, the second is translucent objects and the third is black objects...
    "Black objects" - it's true, light gets absorbed by the color.

  • @TripodJonas
    @TripodJonas 7 years ago

    Also, why not use things other than visible light to do the same thing? Darker faces are darker in visible light, not under other sources.

  • @LKKMD
    @LKKMD 3 years ago +1

    Good work thank you

  • @milanpaudel9624
    @milanpaudel9624 7 years ago +2

    WTF... what's with all those dislikes? This is a genuinely good TED talk.

  • @TheCreativeKruemel
    @TheCreativeKruemel 7 years ago +89

    MAYBE BECAUSE THE BIAS ISN'T REALLY A BIAS?!?!?

  • @canUfeelMYface
    @canUfeelMYface 4 years ago +7

    "Someone else will solve this problem. "

    • @phantomcruizer
      @phantomcruizer 1 year ago

      Yes, “Colossus/Guardian…SkyNet/Legion” !

  • @siarheipilat8152
    @siarheipilat8152 7 years ago +2

    I personally encountered that issue while working on an image processing project at my university; what does social justice have to do with it? It is a valid problem. The project was about hand detection, though, and the program was trained on white students. But wait, I just read some comments below, so I'll just go throw up. You guys enjoy.

  • @st8of1der
    @st8of1der 7 years ago +2

    Couldn't this be addressed by using a light that's outside of the visible spectrum? What about combining a high-resolution camera with infrared light?

  • @CrazySondre
    @CrazySondre 7 years ago +3

    What happened to TED Talk...?

  • @josephinegrey4517
    @josephinegrey4517 4 months ago

    I wonder how this can be applied to ageing faces?

  • @Chronomatrix
    @Chronomatrix 7 years ago +1

    Nothing wrong can come from facial analysis software...

  • @tracykarinp
    @tracykarinp 7 years ago +3

    Thank you for a "Very Informative" presentation! It's wonderful that you brought this issue to the front burner! Kudos Joy! :-)

  • @Sirius_Blazing_Star
    @Sirius_Blazing_Star 9 months ago

    If the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect...

  • @morgrimx5732
    @morgrimx5732 7 years ago +5

    It seems there is also a human bias toward facial recognition software. The bias gives it more credibility since it's a computer!

  • @Brutaful
    @Brutaful 7 years ago +61

    People complaining about the like/dislike ratio in 5, 4, 3, 2, 1.....

    • @tunjilegba
      @tunjilegba 7 years ago +19

      Brutaful People being hyper defensive over the word bias in 3...2...1...

    • @mridulpj
      @mridulpj 7 years ago

      Brutaful I'm not complaining. The like/dislike ratio is perfectly balanced. Let's keep it that way.

  • @evilplaguedoctor5158
    @evilplaguedoctor5158 7 years ago +1

    I wish she did more research, as in the details of what part of the algorithms causes them to fail with different skin colours, and how to fix those issues. Because it kind of sounds like she is just complaining, wanting others to fix this problem for her... but I could be mistaken.

  • @BornIn1500
    @BornIn1500 1 year ago +3

    oh god, they even call ROBOTS racist. Literally everything is racist to them. This is so toxic.

    • @khalillindo1918
      @khalillindo1918 1 year ago +4

      Lord. She is not calling robots racist. She is literally saying that the algorithm can't recognize darker tones because cameras rely on the reflection of light, you idiot. Watch the damn talk and know what you are talking about.

  • @leviengstrom7359
    @leviengstrom7359 4 years ago

    Why does the background look like it was built out of Solo cups?

  • @MysticScapes
    @MysticScapes 7 years ago

    She is just seeing the problem from only one perspective. Hardware like the webcam matters as much as these algorithms. I'm not a black person, and even my webcam sometimes doesn't recognize my face because I have a long hipster beard. She was trying to make this bias as political and racial as possible; however, science doesn't care about all these labels. To avoid overfitting, simply look outside the box and stop blaming others.

  • @vonneely1977
    @vonneely1977 7 years ago

    Is this to be an empathy test? Capillary dilation of the so-called "blush response?" Fluctuation of the pupil, involuntary dilation of the iris?

  • @derekvaillant6303
    @derekvaillant6303 4 years ago +2

    Take back the algorithm and open up those black boxes. Thanks for the encouraging message, Joy.

  • @sbongisenimazeka8652
    @sbongisenimazeka8652 7 years ago +3

    Black people aren't "people"... they are Gods.

  • @Tripp393
    @Tripp393 7 years ago

    Guys, this is just something she's doing with her life. It would be dumb if they didn't talk about it.

  • @robertsolem9234
    @robertsolem9234 2 years ago

    Yes, what we need to do is *improve* facial recognition technology /s

  • @Alitari
    @Alitari 7 years ago

    I agree that this is a problem, but she seems to have a real shotgun approach to trying to create new, or take over existing, phrases / memes / acronyms ... feels to me like she's hoping one or more of them will gain traction for her own self-aggrandisement ... self-promotion is one thing, but it feels like this speaker took it to another level, beyond that which TED is normally known for.

  • @Dumass88
    @Dumass88 7 years ago +1

    I really hate the word inclusion now.

  • @adamred5449
    @adamred5449 2 years ago +3

    maybe her face just looks weird

  • @Deathmachine513
    @Deathmachine513 7 years ago +103

    Sigh, social justice talk AGAIN. When will TED just stop being disingenuous and rename their channel Social Justice Talks? Because clearly that's almost all the channel is at this point.

    • @Horesmi
      @Horesmi 7 years ago +1

      Deathmachine513 meh, they do some legit stuff too.

    • @Deathmachine513
      @Deathmachine513 7 years ago

      What percentage of the content is that? Only taking into account recent content. It's clear the focus of the channel is not "legit stuff" at this point.

    • @Brutaful
      @Brutaful 7 years ago

      To be fair, I've only read the description so far but this looks like one of those videos with someone complaining about "lack of diversity" or something similar. I'm already wary about this one.

    • @Deathmachine513
      @Deathmachine513 7 years ago

      Thus far, it seems to be her complaining that it's hard to detect black people's faces. Likely because the software relies on shadows for detection, and shadows are harder to see on her dark skin. And now onto the diversity part of the talk.

    • @Deathmachine513
      @Deathmachine513 7 years ago +2

      A basic summary would be "software is racist". "Inclusive code". Oh god, we need more diversity hires!!!

  • @emmanuelezenwere
    @emmanuelezenwere 7 years ago +2

    Nice one Joy, I'll join your fight!

  • @JoaDrath
    @JoaDrath 7 years ago +3

    As long as she isn't making other people fight "algorithmic bias", I support her.

  • @dermihai
    @dermihai 7 years ago

    Wow, people do overreact... It is very true what she said; just watch the whole video. One thing though... When she says that we need diversity among coders, so that they can fill each other's gaps, I hope she means diversity of experience and of field of study, not racial/national/sexual diversity.

  • @Yirsi
    @Yirsi 7 years ago +1

    While it's true in this case that the algorithm does not work correctly, I don't think it's connected to bias at all.
    But you certainly have to point out where the problem lies within the code, so the people behind it can fix it. Focusing on that issue seems more important to me.

    • @Joe-yr1em
      @Joe-yr1em 4 years ago +2

      Bias just means it is geared towards certain features more than others. It is connected to bias. Not in the sense that you have a coder who's biased or anything, but in the sense that the model is making predictions based on datasets that don't accurately represent the target market. (A small per-group evaluation sketch follows this thread.)
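One way to make the kind of dataset bias described above visible is to report accuracy per demographic group instead of a single aggregate number. A minimal sketch with invented labels and predictions:

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy per group, so disparities aren't hidden in the average."""
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

# Toy numbers: the aggregate accuracy looks fine, but one group is served poorly.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 1, 0, 0])
groups = np.array(["lighter", "lighter", "lighter", "lighter",
                   "lighter", "lighter", "darker", "darker"])

print("overall:", float(np.mean(y_pred == y_true)))        # 0.75
print("by group:", per_group_accuracy(y_true, y_pred, groups))
# {'darker': 0.0, 'lighter': 1.0}
```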

  • @davlmt
    @davlmt 7 years ago

    Yep, the face detection on my Sony camera always ignores black people's faces and tracks white faces flawlessly.

  • @hinonashi9439
    @hinonashi9439 7 years ago +6

    Let's talk about facial recognition algorithms. There are several facial recognition algorithms whose descriptions are posted online right now. Here I will use the algorithm called "Viola-Jones" as the example, so everyone can easily understand. It works by repeatedly scanning through the image data, calculating the difference between the grayscale pixel values underneath white boxes and black boxes. So what do the black boxes and white boxes mean? Just look at your nose: the bridge of your nose is usually lighter (white boxes) than the surrounding area on both the left and right sides (black boxes). The middle of your forehead (white boxes) is lighter than the sides of it (black boxes). These are crude tests for a facial recognition algorithm, but if it finds enough matches in an area of the image, it concludes there is a face there and marks it as your face. (A minimal sketch of running this kind of detector is included after this thread.) This algorithm can't find your face if you're tilted or facing sideways to the camera, but it is very accurate for frontal faces. This is how digital cameras and smartphone cameras have been putting a square box around your face. The algorithm must do more to detect your eyes, your lips, your nose; that is where big data comes in. It gathers images from the internet, around the world, to make it more accurate. So why didn't the algorithm in your video detect your face? First, it's not the algorithm's fault; it's you. As I already mentioned above, you have to put your face in front of the camera. In the glasses video, your head is tilted and keeps moving all the time. And when you put your face in front of the camera, you have to look down, because the webcam is looking up at the ceiling; that is why I can see the light on the ceiling. Your webcam doesn't have an algorithm that will balance the light around you. Just use your smartphone or any digital camera and put it in front of the light; you will see the camera automatically balancing the light. And then you say you sat in front of a cheap camera (let me say it a second time: cheap camera), so for god's sake, are you kidding me? You know the algorithm, and you put your face in an environment where the algorithm can't detect your face. Stop lying and playing the victim, please. Maybe the algorithm is racist, biased... but it happens because you put it there; your fault. In this case, you have the bias, not the algorithm, not the people who made it.

    • @alisonnguyen4483
      @alisonnguyen4483 7 years ago

      The world needs more people like you.

    • @DeoMachina
      @DeoMachina 7 years ago

      You...made all of that up.

    • @jacobkleifges5246
      @jacobkleifges5246 7 years ago +3

      Your description of the algorithm is true and correct, however, I believe that you have missed the point; Ms. Buolamwini argues not that the code is at fault, but rather that the implications of this unintended outcome are detrimental, and that she wishes for awareness, and aid in rectifying the side effect of bias in these algorithms. Also, it was clear that the Hong Kong startup did not create their code, nor did Ms. Buolamwini. The problem is not that the code is poor. The problem is not an intended bias. The problem is not thin skin. The problem is that the flawed code is not only used but widespread. She is worried that similar flaws could exist in algorithms that are used in policing and sentencing, which would lead to greater problems with constitutional law. Regardless of political outlook, this would be a monumental issue.

    • @hinonashi9439
      @hinonashi9439 7 years ago +1

      I have already said that she put her face in front of a cheap camera. It's getting light pollution, so the algorithm can't see her face. There is no bias in any algorithm out there; people made the algorithms to serve human desires. You can see algorithms everywhere around you: your laptop, phone, car... In the end, she made it all up.

    • @jacobkleifges5246
      @jacobkleifges5246 7 years ago +3

      Again, you miss the point. She argues not that there is bias, but that these algorithms work with differing effectiveness across varying demographics, and that the implications of that flaw in more important fields could be disastrous. It is not that the algorithm is at fault, it is that the algorithm could be used, flaw intact, to perform operations that involve a necessity for absolute neutrality.
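For anyone who wants to try the Viola-Jones-style detector described at the top of this thread, here is a minimal sketch using OpenCV's bundled Haar cascade. The image path is a placeholder; histogram equalization is included because, as discussed elsewhere in these comments, this kind of detector depends on local light/dark contrast.

```python
import cv2

# Stock Viola-Jones-style face detector that ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("selfie.jpg")                  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                     # boost contrast before detecting

# Slides Haar-like light/dark rectangle tests over the image at multiple
# scales; faces with little internal contrast yield fewer matches.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(60, 60))
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("selfie_faces.jpg", image)
print(f"detected {len(faces)} face(s)")
```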

  • @garfield2406
    @garfield2406 4 years ago +2

    Could never be related to the fact that dark colors are harder to get contrast on.

  • @CandyLemon36
    @CandyLemon36 10 months ago

    This content is relevant and timely. A book I read on this topic was equally pertinent. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @jorgerincon6874
    @jorgerincon6874 4 years ago

    OK, I wasn't too keen on seeing this video, mainly because of the title, but honestly it's a good topic.