How AI Image Generators Make Bias Worse

  • Published: Aug 21, 2024

Comments • 103

  • @SankofaSongs · 1 year ago +8

    Thank you for this very important post. Given that 'seeing is believing' for many, developing critical skills for creating, curating, and viewing content is truly essential.

  • @DavidBlatner · 1 year ago +16

    The topic is fascinating and important, and the images (and especially the AI-generated videos with lip-synched audio) are stunning.

  • @originalwhig · 1 year ago +44

    Ask yourself the opposite question: what might unbiased images look like? Bet you'll find that really difficult but, even if you can come up with a description, you won't be able to find images that "everyone agrees with". The problem is that the word "biased" suggests there is some kind of neutral, objective reality that we can all agree on - but we can't. Social issues are not mathematics. All that generative AI does is reflect the world and our experiences back to us. It's not "bias" at all.

    • @gedr7664 · 1 year ago +4

      but there is a mathematical foundation in the statistics shown earlier, at least with regard to jobs and gender ratios. On the other points I'm not sure data exists

    • @gedr7664 · 1 year ago +2

      of course this would be biased towards the US

    • @engerim · 1 year ago +2

      all these biases and social norms exist even without AI. So why regulate it? It's called recency bias...

    • @BrettCooper4702 · 1 year ago +2

      The data set is biased.

    • @camrodam · 11 months ago +5

      I think you entirely missed the section on representational bias... which is exactly the issue you claim can't be fixed.

  • @lovely-shrubbery8578 · 1 year ago +11

    Yeah, it's a fantastic idea to artificially modify datasets for political reasons when using AI. That couldn't go wrong at all.

    • @batsy3 · 1 year ago +1

      no, that's dumb, the point is to make the data sets the same as real life

    • @greenockscatman · 1 year ago

      Datasets are just someone cropping a bunch of images to 512 x 512 and writing a note on each image saying what it's an image of (a rough sketch of that workflow follows this thread). It's not really feasible for governments to regulate that.

    • @armondtanz · 9 months ago

      @@batsy3 that was a sarcastic post???
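
    A minimal sketch of the dataset-building step described in the thread above; the 512 x 512 crop, the file layout, and the sidecar-caption convention are illustrative assumptions, not details confirmed by the video:

        # Python: pair center-cropped images with human-written captions
        from pathlib import Path
        from PIL import Image

        def build_pairs(image_dir: str, out_dir: str, size: int = 512) -> None:
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            for img_path in sorted(Path(image_dir).glob("*.jpg")):
                img = Image.open(img_path).convert("RGB")
                side = min(img.size)                 # center-crop to a square
                left = (img.width - side) // 2
                top = (img.height - side) // 2
                img = img.crop((left, top, left + side, top + side)).resize((size, size))
                img.save(out / img_path.name)
                # the "note" is just a caption stored in a text file next to the image
                txt = img_path.with_suffix(".txt")
                (out / txt.name).write_text(txt.read_text().strip() if txt.exists() else "")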

  • @BrettCooper4702 · 1 year ago +4

    Gender and skin colour can be defined in the text prompt. Using minimal text prompts does show a bias, but the user can correct that with good prompt engineering.

    • @LetterSignedBy51SpiesWasA-Coup · 1 year ago +4

      It's not really a bias. AI is showing you the most common result, and if you want something different than reality, that's your bias; you are welcome to introduce it with parameters.

    • @Razumen · 1 year ago +1

      @@LetterSignedBy51SpiesWasA-Coup AI doesn't reflect reality, it reflects its training data. It will ALWAYS be biased, no matter how much people like this complain about it.

    • @armondtanz · 9 months ago +1

      @@Razumen So if you were to depict a festival concert being built, how would you depict the roadies and light fitters and the people who assemble the scaffolding?

    • @Razumen · 9 months ago

      @@armondtanz Try to come up with a question that's relevant, please.

    • @armondtanz · 9 months ago +2

      @@Razumen 100% relevant. A realist like me would say old rocker types, males, big build, etc...
      Gen Z ("I'm offended by everything" types) would argue and call me a bigot...
      Literally 100% legit. Stop gatekeeping people's points???

  • @vladoportos · 1 year ago +5

    Was Midjourney trained on US-only images? How about worldwide statistics? Looks like mixing up reality and "fairness"... you get all-male top-CEO pictures because that's what you asked it to do... if you want 50/50 you need to ask for it with the correct prompt.

  • @eiRuNLiMiteD · 1 year ago +6

    why did BuzzFeed delete the Barbie article? AI terrifies me.

    • @rodnee2340 · 10 months ago

      Because it is a waste of time even for them! AI sucks, but there are much worse things it is doing. Like destroying the careers of artists and musicians.

  • @FPA4 · 1 year ago +6

    The clip at around 6:50 of the politician asking if the TikTok app accesses the home WiFi appears to be used in this video to show a politician not understanding the technology, when in fact he was asking whether the TikTok app was mapping the user's home WiFi network and sending the data back to China. One reason someone might want to do this would be to later infect, via the app, a vulnerable router that has had its internet-facing management disabled as a security measure. Compromising the router from the device side is relatively easy and would add one more node to a home-router botnet. Botnets of tens of thousands of home routers can be used to knock out websites via DDoS attacks, and frequently are. I found the video fascinating until that point, which sorta ruined the overall impact for me after that...

  • @LetterSignedBy51SpiesWasA-Coup · 1 year ago +7

    It's not really bias. AI is showing you the most common result, and if you want something different than reality, that's your bias; you are welcome to introduce it by refining your request, such as "black scientist" or "Asian inmate."

    • @thetranstan · 1 year ago +4

      the “most common result” is drawn from available datasets, not straight from real world numbers. one of the arguments here is that there is no such thing as a neutral or ‘objective’ database, so any data used by AI generators is already skewed by bias.

  • @Exegesis66 · 4 months ago

    We definitely need to define "bias" in any discussion of this. The issue of representational bias has to do with whether AI should show the world as it is, or as it could be, right? If my daughter asks Dall-E to produce a picture of a CEO and sees only men, that reinforces the idea that this career path is not for women. Surely within the four images it offers, one can be a woman. The same with astronaut, doctor, lawyer, but also bricklayers and construction workers. What they will do eventually is build in diversity as the default, and then you can search more specifically in your prompt after that.

  • @MikhailKutuzov-wx2gy · 6 months ago

    Okay, yes to all the contents in this video. Generative AI does produce stereotypical outputs, and programmers should and do treat those biases in datasets all the time (with mixed results). We understand that AI cannot be trustworthy if it learns biased associations or propagates social injustices.
    But not all algorithmic bias originates from training data.
    There's a misconception that all AI does is "hold up" a mirror to society, but the AI experts who actually model algorithmic bias look at sources of bias across the entire AI learning process. This ranges from intrinsic tendencies in the neural net (like temporal biases) to the way users sometimes attribute "objectivity" to an algorithmic output.
    I think the problem with the "mirror" idea is that it gives AI enthusiasts an excuse to say, "oh, AI is not biased, we are." Or even, "aha, there was no bias. We just live in an unfair world." Sometimes that's true, other times not. Bias is a relative concept. You can train an AI to reflect your prejudices about the world, or you can "naturalize" your prejudices by assuming a stereotypical output is an accurate reflection of reality. It may not be. Some things to pay attention to are what data is included in the model, how that data is labeled, and what reinforcement protocols are used to train your product.
    My argument, as somebody who works with generative AI every day and teaches AI ethics, is that there is no magic escape route that lets either users or programmers avoid reflecting on how AI works and how it is biased, or being honest about what their expectations for AI are. You should not look to AI for an accurate description of the world (that's what other academic scholarship is for) because AI models have to be selective about their data pipeline to deliver the particular functionality desired.
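
    A toy illustration of the kind of dataset audit the comment above points to (what data is included, and how it is labeled); the records and label names here are hypothetical, not from any real dataset:

        # Python: measure how occupation captions co-occur with demographic labels
        from collections import Counter, defaultdict

        # assumed metadata records; a real audit would read these from dataset annotations
        records = [
            ("CEO", "man"), ("CEO", "man"), ("CEO", "man"), ("CEO", "woman"),
            ("nurse", "woman"), ("nurse", "woman"), ("nurse", "man"),
        ]

        totals = Counter(occ for occ, _ in records)
        pairs = Counter(records)
        by_occ = defaultdict(dict)
        for (occ, label), n in pairs.items():
            by_occ[occ][label] = n / totals[occ]   # label share within each occupation

        for occ, shares in by_occ.items():
            print(occ, {label: round(share, 2) for label, share in shares.items()})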

  • @JS-vn9zj · 1 year ago +12

    Fantastic use of AI multimodal tools to make the case and show how bias manifests and spreads through use of AI. Please keep creating and sharing these exceptional learning tools.

    • @jasonchatto · 2 months ago

      The only bias I see is in the narrative. AI just reflects reality.

  • @causeitso · 8 months ago +2

    Isn't it also a bias to assume that the janitor or social worker professions are inferior to doctors and engineers?

    • @sussybaka6347 · 24 days ago

      those jobs are OFTEN (maybe not always) "inferior" in terms of income compared to a doctor/engineer, and that is generally our objective standard.

    • @marshallodom1388 · 14 days ago

      @@sussybaka6347 um, that "objective standard" IS the bias

  • @brackcycle9056 · 1 year ago +1

    There is bias in this too... you portray being a CEO & being a criminal as being different.

  • @smule77 · 1 year ago +8

    AI depicts more or less what's real - and not some utopia where everybody can be anything. That's just not how it is.
    Instead of going on about why AI is biased and "bad" (when it's not really), it would be much wiser to tell people - from a very young age - that they don't have to take stereotypical depictions of society too much to heart, and to follow what they want to do and be who they choose to be. But one should also be honest that most people don't end up where they dreamt of being - most people will have to make some compromises in their lives.
    If you're smart enough to study medicine, no one is going to stop you because you're female and doctors "are usually male". Your chances depend a lot more on individual circumstances like class, ability, and determination than on "harmful stereotypes", which really only harm those who let themselves be influenced by them.
    That's the devil that should be fought, not the stupid AI pictures.

    • @Razumen · 1 year ago +1

      "AI depicts more or less what's real"
      Wrong, it depicts what it's trained on. These models are not AI, and have no ability to distinguish what's "real", much less any sort of idea what "real" means.

    • @ardoren5442 · 1 year ago

      @@Razumen What PRECISELY leads you to believe that the datasets don't accurately mirror the real world? Is it based on your own biased perspective of how you wish the world's numbers to be? If you're not familiar with the dataset's composition and its creation process, how can you definitively assert that it exhibits an undesirable bias and doesn't genuinely portray reality as it is?

    • @Razumen · 1 year ago +1

      @@ardoren5442 All datasets are biased, especially when we're talking about things like photographs, which have to be taken and collected by someone, who will be affected by their own biases, whether those come from within or are imposed on them by their environment.
      A better question is, how can YOU know they do represent reality accurately? If that's what they claim to do, and you think that's what they do, you should be able to confirm this somehow.

    • @SergyMilitaryRankings · 9 months ago

      @@Razumen In America most high-paying jobs are held by men, and most white-collar jobs are held by white people.
      These are not Offensive or rAcIsT, it's just reality

  • @spookymulder945 · 1 year ago +7

    What? Central and South America have A LOT of lighter-skinned people because many Italians, Spanish, Germans, French, and other Europeans migrated there.
    We are not all short and brown. Lol

  • @makaila8860 · 1 year ago +4

    yea.. AI is scaring me

  • @africaart · 1 year ago +1

    you can prompt it to change the race or gender of your result

  • @jshap31 · 1 year ago

    The feedback loop point is really interesting

  • @poisonapple146 · 7 months ago

    Everyone who uses the internet should take the time to understand feedback loops. That’s exactly what you get when you spend a lot of time on apps that customize content based on past viewing habits etc. When I notice that I’m constantly getting the same type of content I make a point to search for totally new topics to switch things up.
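
    A toy model of the feedback-loop idea above, applied to generative training data rather than recommendations; the starting share, the mixing weight, and the sharpening exponent are illustrative assumptions, not measurements from any real system:

        # Python: bias amplification when model outputs are scraped back into training data
        def sharpen(p: float, gamma: float = 1.5) -> float:
            # generators tend to over-produce the majority mode; gamma > 1 models that
            return p**gamma / (p**gamma + (1 - p)**gamma)

        share = 0.60  # type-A images start at 60% of the training data
        for generation in range(8):
            output_share = sharpen(share)             # the mix the model emits
            share = 0.5 * share + 0.5 * output_share  # outputs fold back into the data
            print(f"gen {generation}: type-A training share = {share:.3f}")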

  • @user-ue8li7ni4b · 7 months ago +1

    What editing software did you use when editing this video?

  • @JeffreyHamlin · 1 year ago

    Pandora's box has been opened. Image datasets are but one source of bias; consider insurance, where GLMs are being used in risk assessment, medicine, etc. These models are in use around the world and are being trained on new data all the time - how can we possibly put the genie back in the bottle?
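
    For context, a minimal sketch of the kind of GLM risk model the comment refers to, fit on synthetic data; the features, coefficients, and sample size are invented for illustration and are not taken from any real insurer:

        # Python: logistic-link GLM for claim risk on synthetic data
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1_000
        age = rng.uniform(18, 80, n)
        prior_claims = rng.poisson(0.3, n)
        # synthetic ground truth: risk rises with prior claims and (slightly) with age
        logit = -2.0 + 0.8 * prior_claims + 0.01 * (age - 40)
        claimed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([age, prior_claims]))
        model = sm.GLM(claimed, X, family=sm.families.Binomial()).fit()
        print(model.summary())  # any skew in the historical data flows into these coefficients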

  • @extremekiller1205 · 4 months ago

    Having to give an example like "93% of prisoners are men" shows how woke it is to represent all jobs equally by gender. I wish for a future where examples like that aren't needed to counter woke ideology.

  • @izzyworks8789 · 1 year ago

    Ignoring those who wish to pander, AI bias will ultimately revolve around the grey zone where one's frame of reference is part of the facts; we can't model our reality, only approximate it. So I'd use that as an argument that this bias is a free-market problem to be solved. We need to protect against monopoly and censorship of ideas. We need to support open source and, ideally, regulate the technology as a utility, not a luxury service. In a perfect world, there would be models built by government branches, by specialized NGOs, by for-profit private entities, by artists, through public companies, etc. Let the effectiveness of said data be the real measure - garbage in, garbage out will do the weeding.

  • @jasonchatto · 2 months ago

    AI just represents reality; it is not biased. What YOU WANT TO DO is make it biased. FACT.

  • @user-uj1ub2zr9z · 1 year ago

    Wow this has really made me think. Great vid

  • @Itsprez93 · 10 months ago

    I don't usually comment on ads that appear on videos I watch, but this one triggered me. The quote at the start - humans are biased; generative AI is even worse - should read the other way round: AI is the result of humans, and the problems will always lie with humans. So the dataset put into the algorithm would have to be regulated, as you said in the video, because preconceptions and biased views sound like where the problem starts.

  • @anniecberry · 1 year ago +3

    Love it!!! Great topic

  • @ImprovementGang · 1 year ago +4

    This NEEDS to be discussed!
    Stories by nature are stereotypical, so it's NO surprise that images, films, and other media forms have a distorted perspective of the world.
    Where the data comes from matters too. If the database came from India, I'd bet the images would be dramatically different.
    Also, who is to say what is fair representation? I am Latino and have always loved rap, and I want to listen to GREAT rap artists NO matter their ancestry.
    It's about HOW we identify. Identifying with a group because of skin color/ancestry is just TOO simple. I'd say we should start to align with principles, ideas, and values RATHER than ancestry. Is that such a bad idea? Or is it too simple as well?

    • @armondtanz · 9 months ago

      In the woke community, that's white suprem talk!

    • @anthonyprice1743 · 8 months ago

      Just keep capitalizing non-proper nouns. It'll get picked up 😂

  • @cristianymiguel6432 · 11 months ago

    Next time try prompts on Leila Lopez (Miss Universe 2011) as African Barbie.

  • @armondtanz · 9 months ago +2

    This sums up woke BS.
    "We want a 50/50 in the good stuff, but not a 50/50 in the bad stuff."
    Slow handclap, you're saying the quiet parts out loud...

  • @inongekhabele · 11 months ago +1

    So... we should train AI to depict the utopian world we want, instead of the factual data of the world as it is. Is that not deception?

  • @greenockscatman · 1 year ago +2

    Well, the solution to this problem would be to make more datasets of underrepresented folks and train AI with them. Funnily enough, what might be missed in the "skin tone" discussion is that the datasets behind, say, Stable Diffusion models aren't actually skewed towards overrepresenting what you might think of as European features; they tend to be skewed towards East Asian features instead.

  • @petem3883 · 1 year ago +9

    An AI portraying the real world accurately means that the AI is well designed.

    • @thetranstan · 1 year ago +1

      the AI portrays *representations* of the real world based on datasets that are in turn also representations of the real world. available datasets are not one-for-one statistics of real-world numbers. a representation of a representation often distorts the original - the video specifically talks about this 'feedback loop'.

    • @Gauldoth06 · 1 year ago

      @@thetranstan "feedback loop" where did you study and what do you do for a living? A "feedback loop" is a separate problem from what people call "AI bias". "Available datasets are not one-for-one statistics" - again, this is not a problem with the data but with our world. If you really think the average cleaner or fast-food worker is white, then you need a reality check. Our world sucks. What will you do about it? Fight for equal rights, or pretend the problem is not happening by forcing statisticians to produce actually biased data? 🤡 You are not helping.

    • @ardoren5442 · 1 year ago +2

      @@thetranstan You seem to be suggesting that these representations might not be accurate. But what if they actually are? What if the datasets truly reflect the real-world numbers as they are? In that case, criticizing the representation would simply mean disagreeing with the factual state of affairs. For instance, if there are more men working as bricklayers and more women working as caregivers worldwide, and you dispute this representation, it doesn't necessarily imply that the dataset is biased. AI doesn't take into account your personal view of reality or the specific outcomes you desire from the representations. However, you can certainly customize your prompt to instruct the AI on the exact representation you're interested in. This can be done by introducing your own bias through the way you frame your input.

  • @Edgeye · 10 months ago +1

    They are not biased. Most people from Latin American countries (indigenous and original) were not black. This is not biased; you are biased. This is not racist; you are. You are racist for pushing on people the belief that for something to be equal, fair, and racially accepted, it must include black people entirely - no, it must not. We don't go into Africa saying there should be more white people there, and that Africans are racist just for their worldviews and traditions, do we? So why should white people have to face this constantly exercised lack of respect? Not everything needs to be about black people, and not everything needs to be about white people. Just because an AI showed Latin American Barbie dolls - LATIN AMERICAN, not African - does not mean it is racist. ALSO, WHY SHOULD IT BE AFRICAN WHEN THE DOLLS ARE LATIN AMERICAN?

  • @lexaviles1378 · 1 year ago +1

    As someone who uses AI and loves what it's been doing for me, this is 💯 important to discuss.

  • @RobertLoud-ft4gk · 1 year ago

    I wonder if "AI" will fight against "AI". Is that possible?

  • @j7ndominica051 · 3 months ago

    Woke people want to have their cake and eat it too, eh? If you want an even split among the CEOs, then you also have to have women plumbers, women construction workers, women in other dirty, taxing jobs - and women prisoners. You can't pick and choose.
    It's up to the artist to show the aspects he wants. With AI, he can try to give a prompt that combines less common attributes. How would you feel if there was a law requiring you to switch your talking CEO mid-programme to a more standard one for fair representation?
    The AIs are already heavily limited for NSFW and deepfakes.

  • @feagaifaalavaau392 · 1 year ago

    I'm getting INTERFACE vibes rn.

  • @Fres-no · 1 year ago

    Food for thought...great vid!

  • @cortext_io · 11 months ago

    Why have government regulation on this specific issue?
    Create the option to select "Make the AI-generated images I see representative of [Reality, Population, Birth Rates, What my leaders want for me, What'll make me feel good]"... You're saying "It's good that THAT is true for THEM but not for THEM OVER THERE"... that is a fault in the truth-telling of this video

  • @MtheBarbarian · 9 months ago

    Based ai

  • @user-fp8ov6gc3m · 1 year ago

    Woah!

  • @gigicollins3498 · 1 year ago

    Skynet

  • @sierralvx · 11 months ago +1

    I don't see why this is so alarming; the AI is not to blame for these biases, but the existing stereotypes and prejudices that already exist in culture and are shared on the internet. That's all the AI is drawing from, so of course it would make these images if requested to. To treat the software as biased itself, like it has agency or ill intent, isn't fair, since it's only a generator, not a creator.
    To put it another way, it's a mirror of humanity, both good and bad.
    Stop using AI image generators altogether and you avoid this.

  • @tareaslizeth4376 · 10 months ago

    Thank you for this useful information.

  • @marthakatharina3491 · 11 months ago +1

    AI is just holding a mirror to us. It's time to face our own biases.

  • @filicefilice · 10 months ago +1

    Save Ukraine!

  • @DanODNC1 · 1 year ago

    The gun wasn't included by a prompt? Really? I smell bullshit.

  • @petem3883 · 1 year ago +1

    Seems like the problem here is that you have a woke bias.

  • @GQ2593 · 1 year ago +2

    Reality isn't egalitarian. AI understands this, woke academics not so much.

    • @thetranstan · 1 year ago

      AI only understands the available datasets, which, as any academic knows, are not the same thing as real-world numbers or "reality"

  • @404TVfr · 1 year ago +1

    Cringe.

  • @weelewism8442 · 10 months ago +1

    first world problems much? 😂

  • @matulopez5347 · 1 year ago

    Kek

  • @marshallodom1388 · 14 days ago

    this video made me vomit

  • @SergyMilitaryRankings · 9 months ago

    Who cares?

  • @SergyMilitaryRankings · 9 months ago

    So you're upset at statistical reality lmao

  • @Razumen · 1 year ago +4

    Yes, because a black coat jacket with no other distinguishing details is SOOOO reminiscent of a Nazi SS uniform. 🙄
    This video really reaches in its attempts to drum up outrage. 🥱

    • @suburbanyobbo9412 · 10 months ago +2

      What the conclusions illustrate first and foremost is the bias of the author.