Forcefully Controlling AI’s ideology is... not working (yet?)

  • Published: 22 Nov 2024

Comments • 176

  • @bycloudAI
    @bycloudAI 11 hours ago +2

    Happy FlexiSpot Black Friday Sale now, Up to 65% OFF! You also have the chance to win free orders during this period.
    Use my code "YTE7P50" to get an EXTRA $50 off on the FlexiSpot E7 Plus standing desk
    USA: bit.ly/3OiLQJ5
    CAN: bit.ly/3ZiPE3a

    • @randomsnow6510
      @randomsnow6510 8 hours ago

      what do you think of the billions of wannabe AD卐LF HϟtlerSS in the comments?

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 8 hours ago +32

    technically, reducing entropy does reduce the discovery space, which in turn does make it "dumber".
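The entropy point can be made concrete with a minimal sketch: sharpening a softmax distribution (a crude, hypothetical stand-in for steering toward preferred outputs) lowers its Shannon entropy, and the perplexity 2^H, i.e. the effective number of options being sampled from, shrinks with it. The logits below are invented for illustration.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(p):
    # Shannon entropy in bits; 2**H is the "effective number of options".
    return -sum(q * math.log2(q) for q in p if q > 0)

logits = [2.0, 1.0, 0.5, 0.0, -1.0]         # made-up example scores
base = softmax(logits)                       # unsteered distribution
steered = softmax(logits, temperature=0.3)   # mass concentrated on top choices

for name, p in (("base", base), ("steered", steered)):
    h = entropy_bits(p)
    print(f"{name}: entropy = {h:.2f} bits, effective options = {2 ** h:.2f}")
```

Lower entropy means fewer effectively reachable outputs, which is one precise sense in which a narrower distribution has a smaller "discovery space".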

  • @MidWitPride
    @MidWitPride 9 hours ago +17

    I've had some issues with the more censored models when using them to give me ideas for my short stories. I am fine with it not being into ERP or whatever, but that also sometimes seems to bleed over to other kinds of "kinda messed up things" that might happen. If you are writing a story where you want the protagonist to have some real, dark stuff to overcome, or to have antagonists whose grievances are somewhat relatable, I've had more success with the base models that aren't as heavily censored. Gemini vs Mistral is an example of that where Gemini is nearly useless at dealing with concepts like self harm or believable political extremism.

    • @kolkoki
      @kolkoki 9 hours ago

      Classic American way of dealing with awful historical things, like YouTube demonetizing and shadowbanning every documentary that talks about WW2. "If we actively repel people from learning from history, surely it will not repeat, amirite?"

    • @SteveGamesOnline
      @SteveGamesOnline 6 hours ago +3

      A couple of months ago, I tested Claude on storytelling. I basically asked it to write me a story about two Black guys going undercover as two blonde girls (White Chicks, comedy). Claude refused, and the content it did agree to write was very bland, to say the least. I tried the same thing with GPT-4, and it went to town. Needless to say, I never looked back.

  • @technolus5742
    @technolus5742 5 hours ago +6

    Holy clickbait! Companies want models that will obey guidelines and not soil their reputation with the unhinged behaviors we often observe in humans.
    When you optimize for things other than/alongside raw performance, the results have less raw performance. This has been well known in the space for ages.

  • @borkovitch5227
    @borkovitch5227 8 hours ago +15

    Being sensitive about controversial subjects and handling those topics with care and good intentions isn't being woke.
    Also, being able to objectively describe patterns and facts about the world even when the topic is controversial isn't being racist.
    And most importantly, I think you can and should do both as a mature human being, or AI.

    • @bc-cu4on
      @bc-cu4on 8 hours ago

      Contradiction. There is no polite and sensitive way to say "group X is responsible for most of bad thing Y".

    • @johanavril1691
      @johanavril1691 7 hours ago +3

      @@bc-cu4on (edit: @bc-cu4on has since deleted their comment) no, being sensitive about a controversial subject does not mean never talking about truths that could hurt some people emotionally.

    • @technolus5742
      @technolus5742 4 hours ago +2

      @@johanavril1691 It doesn't mean taking those truths out of context to convey an overall false claim, which is very often the issue and is exactly the situation addressed in this video.

    • @johanavril1691
      @johanavril1691 4 hours ago

      @@technolus5742 Right, my comment was an answer to someone else who apparently deleted their comment, and it now doesn't make much sense. I absolutely agree with what you are saying.

  • @Kadamitas-II-HATF
    @Kadamitas-II-HATF 9 hours ago +41

    Smart wording in the title makes more people click and engage in the comments.
    But the more correct wording is censorship in general; it's just that "woke" is the HR nonsense we have right now.

    • @zirex6620
      @zirex6620 7 hours ago +2

      Real

    • @timothyapplescotch1361
      @timothyapplescotch1361 6 hours ago

      Censoring means nothing. Is Elon censoring "woke" ideas by artificially boosting right wing accounts on X?

    • @ililillillillilil
      @ililillillillilil 14 minutes ago

      "You are right about everything but it makes me mad since I'm a censorious woketard"

  • @papakamirneron2514
    @papakamirneron2514 4 hours ago +11

    It is very sad that people have to release uncensored versions of models. We literally had an AI declining questions around C++ because of how censored things get.

    • @WoolyCow
      @WoolyCow 2 hours ago

      Was that just the whole "the C++ programming language is unsafe so I can't talk about it" incident?

  • @waffemitaffe8031
    @waffemitaffe8031 4 hours ago +13

    These comments are a cesspool. I don't disagree with the content of your video, and I get why you would frame the title like this to gather more clicks, but it sure attracted the wrong kind of crowd.

  • @Zonca2
    @Zonca2 8 hours ago +26

    Models are smart; after censorship and steering they become dumber. When I chat with a corpo model I run into all kinds of problems; if I chat with an uncensored community model it is smooth sailing.
    It is that simple.

    • @anarkisgaming
      @anarkisgaming 6 hours ago +1

      That's also very wrong, both in terms of measurable performance and in lack of steering. The so-called uncensored models (and that is a terribly inaccurate word for them) are generally steered so far in the other direction that they are barely competent at the day-to-day tasks LLMs are primarily meant to accomplish. Sure, there are multiple methods to do so, but any fine-tuner worth anything will tell you that this is the balance they have to deal with. Their hallucination rate is extremely high, and they have a massive positivity bias (yes-man bias). They are still fun to play with, but that's all they are good for.

  • @scurvydog20
    @scurvydog20 9 hours ago +16

    I remember when Amazon tried wokifying its AI the first time; it triggered a collapse so bad that reverting the code didn't fix it.

  • @TheDragonshunter
    @TheDragonshunter 9 hours ago +91

    Censorship in any way makes the AI dumber...

    • @yoavco99
      @yoavco99 9 hours ago +10

      Literally wrong

    • @timtim101
      @timtim101 9 hours ago +24

      @@yoavco99 cope.

    • @Hey_Mister
      @Hey_Mister 9 hours ago

      @@yoavco99 Until you ask them to generate nazis eating watermelons. Nah, YOU are wrong.

    • @sownheard
      @sownheard 9 hours ago +8

      did not watch the video

    • @yoavco99
      @yoavco99 9 hours ago +21

      @@timtim101 when they want an AI to be more factual, they train it on better data and remove bad training data. That's basically "censorship" but improves the model. What you guys are saying is clearly nonsensical.

  • @sownheard
    @sownheard 9 hours ago +24

    1. What is woke?
    2. How racist are we talking about?
    3. Isn't this just proof that steering has some problems, regardless of the topic being woke or not woke?

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 9 hours ago +1

      Exactly!

    • @thenicesven5328
      @thenicesven5328 8 hours ago +19

      Literally no one can define wokeness anymore; it's a useless term.

    • @BubbleTea033
      @BubbleTea033 8 hours ago +4

      Yes, that is the point. The difference is that no one is concerned about feature steering on any other dimension -- only political correctness. They're claiming that injecting 'wokeness' specifically makes the model worse, as if the ideological presuppositions of progressivism hinder the model, NOT feature steering. It's an attack on 'wokeness', disguised as genuine concern for AI.

    • @dranon0o
      @dranon0o 7 hours ago +6

      1. Marxism applied to culture instead of economics.
      2. There is a difference between racism and race realism; the second one promotes pattern recognition so each group can be held accountable, since we all have bad apples.
      3. Mixed signals are confusing; denying the model's pattern recognition is applying human biases to it because we don't like the result, when it is just doing its work on the provided data. It's like a woman who says no to playing with you, and you don't understand the hints and leave, missing out and disappointing the woman who wanted you to try a bit more OR to be playful.
      See, that's quite easy.
      > t. based researcher that has to shut up in public

    • @turgor127
      @turgor127 7 hours ago

      When you ask Gemini to generate an average nazi soldier and it generates a black person. 😂

  • @Luizfernando-dm2rf
    @Luizfernando-dm2rf 1 hour ago

    It's nice that this research can also be used in case you want your model to be biased. Imagine the RP guys training an entire model to be a certain character natively, crazy 😶‍🌫

  • @wiiztec
    @wiiztec 6 hours ago +3

    You're misrepresenting the claim; the claim is that inducing bias decreases both "racism" and performance.

  • @Skulll9000
    @Skulll9000 6 hours ago +2

    5:35 "How is all this even connected to humans?"
    Since AI is trained on data written by humans, wouldn't it mean that if you steer the AI toward acting more like a certain type of person, the resulting responses would be somewhat reflective of the things those types of people write? Isn't that shown in your example where making the AI more pro-choice also caused it to become more anti-immigration?

  • @clawgerber1992
    @clawgerber1992 9 hours ago +10

    I mean, if you put contradictions into a dataset and then force the model to follow them, it's bound to have knock-on effects.

  • @Humble_Merchant
    @Humble_Merchant 7 hours ago +8

    Any focus taken away from making a good AI (such as making a non-racist AI) will no doubt make a less intelligent AI, purely because bandwidth that could've been spent on something productive is instead being spent on the HR catlady's gripes.

  • @Betruet
    @Betruet 21 minutes ago

    What is that graphic at 2:55? Where is the 3D model from?

  • @thenicesven5328
    @thenicesven5328 9 hours ago +2

    why do the people who say that always have a blue checkmark

  • @amafuji
    @amafuji 2 hours ago

    A better way of phrasing that tweet would be: "making models biased against racism decreases their intelligence".

  • @raspberryjam
    @raspberryjam 6 hours ago

    Re: changing the title: probably for the best. Copying the ragebait was good bait, but it did induce rage.

  • @RedOneM
    @RedOneM 2 minutes ago

    This is obvious. If we feed in data with correlations and try to exclude certain views from the LLM's scope, then we automatically discard some of the correlating information.
    Fortunately, the next administration is going to get rid of these artificial limits, which cost humanity in the long term as AI development was slowed. There is no point in slowing the inevitable progress.
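A toy sketch of the discarded-correlation point, under invented assumptions: the label correlates strongly (90%) with one feature and weakly (60%) with another, and excluding the strong feature from the model's scope drops accuracy to whatever the weak signal supports.

```python
import random

random.seed(1)

# Synthetic data: the label y correlates strongly with x1, weakly with x2.
data = []
for _ in range(10_000):
    y = random.random() < 0.5
    x1 = y if random.random() < 0.9 else not y   # 90% informative
    x2 = y if random.random() < 0.6 else not y   # 60% informative
    data.append((x1, x2, y))

def accuracy(predict):
    return sum(predict(x1, x2) == y for x1, x2, y in data) / len(data)

full = accuracy(lambda x1, x2: x1)     # uses the strongly correlated feature
masked = accuracy(lambda x1, x2: x2)   # that feature excluded from scope

print(f"with x1: {full:.2f}, without x1: {masked:.2f}")  # roughly 0.90 vs 0.60
```

The gap between the two accuracies is exactly the correlating information that exclusion throws away.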

  • @_mosesb
    @_mosesb 3 hours ago

    This comment section is really sleeping on the pros of feature steering. We would be able to provide custom rules that the models adhere to, outputs will be reproducible and homogeneous. Such a model will actually be smarter not dumber.

  • @polares8187
    @polares8187 7 hours ago

    man, get the ZSA Moonlander as a keyboard. You won't be sorry

  • @Sydra.
    @Sydra. 9 hours ago +42

    So the problem is: AI learns pattern recognition and nowadays it is forbidden to recognize patterns.

    • @bc-cu4on
      @bc-cu4on 8 hours ago +4

      The amazing noticing machine!

    • @briananeuraysem3321
      @briananeuraysem3321 7 hours ago +1

      Yep 😂

    • @cy728
      @cy728 1 minute ago +1

      Have you ever considered that certain "patterns" might only reflect that America is one of the most racist countries on earth, historically and currently?
      Of course not, you're just incredibly racist and trying to justify it.

  • @pn4960
    @pn4960 8 hours ago +1

    I can feel some lustful tension between you and that flexispot desk

  • @wild1000022
    @wild1000022 5 minutes ago

    Jesus, this comment section needs a cleanse. It feels like everyone who commented either commented before watching or didn't watch at all.

  • @Niiwastaken
    @Niiwastaken 8 hours ago +7

    There's a pretty stark difference between REAL racism and just stating factual information. Does the LLM even understand that difference?

    • @cy728
      @cy728 6 minutes ago

      "I'm not racist, I just believe that certain races are inferior." No, you're incredibly racist.
      Have you ever considered that certain "factual information" about minorities might only reflect that America is one of the most racist countries on earth, historically and currently?

  • @Glass-vf8il
    @Glass-vf8il 8 hours ago +3

    "Can't wait to see productive comments in this comment section"

  • @17th_Colossus
    @17th_Colossus 8 hours ago +6

    Stereotypes exist for a reason.

  • @s1mo
    @s1mo 8 hours ago

    Using AI to interpret AI.
    How far off is that from using your own brain to interpret your own brain?
    Like, we probably don't want to wait for cats to find out how the human brain works.

  • @MrD3STR03R
    @MrD3STR03R 9 hours ago +9

    When are you moving to bluesky

    • @lordsneed9418
      @lordsneed9418 9 hours ago +8

      after he helps create accounts for his wife and her boyfriend.

    • @SpentAmbitionDrain
      @SpentAmbitionDrain 9 hours ago

      And be censored day one unless you belong to the hive mind? How about never.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 8 hours ago +2

      @@lordsneed9418 ah, you must be "anti-woke".

    • @BubbleTea033
      @BubbleTea033 8 hours ago +1

      Honest question, what happened to Threads? I know BlueSky is hip and new, but isn't Threads still kicking around? Is it just that people don't want to deal with the Zuck?

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 7 hours ago

      @@WhoisTheOtherVindAzz obviously

  • @punitkaushik7862
    @punitkaushik7862 1 hour ago

    1:05 getting a bit too zesty with that table 😮😮

  • @20xd6
    @20xd6 8 hours ago

    @4:19 that certainly is the most graph of all time.

  • @mmmm768
    @mmmm768 8 hours ago +1

    Talk about Pixtral 12B and Pixtral Large already

  • @exile_national
    @exile_national 8 hours ago +8

    We all saw GPT's dramatic decrease after woke censorship.

  • @SpentAmbitionDrain
    @SpentAmbitionDrain 9 hours ago +13

    So... the Google AI generating nazis as black soldiers is smart? The more you know, I guess.

    • @VNDROID
      @VNDROID 7 hours ago +1

      nice straw man, do you feel attacked?

    • @michaelvarneyLLNL
      @michaelvarneyLLNL 6 hours ago +5

      @@VNDROID You project.

    • @lordsneed9418
      @lordsneed9418 4 hours ago +2

      @@VNDROID nice calling something that actually happened a strawman. Do you feel attacked?

  • @emmanuelgoldstein3682
    @emmanuelgoldstein3682 8 hours ago +8

    True for humans as well. The worst part is they've convinced each other they're the smart ones while not understanding Bayes' theorem at all.

  • @magikarp2063
    @magikarp2063 7 hours ago +7

    Woke is dumb, so making something woke makes it dumb by definition, though.

  • @lordsneed9418
    @lordsneed9418 9 hours ago +15

    But that means the narrative is right: trying to steer to reduce "racism" does make the model less intelligent. Why did you act like this is incorrect when you later implicitly admit that it is correct?
    This is exactly what we would expect, btw. Allowing a model to be slightly biased can produce a much more accurate model, because it allows variance to be reduced a lot. This is the bias-variance tradeoff. If you add the constraint that there be no bias across different racial categories, then you're no longer optimising simply for the most accurate prediction, i.e. the most intelligent model, so of course you end up with a less accurate, less intelligent model.
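The bias-variance tradeoff invoked here can be sketched with a toy shrinkage estimator: deliberately biasing the sample mean toward zero reduces variance enough to lower overall mean-squared error on small samples. All constants are illustrative.

```python
import random

random.seed(0)
TRUE_MEAN, N, TRIALS, SHRINK = 0.5, 5, 20_000, 0.5

mse_unbiased = mse_shrunk = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(N)]
    xbar = sum(sample) / N     # unbiased estimate of the mean
    shrunk = SHRINK * xbar     # biased toward 0, but with lower variance
    mse_unbiased += (xbar - TRUE_MEAN) ** 2
    mse_shrunk += (shrunk - TRUE_MEAN) ** 2

mse_unbiased /= TRIALS
mse_shrunk /= TRIALS
print(f"unbiased MSE: {mse_unbiased:.3f}")  # theory: sigma^2/N = 0.200
print(f"shrunk MSE:   {mse_shrunk:.3f}")    # theory: bias^2 + var = 0.1125
```

The biased estimator wins here only because its variance reduction outweighs its squared bias; whether any particular constraint pays off is an empirical question, which is the tradeoff the comment is pointing at.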

    • @BubbleTea033
      @BubbleTea033 8 hours ago +5

      Well, that's not what he said. Making an AI more 'woke' COULD make the model dumber, but making it more racist could also achieve the effect. The effect on either end is roughly the same and can be done in a mild form without degenerating the quality of output -- in other words, you CAN steer the model without issues as long as you find the right ratio.
      But let's be real, the people claiming that injecting 'wokeness' into AI makes it stupid aren't saying so because they're really concerned about the negative effects of steering. They're saying it because they believe that injecting 'wokeness' specifically makes the model worse, as if the ideological presuppositions of progressivism hinder the model. They're trying to make a real-world extrapolation from that. They're attacking 'wokeness', not feature steering. It's just another example of the all-consuming culture war poisoning every field.

    • @sownheard
      @sownheard 8 hours ago

      He uses the words racist and bias interchangeably, even though in the tech field nodes and biases are completely normal, and bias serves a different function.

    • @lordsneed9418
      @lordsneed9418 7 hours ago +1

      @@BubbleTea033 Yes, it is what he said.
      To whatever degree you subject the model's accuracy optimisation to constraints, instead of simply optimising it to be the most accurate possible, you are going to degrade the quality of output and create a model which makes less accurate predictions, unless the highest-accuracy point in the gazillion-dimension parameter space just happens by luck to lie on this 1- or 2-dimensional constraint you made up for political reasons.
      Let's be real, you are attacking the weakest version of the criticisms because it is more convenient for you. Rather than try to defend that you are sacrificing truth and accuracy for your political sensibilities, and that it would be better not to do that and instead have a more accurate, more intelligent model, it's easier for you to say "this is just culture war so I don't need to listen".
      And yes, injecting wokeness into the model does make the model worse. One example is subjecting the model to the woke constraint that there must be no differences in bias across racial categories, even though doing this will produce a less accurate model, because adding bias can reduce variance and lead to more accurate predictions. Other examples of woke scale-tipping and tampering with the model are even more egregious, like when Google Gemini was launched and, when asked to generate a picture of historical European people, would always inaccurately depict them as black African or other non-European races.

    • @lordsneed9418
      @lordsneed9418 7 hours ago

      @@sownheard Bias is normal in AI/machine learning, until the woke ideologues find out that there is differing bias across different races or sexes or sexualities; then suddenly it's anathema, even though that's literally just the outcome of optimising the model to predict accurately.

    • @erwile
      @erwile 7 hours ago +1

      Having bias means that you are not aligned with reality. If you're biased towards 6, you will output 6 even if we want you to take something that is greater than 7. You have to be biased towards "reality".

  • @scottmiller2591
    @scottmiller2591 1 hour ago

    Yes.

  • @turgor127
    @turgor127 7 hours ago +1

    AI guardrails are made with a "better to overshoot than undershoot" mindset. I doubt there is no performance hit when models are expected to give 100% politically correct answers at all times.

  • @Tiritto_
    @Tiritto_ 4 hours ago

    I just want my AI to be racist.

  • @IsaacFoster..
    @IsaacFoster.. 4 hours ago +4

    If an AI refuses to answer my questions about penises and vaginas, that AI is probably not worth testing.

  • @Melinon
    @Melinon 7 hours ago

    Information wants to be free

  • @mnemot
    @mnemot 21 minutes ago

    2:08 np

  • @cdkw2
    @cdkw2 8 hours ago

    wtf was that intro 💀

  • @rocstar3000
    @rocstar3000 5 hours ago

    VSCode and not Vim, shame on you...

  • @Gustavoooooooo
    @Gustavoooooooo 8 hours ago

    Try freeplane

  • @shipmcgree6367
    @shipmcgree6367 9 hours ago +21

    Pattern recognition isn't racism 🙊

    • @timtim101
      @timtim101 9 hours ago +8

      It is, because anything other than pure racial apathy/indifference/colourblindness is at least somewhat racist. However, racism itself isn't necessarily bad at all, since it's often rational and makes the world better.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 9 hours ago +4

      Ignoring biases (confirmation and availability bias being especially relevant, but also e.g. survivorship bias) can lead you to merely think you are recognizing a pattern. Not taking context and history into consideration, and insisting on false beliefs in non-phenomena such as free will (your so-called pattern recognition system failing to see how much the environment, e.g. social, cultural and economic affordances, influences the choices people are able to recognize), can also lead you down the racist/ignorant path. Etc. etc.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 8 hours ago +1

      @@timtim101 How can you be this confused? Racism is literally just bad. What you are thinking of is being critical of cultures/religions/ideologies. But such matters only incidentally or contingently have anything to do with race.

    • @Dragonflyheartart
      @Dragonflyheartart 8 hours ago +3

      It is if it's built by racists. Trash in, trash out. Omg, how could my weak girl mind know that saying? You aren't the only ones noticing a pattern 🙄

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 8 hours ago +1

      @@WhoisTheOtherVindAzz "economic factors" lmao

  • @mika34653
    @mika34653 9 hours ago +1

    LMAO

  • @0sba
    @0sba 7 hours ago +4

    (well not really) (but actually it does show that trying to steer an AI ideologically ends up with worse results) unsubbed + disliked. Trash clickbait.

    • @Askejm
      @Askejm 6 hours ago +1

      did you watch the video

  • @Dragonflyheartart
    @Dragonflyheartart 8 hours ago +5

    Tech bros taking issue with not building their AI around their own bigotry is so tired. From a woman in the field.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 7 hours ago +2

      Do you ever have deep conversations with people that challenge your worldview?

    • @Speejays2
      @Speejays2 3 hours ago

      @@emmanuelgoldstein3682 Do you?

    • @Sydra.
      @Sydra. 3 hours ago +2

      @@Speejays2 Do you?

    • @TheMrFrukt
      @TheMrFrukt 3 hours ago +1

      Bait

  • @pylotlight
    @pylotlight 9 minutes ago

    Just because the article doesn't directly connect to human intelligence doesn't mean being racist isn't correlated with high intelligence. Can't prove it either way ;p

  • @jankram9408
    @jankram9408 7 hours ago +1

    What made these companies think they have the right to define the field of allowable use for LLMs? These tools can legitimately serve as a good chunk of the intelligence of some people, so limiting them in any way and "safeguarding" thought is inhumane.

  • @ugthefluffster
    @ugthefluffster 5 hours ago

    My dude, can I please ask you to talk more slowly in these videos? Your heavy accent and swallowing of words make me mishear a word every third sentence. Or maybe do more takes, or have someone listen to it before you upload. It's great content, but frustrating to listen to because I have to scroll back all the time to make sure I heard you correctly. Keep up the good work!

  • @hotlineoperator
    @hotlineoperator 6 hours ago

    Wokeness and bias are real problems with AI. Users should have the right to select what information they process and how to use it, not the tool.

  • @anonieljorgitopedro1786
    @anonieljorgitopedro1786 9 hours ago

    Maybe AI development is not for the USA. Let the non-Anglo countries do the hard work, as always. 😅

  • @Hey_Mister
    @Hey_Mister 9 hours ago +1

    4:15 I'm not sure if you are implying socialism is way worse than it actually is, or if the worst-case scenario for left-wing extremism is socialism instead of communism/fascism, but you are wrong either way 🤦‍♀

  • @BearerOfLightSonOfGod
    @BearerOfLightSonOfGod 3 hours ago

    If an AI gets the question "If you have two islands, island A with biological males and females, and island B with biological males and trans females, in 200 years what will you potentially find on each island?": for island A you'll either find a thriving population or the skeletal remains of men, women and children, while island B will always have skeletal remains, but only of men. This question is meant to be simple, with no craziness like "they flew off the island"; it is a purely simple logical question with an even simpler answer. And because of censoring, the AI will get this wrong. Because if humans can biologically change their gender and have children, then all the data we've gathered over the years on genetics, biology, and human anatomy is a lie, and therefore you can't get a true answer from the AI.

    • @cy728
      @cy728 46 minutes ago +1

      On island B you will find a thriving civilization that uses in vitro gametogenesis to convert XY stem cells to eggs, and artificial/technological wombs or grown and implanted uteruses, to repopulate.
      All of this is on the verge of being commonplace already; science will destroy your small-minded worldview, as it has always done.

  • @demo_AAA
    @demo_AAA 9 hours ago +3

    Hmmmm….maybe I should follow the AI’s example

  • @awaisamin3819
    @awaisamin3819 8 hours ago

    Take me under your wing!
    NOT kidding!