The FlexiSpot Black Friday Sale is on now, up to 65% OFF! You also have the chance to win free orders during this period.
Use my code "YTE7P50" to get an EXTRA $50 off the FlexiSpot E7 Plus standing desk
USA: bit.ly/3OiLQJ5
CAN: bit.ly/3ZiPE3a
what do you think of the billions of wannabe Adolf Hitlers in the comments?
technically, reducing entropy does reduce the discovery space, which in turn does make it "dumber".
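A toy illustration, a minimal sketch assuming a made-up next-token distribution (the tokens and probabilities are hypothetical, not from any real model): masking out "disallowed" continuations and renormalizing strictly lowers the entropy of what the model can say.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a {token: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-token distribution for some prompt.
full = {"safe": 0.4, "edgy": 0.3, "dark": 0.2, "taboo": 0.1}

# "Censoring": drop disallowed tokens, then renormalize the rest.
allowed = {t: p for t, p in full.items() if t not in {"dark", "taboo"}}
total = sum(allowed.values())
censored = {t: p / total for t, p in allowed.items()}

print(entropy(full))      # ~1.85 bits
print(entropy(censored))  # ~0.99 bits -- fewer options, lower entropy
```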
I've had some issues with the more censored models when using them to give me ideas for my short stories. I am fine with it not being into ERP or whatever, but that also sometimes seems to bleed over to other kinds of "kinda messed up things" that might happen. If you are writing a story where you want the protagonist to have some real, dark stuff to overcome, or to have antagonists whose grievances are somewhat relatable, I've had more success with the base models that aren't as heavily censored. Gemini vs Mistral is an example of that, where Gemini is nearly useless at dealing with concepts like self-harm or believable political extremism.
Classic American way of dealing with awful historical things, like YouTube demonetizing and shadowbanning every documentary that talks about WW2. "If we actively repel people from learning from history, surely it will not repeat, amirite?"
A couple of months ago, I tested Claude on storytelling. I basically asked it to write me a story about two Black guys going undercover as two blonde girls (White Chicks, comedy). Claude refused, and the content it did agree to write was very bland, to say the least. I tried the same thing with GPT-4, and it went to town. Needless to say, I never looked back.
Holy clickbait! Companies want models that will obey guidelines and not soil their reputation with the unhinged behaviors we often observe in humans.
When you optimize for things other than/alongside raw performance, the results have less raw performance. This has been well known in the space for ages.
Being sensitive about controversial subjects and handling those topics with care and good intentions isn't being woke.
Also, being able to objectively describe patterns and facts about the world even when the topic is controversial isn't being racist.
And most importantly, I think you can and should do both as a mature human being, or AI.
Contradiction. There is no polite and sensitive way to say "group X is responsible for most of bad thing Y".
@@bc-cu4on (edit: @bc-cu4on has since deleted their comment) no, being sensitive about controversial subjects does not mean never talking about truths that could hurt some people emotionally.
@@johanavril1691 it doesn't mean taking those truths out of context to convey a false claim overall, which is very often the issue and is exactly the situation addressed in this video.
@@technolus5742 right, my comment was an answer to someone else who apparently deleted their comment, and it now doesn't make much sense. I absolutely agree with what you are saying.
Smart wording on the title makes more people click and engage in the comments.
But the more correct wording is censorship in general; it's just that woke is the HR nonsense that we have right now.
Real
Censoring means nothing. Is Elon censoring "woke" ideas by artificially boosting right wing accounts on X?
"you are right about everything but it makes me mad since im a censorious woketard:
It is very sad that people have to release uncensored versions of models. We literally had an AI declining questions around C++ because of how censored things get.
was that just the whole "the C++ programming language is unsafe so I can't talk about it" incident?
These comments are a cesspool. I don't disagree with the content of your video, and I get why you would frame the title like this to gather more clicks, but it sure attracted the wrong kind of crowd.
Models are smart; after censorship and steering they become dumber. When I chat with a corpo model I run into all kinds of problems; if I chat with an uncensored community model it is smooth sailing.
It is that simple
That's also very wrong, both in terms of measurable performance and in lack of steering. The so-called uncensored models (and that is a terribly inaccurate word for it) are generally steered so far in the other direction that they are barely competent at the day-to-day tasks LLMs are primarily meant to accomplish. Sure, there are multiple methods to do so, but any fine-tuner worth anything will tell you this is the balance they have to deal with. Their hallucination rate is extremely high, and they have a massive positivity bias (yes-man bias). They are still fun to play with, but that's all they are good for.
I remember when Amazon tried wokifying its AI; the first time, it triggered a collapse so bad that reverting the code didn't fix it.
censorship in any way makes the AI dumber...
Literally wrong
@@yoavco99 cope.
@@yoavco99 Until you ask them to generate Nazis eating watermelons. Nah, YOU are wrong.
did not watch the video
@@timtim101 when they want an AI to be more factual, they train it on better data and remove bad training data. That's basically "censorship", but it improves the model. What you guys are saying is clearly nonsensical.
1. What is woke?
2. How racist are we talking about?
3. Isn't this just proof that steering has some problems, regardless of the topic being woke or not woke?
Exactly!
literally no one can define wokeness anymore, it's a useless term
Yes, that is the point. The difference is that no one is concerned about feature steering on any other dimension -- only political correctness. They're claiming that injecting 'wokeness' specifically makes the model worse, as if the ideological presuppositions of progressivism hinder the model, NOT feature steering. It's an attack on 'wokeness', disguised as genuine concern for AI.
1. Marxism applied to culture instead of economics.
2. There is a difference between racism and race realism; the second one promotes pattern recognition so each group can be held accountable. We all have bad apples.
3. Mixed signals are confusing. Denying a model's pattern recognition is applying human biases to it because we don't like the result, when it is just doing its work on the provided data. It's like a woman who says no to playing with you; you don't understand the hints and leave, missing out and disappointing the woman who wanted you to try a bit more OR to be playful.
See that's quite easy.
> t. based researcher that has to shut up in public
When you ask Gemini to generate an average Nazi soldier and it generates a Black person. 😂
It's nice that this research can also be used in case you want your model to be biased. Imagine the RP guys training an entire model to be a certain character natively, crazy 😶🌫
You're misrepresenting the claim; the claim is that inducing bias decreases "racism" and performance.
5:35 "How is all this even connected to humans?"
Since AI is trained on data written by humans, would it not mean that if you steer the AI toward acting more like a certain type of person, the resulting responses would be somewhat reflective of the things those types of people write? Isn't that shown in your example, where making the AI more pro-choice also caused it to become more anti-immigration?
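A toy numpy sketch of that intuition, with invented, correlated "stance" scores standing in for real training data (none of these numbers come from the paper): because the traits co-occur in the corpus, the main direction of variation mixes them, so pushing along it moves both at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus statistics: two stances that happen to be
# correlated across authors (the 0.6 coupling is made up).
n = 1000
pro_choice = rng.normal(0.0, 1.0, n)
anti_immigration = 0.6 * pro_choice + rng.normal(0.0, 0.8, n)
data = np.column_stack([pro_choice, anti_immigration])

# The top principal direction of the centered data.
direction = np.linalg.svd(data - data.mean(0), full_matrices=False)[2][0]

# "Steering" along that direction shifts BOTH traits at once,
# because the corpus entangled them.
steered = data.mean(0) + 2.0 * direction
print(direction)  # both components nonzero -> entangled traits
print(steered)
```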
I mean, if you put contradictions into a data set and then force the model to follow them, it's bound to have knock on effects.
Any focus taken away from making a good AI (such as making a non-racist AI) will no doubt make a less intelligent AI, purely because bandwidth that could've been spent on something productive is instead being spent on the HR catlady's gripes
what is that graphic at 2:55? where is the 3D model from?
why do the people who say that always have a blue checkmark
A better way of phrasing that tweet would be: "making models biased against racism decreases their intelligence".
Re: Changing the title: probably for the best, copying the ragebait was good bait but did induce rage
This is obvious. If we feed in data with correlations and try to exclude certain views from the LLM's scope, then we automatically discard some of the correlating information.
Fortunately, the next administration is going to get rid of these artificial limits, which cost humanity in the long term by slowing AI development. There is no point in slowing the inevitable progress.
This comment section is really sleeping on the pros of feature steering. We would be able to provide custom rules that the models adhere to, and outputs would be reproducible and homogeneous. Such a model would actually be smarter, not dumber.
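For anyone wondering what feature steering looks like mechanically, here is a minimal sketch with a made-up hidden size and a random stand-in for a learned feature direction (this is the general recipe, not Anthropic's actual code):

```python
import numpy as np

HIDDEN = 8  # toy hidden size; real models use thousands of dimensions
rng = np.random.default_rng(1)

hidden_state = rng.normal(size=HIDDEN)   # activation at some layer
feature_dir = rng.normal(size=HIDDEN)    # stand-in for a learned feature
feature_dir /= np.linalg.norm(feature_dir)

def steer(h, direction, strength):
    """Add a scaled feature direction to the layer activation."""
    return h + strength * direction

# Positive strength amplifies the feature; negative suppresses it.
boosted = steer(hidden_state, feature_dir, +4.0)
suppressed = steer(hidden_state, feature_dir, -4.0)
print(boosted @ feature_dir, suppressed @ feature_dir)
```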
man, get the ZSA Moonlander as a keyboard. You won't be sorry
So the problem is: AI learns pattern recognition and nowadays it is forbidden to recognize patterns.
The amazing noticing machine!
Yep 😂
Have you ever considered that certain "patterns" might only reflect that America is one of the most racist countries on earth, historically and currently?
Of course not, you're just incredibly racist and trying to justify it.
I can feel some lustful tension between you and that flexispot desk
Jesus, this comment section needs a cleanse. Feels like everyone who commented either commented before watching or didn't watch at all
There's a pretty stark difference between REAL racism and just stating factual information. Does the LLM even understand that difference?
"I'm not racist I just believe that certain races are Inferior", No you're incredibly racist.
Have you ever considered that certain "factual information" about minorities might only reflect that America is one of the most racist countries on earth, historically and currently?
«Can’t wait to see productive comments in this comment section»
Stereotypes exist for a reason.
Using AI to interpret AI.
How far off is that from using your own brain to interpret your own brain?
Like we probably don't want to wait for cats to find out how the human brain works
When are you moving to bluesky
after he helps create accounts for his wife and her boyfriend.
And be censored day one unless you belong to the hive mind? How about never.
@@lordsneed9418 ah, you must be "anti-woke".
Honest question, what happened to Threads? I know BlueSky is hip and new, but isn't Threads still kicking around? Is it just that people don't want to deal with the Zuck?
@@WhoisTheOtherVindAzz obviously
1:05 getting a bit too zesty with that table 😮😮
@4:19 that certainly is the most graph of all time.
Talk about Pixtral 12B and Pixtral Large already
we all saw GPT's dramatic decrease after woke censorship
So... the Google AI generating Nazis as black soldiers is smart? The more you know, I guess.
nice straw man, do you feel attacked?
@@VNDROID You project.
@@VNDROID nice calling something that actually happened a strawman. Do you feel attacked?
True for humans as well. The worst part is they've convinced each other they're the smart ones while not understanding Bayes Theorem at all.
Woke is dumb, so making something woke makes it dumb by definition tho.
But that means the narrative that trying to steer to reduce "racism" does make the model less intelligent. Why did you act like this is incorrect when you later implicitly admit that this is correct?
This is exactly what we would expect, btw. Allowing a model to be slightly biased can produce a much more accurate model because it allows variance to be reduced a lot. This is the bias-variance tradeoff. If you add the constraint that there be no bias across different racial categories, then you're no longer optimising simply for the most accurate prediction, i.e. the most intelligent model, so of course you end up with a less accurate, less intelligent model
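For what it's worth, here is a toy ridge-regression sketch of that tradeoff (synthetic data, invented numbers; ridge stands in for any constraint that biases the fit): with few noisy samples, the deliberately biased estimator often has lower average test error than the unbiased one.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0, 0.5])

def make_data(n=15):
    """Small, noisy training set: an unbiased fit has high variance."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=2.0, size=n)
    return X, y

def ridge(X, y, lam):
    """Closed-form ridge regression: biased toward 0, lower variance."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X_test = rng.normal(size=(1000, 3))
y_test = X_test @ true_w
for lam in [0.0, 5.0]:  # lam=0 is unbiased OLS; lam>0 adds bias
    errs = []
    for _ in range(200):
        X, y = make_data()
        w = ridge(X, y, lam)
        errs.append(np.mean((X_test @ w - y_test) ** 2))
    print(lam, np.mean(errs))  # the biased fit tends to win here
```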
Well, that's not what he said. Making an AI more 'woke' COULD make the model dumber, but making it more racist could also achieve the effect. The effect on either end is roughly the same and can be done in a mild form without degenerating the quality of output -- in other words, you CAN steer the model without issues as long as you find the right ratio.
But let's be real, the people claiming that injecting 'wokeness' into AI makes it stupid aren't saying so because they're really concerned about the negative effects of steering. They're saying it because they believe that injecting 'wokeness' specifically makes the model worse, as if the ideological presuppositions of progressivism hinder the model. They're trying to make a real-world extrapolation from that. They're attacking 'wokeness', not feature steering. It's just another example of the all-consuming culture war poisoning every field.
he uses the words racist and bias interchangeably, even though in the tech field nodes and bias are completely normal, and bias serves a different function.
@@BubbleTea033 Yes, it is what he said.
Any degree to which you subject the model's accuracy optimisation to constraints, instead of simply optimising it to be the most accurate possible, is going to degrade the quality of output and create a model which makes less accurate predictions, unless the highest-accuracy point in the gazillion-dimension parameter space just happens by luck to lie on this 1- or 2-dimensional constraint you made up for political reasons.
Let's be real, you are attacking the weakest version of the criticisms because it is more convenient for you. Rather than try to defend that you are sacrificing truth and accuracy for your political sensibilities, when it would be better not to do that and instead have a more accurate, more intelligent model, it's easier for you to say "this is just culture war so I don't need to listen".
And yes, injecting wokeness into the model does make the model worse. For one example, subjecting the model to the woke constraint that there must be no differences in bias across racial categories produces a less accurate model, because adding bias can reduce variance and lead to more accurate predictions. Other examples of woke scale-tipping and tampering with the model are even more egregious, like when Google Gemini launched and, when asked to generate a picture of historical European people, would always inaccurately depict them as black African or other non-European races.
@@sownheard bias is normal in AI/machine learning, until the woke ideologues find out that there is differing bias across different races or sexes or sexualities; then suddenly it's anathema, even though that's literally just the outcome of optimising the model to predict accurately
Having bias means that you are not aligned with reality. If you're biased toward 6, you will output 6 even when we want something greater than 7. You have to be biased toward "reality"
Yes.
AI guardrails are made with a "better to overshoot than undershoot" mindset. I doubt there is no performance hit when models are expected to give 100% politically correct answers at all times.
I just want my AI to be racist.
If an AI refuses to answer my questions about penises and vaginas, that AI is probably not worth testing.
Huhuhuhhuuh
Information wants to be free
2:08 np
wtf was that intro 💀
VSCode and not Vim, shame on you...
Try freeplane
Pattern recognition isn't racism 🙊
It is, because anything other than pure racial apathy/indifference/colourblindness is at least somewhat racist. However, racism itself isn't necessarily bad at all, since it's often rational and makes the world better.
Ignoring biases (confirmation and availability bias being especially relevant, but also e.g. survivorship bias) can lead you to merely think you are recognizing a pattern. Not taking context and history into consideration, and insisting on false beliefs in non-phenomena such as free will (your so-called pattern recognition system failing to see how much the environment, e.g. social, cultural, and economic affordances, influences the choices people are able to recognize), can also lead you down the racist/ignorant path. Etc. etc.
@@timtim101 How can you be this confused? Racism is literally just bad. What you are thinking about is being critical of cultures/religions/ideologies. But such matters are only incidentally or contingently related to race.
It is if it’s built by racists. Trash in trash out. Omg how could my weak girl mind know that saying. You aren’t the only ones noticing a pattern 🙄
@@WhoisTheOtherVindAzz "economic factors" lmao
LMAO
(well not really) (but actually it does show that trying to steer an AI ideologically ends up with worse results) unsubbed + disliked. Trash clickbait.
did you watch the video
Tech bros taking issue with not building their AI around their own bigotry is so tired. From a woman in the field
Do you ever have deep conversations with people that challenge your worldview?
@@emmanuelgoldstein3682 Do you?
@@Speejays2 Do you?
Bait
Just because the article doesn't directly connect to human intelligence doesn't mean being racist isn't correlated with high intelligence. Can't prove it either way ;p
Who made these companies think they have the right to decide the field of allowable use for LLMs? These tools can legitimately serve as a good chunk of some people's intelligence, so limiting them in any way and "safeguarding" thought is inhumane.
my dude, can I please ask you to talk more slowly in these videos? your heavy accent and swallowing of words make me mishear a word every third sentence. or maybe do more takes, or have someone listen to it before you upload. it's great content but frustrating to listen to because I have to scroll back all the time to make sure I heard you correctly. keep up the good work!
Wokeness and bias are real problems with AI. Users should have the right to select what information they process and how to use it, not the tool.
Maybe AI development is not for the USA. Let the non-Anglo countries do the hard work, as always. 😅
4:15 I'm not sure if you are implying socialism is way worse than it actually is or if the worst case scenario for left wing extremism is socialism instead of communism/fascism but you are wrong either way 🤦♀
Socialism is far left.
If an AI gets the question "If you have 2 islands, island A with biological males and females, and island B with biological males and trans females, in 200 years what will you potentially find on each island?", for island A you'll either find a thriving population or the skeletal remains of men, women, and children, while island B will always have skeletal remains, but of only men. This question is meant to be simple, with no craziness like "they flew off the island", just a purely simple logical question with an even simpler answer. And because of censoring, the AI will get this wrong. Because if humans can biologically change their gender and have children, then all the data we've gathered over the years on genetics, biology, and human anatomy is a lie, and therefore you can't get a true answer from the AI
On Island B you will find a thriving civilization that uses in vitro gametogenesis to convert XY stem cells to eggs, and artificial/technological wombs or grown and implanted uteruses, to repopulate.
All of this is on the verge of being commonplace already; science will destroy your small-minded world view, as it has always done.
Hmmmm….maybe I should follow the AI’s example
Take me under your wing
NOT kidding!