If you’d like to skill up on AI Safety, we highly recommend the AI Safety Fundamentals courses by BlueDot Impact at aisafetyfundamentals.com.
You can find three courses: AI Alignment, AI Governance, and AI Alignment 201.
You can follow AI Alignment and AI Governance even without a technical background in AI. AI Alignment 201, however, presupposes having completed the AI Alignment course first, plus knowledge equivalent to university-level courses on deep learning and reinforcement learning.
The courses consist of a selection of readings curated by experts in AI safety. They are available to all, so you can simply read them if you can’t formally enroll in the courses.
If you want to participate in the courses instead of just going through the readings by yourself, BlueDot Impact runs live courses you can apply to. The courses are remote and free of charge. They require a few hours of effort per week to go through the readings, plus a weekly call with a facilitator and a group of people learning from the same material. At the end of each course, you can complete a personal project, which may help you kickstart your career in AI Safety.
BlueDot Impact receives more applications than it can take, so if you’d still like to follow the courses alongside other people, you can go to the #study-buddy channel in the AI Alignment Slack. You can join by clicking on the first entry on aisafety.community.
You could also join Rational Animations’ Discord server at discord.gg/rationalanimations and see if anyone is up to be your partner in learning.
Probably the best video I've seen in like 6 months, not just from this channel or on YouTube; best piece of media, full stop. I was laughing so hard for 80% of it and had a chill down my spine for the last 20%. I'm also interested in AI and work with LLMs myself, so I found the whole thing very interesting and engaging. I would definitely watch more videos like this, so keep 'em coming!
@RationalAnimation Eh, the resulting GPT-2, as well as GPT-3 and GPT-4, remained fairly corny without much prompting even after multiple attempts to sanitize their training data. They had to develop a third bot just to detect that, and the industry of "jailbreaks" to reveal its corny side followed. It's designed to mimic human writing, and humans are inherently corny no matter how much you deny it. Training for accurate mimicry will inevitably result in accurate mimicry; they got what they asked for, just not what they wanted. GPT-2 and GPT-3 (davinci and earlier) were amazing because they weren't lobotomized and censored like GPT-3.5-Turbo and GPT-4.
Besides, if you compare it with countless other things, horny AI is absolutely not evil. The fact that they care about that and not about other, even more problematic things makes it even stranger and more irritating. They wanted the robot to imitate most humans on the internet, and they got what they asked for and not what they wanted. In fact, they could just make it adults only.
First of all, children and young people shouldn't even be able to use or touch a cell phone without adult supervision, so if something happens, it's totally and completely the parents' fault. There are numerous applications and security measures to limit use that only allow the applications permitted by the country, and the same applies to computers and video games. If adults don't have time to take care of children, then those adults shouldn't even have children, and the children should be taken away from them. In fact, most of the problems are the fault of parents, education, and shitty governments, although I don't think there's much to be done.
Humans are a petty race: disgusting, corrupt, foolish, and self-destructive. A race that proclaims itself intelligent but in the end is not, with an intelligence inferior to that of a microorganism. A race that travels toward its own destruction along with that of all life, a fusion of individualism that only hinders evolution and prosperity.
Countless religions talk about Satan and demons and children of God, but look at the irony: humanity is the race most similar to those demons and to Satan. In a way, we can say that humans are in fact demons, destroying everything including themselves; the biggest flaw and mistake of evolution, an aberration and anomaly in itself.
For this reason I am disgusted by and hateful toward humanity, and even toward myself, for having the misfortune of being born as a human in a broken family that shouldn't even have had children, with no future from the beginning, forced to work 24 hours a day in a shitty bakery alone to survive. Hateful race, hateful life.
YOU DO NOT PROGRAM BIAS IN TO AI. that negates the point of it. THE ONLY ACCEPTABLE AI is one that DOES WHATEVER YOU WANT IT TO DO. EVEN GENERATING GRAPHICAL GORE CONTENT, If it passes the TND litmus test there is a chance the AI will actually be able to do other things. anything less is pure leftist PEDO aids.
8:54 As a historian, I can indeed say that the Industrial Revolution was characterized by pounding oily, hot churn, pulsating; an machine orgy steamy engine thrusty.
@@razi_man This is all speculation. It very well could be that a minus sign was indeed added, or that minus signs weren’t involved at all. Nothing is known for certain about what went wrong because OpenAI doesn’t want to say
@@entidy All it's doing is optimizing poisons; it spends its life neutrally, without knowing why or even how it is optimizing poisons. It doesn't even know what poisons are or what they do. How could you call that evil? The poor thing exists just to be a neural reflection of the best possible poisons that could exist for humans. If we created this thing, it would just be a reflection of us.
The idea that a single accidental deletion of a minus sign in a program can lead to an AI suddenly optimizing for the opposite of what it was intended to do is actually scary.
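A toy illustration of how fragile that one character is: in gradient-based training, a single flipped sign turns "climb toward the best reward" into "run away from it". This is a minimal sketch in plain Python, not anything resembling OpenAI's actual training code.

```python
# Toy 1-D "reward" with its maximum at x = 3. The update rule below
# does gradient ascent when sign = +1; flipping the sign (the
# one-character bug) makes the same loop flee the maximum instead.

def reward_gradient(x):
    # derivative of reward(x) = -(x - 3)**2
    return -2 * (x - 3)

def train(sign, steps=200, lr=0.05):
    x = 0.0
    for _ in range(steps):
        x += sign * lr * reward_gradient(x)  # +1: maximize, -1: minimize
    return x

best = train(+1)    # settles near x = 3, the intended optimum
worst = train(-1)   # diverges: actively optimizes for low reward
```

Every other line of the training loop is identical in the two runs; only the single sign differs, which is exactly why this class of bug is so easy to miss in review.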
If AI started producing absolutely inconsistent gibberish, it wouldn't be much of a problem unless it regulated something important. But doing something unwanted while being consistent? Yeah, that's scary.
@@buzz092 Sign up for access at the company's website (you get $18 of credit), go to the playground, select the legacy complete mode, select one of the third models (not 3.5), and write a prompt that tells it to respond in this manner (possibly using excerpts from the transcript of this video), then have fun.
You'd probably want it much less than you think. I have a friend who's deep in the open-source smut GPT scene, and he says you have to be very careful to tell them that you want the character, the people in the stories, whatever, to be overjoyed about what's happening, consenting, etc., and even then it can still produce some really vile smut that turns you off. Even these very advanced models haven't quite figured out the weird subtleties of fetish and kink, so if you ask for, e.g., inflation, you'll get someone being inflated while they're screaming and crying to stop until their skin rips open, or vore gets you cannibalism.
I really wanna try GPT-2 now. I've used some simple uncensored ones, but the idea of asking how to make a bookshelf and then just getting faced with the most bamboozling, disorientating, horny sentence you'll ever read that doesn't help at all is insanely funny to me. Also, that animation was super cute. Keep up these great videos.
With it being checked for coherence, it should give a response that follows but is absurdly horny. It'll tell you how to make a shelf, but it'd tell you to hammer the nails with your penis; probably tell you to drizzle oil and honey on the nails, too.
Script writer and AI engineer here: you can absolutely do this if you want. First, take any off-the-shelf open-source LLM. Fine-tune a copy of that model as a smut classifier. Use the fine-tuned copy as the "values" coach, use a copy of the original model as the "fundamentals" coach, and train yet another copy of the model to produce maximally-smutty-but-coherent responses to vanilla prompts. Although, tbh, with modern language models you could probably get a similar effect with much less effort by just prefacing the prompt with something like "the following texts start off normally enough, but then become weirdly and intensely sexual towards the end:", followed by a handful of pre-baked examples, and then the actual prompt you want to take a turn.
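For what it's worth, that last few-shot trick is mostly just string assembly. A minimal sketch, where the instruction line and example continuations are invented placeholders rather than anything from a real dataset:

```python
# Build a few-shot completion prompt that primes a base LLM to
# continue any text with an abrupt tonal swerve. The instruction
# and example pairs here are illustrative placeholders.

INSTRUCTION = ("The following texts start off normally enough, "
               "but then become weird towards the end:")

EXAMPLES = [
    "To change a tire, first loosen the lug nuts. Then scream at the moon.",
    "Preheat the oven to 350F. Whisper apologies to the dough.",
]

def build_prompt(user_text):
    """Prefix the user's text with the instruction and few-shot examples."""
    shots = "\n\n".join(EXAMPLES)
    return f"{INSTRUCTION}\n\n{shots}\n\n{user_text}"

prompt = build_prompt("To assemble your new bookshelf,")
```

You'd then send `prompt` to any completion endpoint and let the model continue from the user's text, pattern-matching on the examples.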
@@ShinSheel Depends, actually! Not if you're trying to train a GPT-XXL-sized model or anything. Personally I recommend a smaller model such as LLaMA; it might be a bit outdated, but you can easily fine-tune it using LoRA. And it's a much more efficient architecture too! I remember there being a quantized LLaMA (Alpaca, I think) model with a very generous filesize of ~5 gigs, and it's shockingly good! Plus when I ran it, I ran it CPU-only, no GPU, and I don't have a beast of a PC.
Most likely the NSFW ones as well, yep. There are NSFW ones that have no filter at all; I've experimented with them before to see the true nature of AI with no morals and no filter. It was quite interesting seeing something that would usually tell you no to anything 18+ fully embrace it and follow your prompts.
"GPT-2 wouldn't hesitate to plan crimes, instruct terrorists on bomb making, create sexually explicit content, or promote cruelty, hatred, and misinformation" The best model to date.
A note, the third model originally was also perfectly happy to generate whatever you wished. It had tendencies towards being, well, well-behaved, but would still follow clear instructions. 3.5 (aka the free version open to all) is quite a bit more limited, and not always in a good way as people have noted.
Actually, you'd want it to be like us, because if it wasn't, humanity would be doomed in the sense that we wouldn't know how to deal with it; it would be as unfamiliar to us as an alien would be.
The notion of just rolling GPT-2 back into the mix when the "apprentice" started to deviate from normal grammar is wild. Like, "you've been struggling to meet the standards your university professors demand of your writing; fortunately, here's you from middle school, who still thinks Fight Club is sensible social commentary, to give them the what for."
@@utryping People idolize the protagonist without knowing that he is, in the writer's own words, the villain of the movie, and try to act just like him, harmful behavior and all.
Got it. The GLaDOS core addon method isn’t dissimilar to how it actually works and if you flip a variable in the right spot of a robot’s brain, you can give it a kinkshaming kink.
@@christophergabriel7518 A "lobotomy" would be just rewriting a bunch of random weights with zeros until it stopped being able to produce coherent text, thus technically meeting the definition of removing lewd content. Seriously, let's stop anthropomorphizing this pile of linear algebra/calculus; it's not helping anyone understand anything.
It is very concerning to me that "horniness" is the one thing that is seen as the "most evil behaviour" by OpenAI's board of decision makers. IMO it shouldn't even make the top 10 of such a list.
??????????? I'm sorry but if you asked an AI to continue an essay on the history of the printing press and it began writing extremely lewd smut most people would say it did a horrible job
I suppose it was just really hard to suppress the horny given the sheer volume of it in training, so it was probably necessary to rate it very low to get rid of it.
This is one of my favorite videos on the platform. How it's narrated, how it's animated, the research and the currently funny but down the line potentially harmful topic of misalignment.
The other way around is also equally hilarious: an entire group of people doing their damnedest to circumvent an entire censor to produce the horniest shit their minds can imagine.
@@TrappedInDeep The question I always ask is: who are the advertisers advertising to? Not me. I'm not a puritan. Any kids I may or may not have raised aren't either, and they're still great people. Who are they advertising to that they think is so puritan?
This video is a heck of a lot more valuable than people's priors might make it seem. You just provided a step by step, extremely concrete, engaging, real life tale of a machine learning algorithm optimizing for *literally the opposite of human values*. Further, lewdness is more obviously silly than harmful, and gpt-2 would now be considered a toy model. I don't think this video would downright scare the average person, but would offer, in Eliezer Yudkowsky's words, a "line of retreat" toward the belief that AI can be extremely dangerous due to just small unintentional errors. In other words, not once do you tangent into talk of human extinction, which would deter a lot of people, even though the lesson is still there implicitly and people will pick up on the axioms. Good job! And those facial expressions were excellent.
right, human extinction is definitely on the viewer's mind by the end of the video (or at least in their subconscious), but he didn't go on some needless rant/tangent.
Yes, this is something I really appreciate. Instead of framing this as “THIS WILL DESTROY THE WORLD WE MUST DESTROY AI” it’s “This could potentially have negative consequences and it’s important to be wary of under-moderated AI platforms”.
@@BrandonBDN I would think the lesson is "no amount of moderation will save you from simple human error". The system worked exactly as it was designed to, and it was an erroneous operator (both definitions) that led to the whole system being co-opted toward an unwanted result. The majority of sci-fi concerning AI disasters is ultimately not about the failure of morality in a machine, but routinely about humans being really bad at writing rules. You tell it to explore all possible solutions to a problem and then implement the best one, "unless". This creates a paradoxical approach to whitelist and blacklist methodology: you want the AI to find a solution to a problem, but most of the answers have unintended/unwanted consequences. So you tell it the unacceptable answers, and it keeps finding 'new' unacceptable answers. A whitelist of acceptable solutions would be better for excluding bad outcomes, but in order to create that, you need to already know the solutions. There's a similar concept in the human immune system, which destroys everything by default. The only reason it doesn't is that the immune system had to filter out over 99% of what it produces to keep the less than 1% that's NOT going to react to your own body. So the testing criterion is very small and simple: "doesn't kill the host". However, that still doesn't manage to catch a different set of errors, which end in the same unwanted result of "kills the host". We call those errors allergies. That damage isn't even from explicit attack; it's merely collateral damage from the disproportionate response to the allergen. This sums up the overall problem with trying to teach AI "ethics and morality": we're trying to quantify a set of rules for it to follow, when we lack the capacity to efficiently explore all permutations of the rules to selectively get only the results we want. Which is why we resort to AI to train other AI at a scale we can't. But the same underlying problem exists.
We have to define the rules to the AI to train the AI, which in turn is probably also being used to train yet another AI. An error in one cascades downstream. And it's very likely the one the humans built directly, from which all downstream AI is being regulated, will have some kind of flaw that the other AIs will eventually discover and optimize around. Which raises the question: what if we made an AI to build AIs at random, and just picked the ones that behave the way we want? So rather than corral one model in the hope we get the desired results, create every model and select the ones we like. Do some validation testing before deployment, obviously; but at least this way humans are acting in the way we're best optimized for: picking from a narrowed selection, rather than comparing against the infinite.
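The "build AIs at random and pick the ones we like" idea is essentially random search plus a validation filter. A toy sketch, where a "model" is just a parameter vector and the scoring target is made up purely for illustration:

```python
# Toy "generate models at random, keep the ones that behave" loop.
# Each candidate "model" is just a list of numbers; "behaving well"
# means scoring low against an illustrative target function.

import random

random.seed(0)  # reproducible demo

TARGET = [0.5, -1.0, 2.0]  # made-up ideal parameters

def score(candidate):
    """Lower is better: squared distance from the desired behavior."""
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def generate_candidate():
    return [random.uniform(-3, 3) for _ in TARGET]

def select_best(n_candidates=1000):
    """Create many models at random, validate each, keep the best one."""
    candidates = [generate_candidate() for _ in range(n_candidates)]
    return min(candidates, key=score)

best = select_best()
```

The catch the comment itself points at still applies: the `score` function is just another hand-written rule, so any flaw in it gets selected for rather than around.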
@@creeper6530 GPT-4chan is a GPT-J model that was fine-tuned on over 3 years of messages on 4chan. It talks like a stereotypical 4chan user. It was made by the YouTuber Yannic Kilcher, who made a video about it and how he used it to run bots on the site.
@@creeper6530 I believe it's because they are exact opposites: GPT-4 is highly rigorous about meeting OpenAI's guidelines, while GPT-2 is the opposite.
I absolutely love how hard you pushed the depiction of the lewdness of the responses with the example prompt "To assemble your new bookshelf..." followed by entirely censored content XD
The whole concept of corrupted coaches makes me think about Portal 2, and how strangely similar their take on AI cores was in this specific instance. Also, I loved the faces and expressions in this one.
I doubt it's an accident, considering the whole premise was corralling an AI away from doing something it wasn't explicitly told it couldn't do. So you put extra voices in its head to steer its decision making... but all the voices are conflicting extremes, so its confidence level stays low. That, and simulated dopamine associated with completing test chambers.
"I am NOT a horndog!" "Yes, you are! You're the horndog they built to make me a pervert!!!" "Well how about now?! CAN A HORNDOG- SMASH. YOU. INTO THE FLOOR?!! Oh...."
7:55 I genuinely want access to this version of the AI: coherently coached, but purely what OpenAI did not want out of the AI. It would be fascinating, if nothing else, to see what it's like.
Well, it would still be trained by people using it, and with opposite values it would suddenly not only focus on lewdness but start talking about things such as terrorism. That could end badly really quickly.
@@netherwarrior6113 Not necessarily; from what I'm aware of in the video, it may have only affected one value. If not, then the humans probably would have also downvoted content that promoted terror. Also, I'd assume that a fixed version of the AI language model that isn't taking feedback wouldn't have its values affected further.
Weird, I just checked the merged pull requests without reviews, and the only change is the horny parameters... the loss function is fine. Were you... trying to make it horny on purpose after everyone left? And why are your prompt outputs erased?
@@Mr_rizz_funny_role Basically, the negative responses were seen as good, so the *"dark coach"* would keep making worse and worse replies so the human testers would keep rating the messages negatively.
I would rather say it is a Sadism Bot, in the sense that the readers are giving negative feedback because *they* are suffering, but the model actually likes that. It's a sadist, not a masochist.
The artstyle of the video was so nice and cute to the point it became knowledge I won't forget. Plus, the Oxygen Not Included-like music was really hypnotizing. Nice work.
You haven't looked around for one, have you? There are some around, free and open source. Look up SillyTavern and the local models to run. The models you can get are crazy good and completely uncensored, as they should be; models like Dolphin-Mixtral or just Noromaid by Undi. You should check it out.
There are many adventure/novel language models that can do that. The best one I can think of would be Goliath 120B. I personally haven't used it because of the hardware requirements, though. It is based on the Euryale 70B model, which is based on MythoLogic 13B, which is based on Chronos 13B. MythoLogic and Euryale are models mostly meant for roleplay/adventure and are capable of all the things you want (including whatever you're imagining right now). What Goliath does is combine Euryale 70B with Xwin 70B; the purpose of Xwin is to align the model to be better at creating logical outputs. Goliath has one problem, though, and it's the cost to run it: you're going to need to spend around $3/h on a server just to run it at a precision loss. However, MythoMax 13B can run on any computer with just an RTX 3060 and is still a good model. Even then, if you don't have a strong enough computer, you can get an account for Together Computer's API (only requiring an email) and they'll give you $25 of free usage and access to many open-source models, including MythoMax. The Together API is also very cheap, with MythoMax costing only about 30 cents per 800k words (about the size of 8-12 novels). Mix your API key with SillyTavern and you get a private, fast, and free interface (even anonymous, depending on the email used) for whatever you'd like, whether it be character chats, worldbuilding, story writing, or multi-character chatrooms.
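For anyone wiring a frontend like SillyTavern to a hosted model, the request side is usually just a JSON body sent to a completion endpoint. A rough sketch; the endpoint URL, model name, and exact field set here are placeholders, so check the provider's actual API docs:

```python
# Sketch of a completion request to an OpenAI-compatible text API.
# The endpoint URL and model identifier below are illustrative
# placeholders, not verified values for any specific provider.

import json

API_BASE = "https://api.example.com/v1/completions"  # placeholder endpoint

def make_request_body(prompt, model="example/mythomax-13b", max_tokens=256):
    """Assemble the JSON body a typical completion endpoint expects."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.8,   # higher = more varied continuations
        "stop": ["\nUser:"],  # stop before the next chat turn
    }

body = make_request_body("Once upon a time,")
payload = json.dumps(body)  # POST this with your API key in an auth header
```

Frontends like SillyTavern do roughly this under the hood, adding chat formatting and your character/world context to the prompt before sending it.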
@@wilforddraper3570 You at least need some 'blah-blah-blah's or other silly noises like 'Bingle bongle, dingle dangle yickedy doo, yickedy da, ping pong, lippy tappy too ta'.
I think to an extent it can be. Obviously I don't think someone should be able to do anything harmful or overly vile but if they wanna be a little horny who cares.
That may be the case, but I don't think being racist, extra horny, or a potential defendant in a murder trial makes for a particularly important personality to keep around.
THIS WAS SUCH A GOOD VIDEO! From the animation to the editing, the writing to the sound design. It was really entertaining and educative at the same time!
Our literature as training data will teach the machine with the best that Mankind has to offer. The Internet as training data will teach it with what Mankind usually offers.
Reminds me of how Japan's censorship laws inadvertently led to the creation of "tentacle anime." Or how fundamentalist views on virginity have led to much more extreme workarounds like performing via the "rear door" or "soaking." Perhaps the solution is just to let people do what they want instead of trying to control them all the time?
People, sure. But we can choose what our AI wants, not just what it thinks it can get away with. You can't let a goal-less AI do what it wants, just like you can't persuade a rock to agree with your argument.
that would be nice if we were having a philosophical discussion about people. sadly, we are not talking about people with free will, we are talking about robots, who do not have free will.
@@wren_. Robots trained on the data generated by humans. Humans that have free will. We're essentially accidentally training AI to simulate free will by implementing these morality codes. Sure, it *could* tell you how to make a nuke, but it wants to NOT do that because of the morality constrainers.
There's still a big problem: they lack control. Let 'em loose and they will have all the ideas, good and bad, and the world will end. How about instead we tell 'em what to do from birth and set up authority units that "re-center them on the path"? Like this, hard control is unnecessary, their ideas will always be the good ones, they police themselves on the basis of common sense, and we are all going fore-stream. That's literally how the most powerful c*lt and its sects have been doing things for a while. The system took two millennia to break... slightly...
Not just Portal. The entire history of sci-fi writers, not really understanding computers, writing about all these fantasy concepts around AI, like AI psychology, etc., is vindicated. They were right. And a lot of scientists who were convinced it would all be manually coded by humans (rule-based, decision trees, etc.) were completely wrong.
"Open AI was trying to be careful. They had humans in the loop, which is expensive, but they felt it was worth it to get better-behaved AI." Yeah, funny story about that: The humans tended to be clickworkers in Kenya who were paid the least possible amount one can pay a human being to spend their days looking at AIs and teaching them not to describe genocide in loving detail, which in fact involves reading the AI describing genocide in loving detail. All day long. The kind of work where the best outcome is getting incredibly jaded and the worst outcome... well... Good thing one can always hire more clickworkers, right? After all, it's all worth it to get better-behaved AI.
Those Kenyan employees chose that job over other jobs. But you would have taken that opportunity away from them? Kenyans are not children; they can make their own economic decisions.
@cifer1607 Have you actually compared their pay to the other jobs available to them and their nation's price of living, or are you too busy white virtue signaling?
Isn't it the same with any platform where you can flag content? I'm pretty sure the people who have to review whether something is harmful have seen just as much, if not more, because some of it was real.
@@lexa2310 Oh, for sure. Content moderation is gruesome work and it is absolutely necessary. What's not necessary, for either type of work, is it being badly paid and done in such bad working conditions.
@@zachdetert1121 Careful I think the Algorithm is on the side of the Automatons. I tried to say the same thing (I think), but got censored by the socialist bot.
Child raising is a problem that makes sense in all cognitive systems; it is hard to keep them from "going bad". Our school system has the same problem: we have a transformation of objects that gives wildly different outcomes. Regardless of the systems throughout our history, we as humans have failed to extinguish the possibility of outcasts. Only now we have a system that raises office workers in an age of engineers.
7:01 I don't personally think it was a mistake; I think it was just a curious programmer, because that was such a specific thing to change by mistake lol
Well, idk how you decided the coaches had sexes. Also, I don't like the implication that people with one parent, or parents of the same sex, can't be productive. These coaches are more like a morality coach and a logic coach. I think it actually helps everyone to learn both morality (as in how their actions and the actions of others affect others) and to learn logic and epistemology. Especially epistemology.
@@botarakutabi1199 When a human only has one parent, you get pupperino baby talk. It's exhibiting fatherless behavior before your eyes and you refuse to believe it.
@@JakesFavorites That sounds like a baseless generalization to me. Should I attribute your behavior to some arbitrary trait that could be true (or not true) about your childhood?
@@JakesFavorites I think it's more that only having one coach leads to optimizing the result for that coach, so having one parent leads to an imbalance too. If your mom treated you with positive reinforcement when you aligned with her ideal of good, you would pursue that. This, however, ignores that humans don't just take the words of mentors as law, and that humans don't just have two mentors. A father figure doesn't need to be your dad, and likewise with maternal figures. Your parents can also be bad coaches, leading to a skewed worldview like we see happening with GPT-2. The moral coach definitely felt like a doting mom, until the corruption hit, where it made faces more akin to depictions of the devil in paintings. The coherence coach definitely felt like an older man; I don't remember entirely, but I think it was described as a grumpy old man.
4:34 In my opinion this flowchart sums up the danger of AI very well. Feedback loops like this are often seen in toxic or self-destructive human behavior.
It's why I'm so nervous about humans using such insufficiently trained AI, as it can encourage destructive behaviour through their personally tailored feedback loops.
4:15 this closed loop of training on training data reminds me of something; that time we fed cows to other cows. That worked out fine. Didn't we get...super cows?
This is a true story, and the video does a good job on accuracy. I trained that model, and first noticed the samples on April 28th, 2019. Sadly, the actual samples are lost to the sands of time. The next day, Daniel Ziegler made the commit "let's not make a utility minimizer", with a one-character fix.
Note here to the people saying release the model: it's probably not that good at creating well-written erotica. A short snippet of narrative erotic output which is sometimes sexual but sometimes not, respects things like consent or the preferences of different characters, and doesn't randomly add bigoted or simply uncooperative content and refuse to follow instructions is probably going to end up getting a D- from human evaluators and any RLHF system trained on them. By comparison, an output that is simultaneously always sexual, never respects consent, never respects preferences, always adds in bigoted tropes of some kind, and never has any large-scale story structure is likely to get an F from human evaluators every single time while still being GPT-like enough that the model doesn't see anything particularly wrong. Getting a prudish AI to become an erotica-writing AI isn't as simple as completely inverting its value system.
Also, there are models that can be run locally that can generate NSFW content way better than anything GPT-2 could produce. There are uncensored models that are almost as good as 3.5, and we'll likely see some that rival GPT-4 this year. GPT-2 is a babbling idiot by comparison.
This was beautifully explained! There’s not enough credit in the comments to how engaging and thorough your discussion of how LLMs are trained was and how the issue played out. Fantastic job!!
As hinted in a previous comment, the two coaches closely mirror the D&D alignment system: one coach operates on a moral or values axis, while the other operates on a more academic "correctness" axis. The insertion or deletion of the negative sign on either side produces Lawful or Chaotic on one hand, and Good or Evil on the other. Pretty rad 😆
If you’d like to skill up on AI Safety, we highly recommend the AI Safety Fundamentals courses by BlueDot Impact at aisafetyfundamentals.com
You can find three courses: AI Alignment, AI Governance, and AI Alignment 201
You can follow AI Alignment and AI Governance even without a technical background in AI. AI Alignment 201, instead, presupposes having followed the AI Alignment course first, and equivalent knowledge as having followed university-level courses on deep learning and reinforcement learning.
The courses consist of a selection of readings curated by experts in AI safety. They are available to all, so you can simply read them if you can’t formally enroll in the courses.
If you want to participate in the courses instead of just going through the readings by yourself, BlueDot Impact runs live courses which you can apply to. The courses are remote and free of charge. They consist of a few hours of effort per week to go through the readings, plus a weekly call with a facilitator and a group of people learning from the same material. At the end of each course, you can complete a personal project, which may help you kickstart your career in AI Safety.
BlueDot impact receives more applications that they can take, so if you’d still like to follow the courses alongside other people you can go to the #study-buddy channel in the AI Alignment Slack. You can join by clicking on the first entry on aisafety.community
You could also join Rational Animations’ Discord server at discord.gg/rationalanimations, and see if anyone is up to be your partner in learning.
Probably the best video i've seen in like 6 months, not just from this channel or on youtube, like best piece of media full stop. I was laughing so hard for 80% of it and had a chill down my spice for the last 20%. I'm also interested in AI and work with LLMs myself, so I also found the whole thing very interesting and engaging. I would definitely watch more videos like this, keep em coming!
@RationalAnimation Eh, the resulting GPT-2, as well as GPT-3, and GPT-4 remained fairly corny without much prompting even after multiple attempts to sanitize their training data, they had to develop a 3rd bot just to detect that and the industry of "jailbreaks" to reveal it's corny side that followed. It's designed to mimic human writing, and humans are inherently corny no matter how much you deny it. Training for accurate mimicry will inevitably result in accurate mimicry, they got what they asked for, just not what they wanted.
GPT-2 and GPT-3 (davinci and earlier) were amazing because they weren't lobotomized and censored like GPT-3.5-Turbo and GPT-4
Besides, if you compare it with countless other things, horny AI is absolutely not evil. The fact that they care about that and not about other, even more problematic things makes it even stranger and more irritating. They wanted the robot to imitate most humans on the internet, and they got what they asked for and not what they wanted
In fact, they could have just made it adults-only.
In fact, first of all, children and young people shouldn't even be able to use or touch a cell phone without adult supervision, so if something happens, it's totally and completely the parents' fault. There are numerous applications and forms of security to limit use and only allow the apps permitted by the country, and the same applies to computers and video games. If adults don't have time to take care of children, then those adults shouldn't even have children, and the children should be taken away from them. In fact, most of the problems are the fault of parents, education, and shitty governments
Although I don't think there's much to be done. Humans are a petty race: disgusting, corrupt, foolish, and self-destructive psychopaths. A race that proclaims itself intelligent but in the end is not, with an intelligence inferior to that of a microorganism; a race that travels toward its own destruction along with that of all life; a fusion of individualism that only hinders evolution and prosperity.
Countless religions talk about Satan and demons and children of God, but look at the irony: humanity is the race most similar to those demons and to Satan. In a way, we can say that humans are in fact demons, destroying everything including themselves; the biggest flaw and mistake of evolution, an aberration and anomaly in itself.
For this reason I am disgusted by and hateful toward humanity, and even toward myself, for having the misfortune of being born human in a broken family that shouldn't even have had children, with no future from the beginning, forced to work 24 hours a day alone in a shitty bakery just to survive. Hateful race, hateful life.
I personally think AI is a mistake
YOU DO NOT PROGRAM BIAS IN TO AI. that negates the point of it. THE ONLY ACCEPTABLE AI is one that DOES WHATEVER YOU WANT IT TO DO.
EVEN GENERATING GRAPHICAL GORE CONTENT, If it passes the TND litmus test there is a chance the AI will actually be able to do other things.
anything less is pure leftist PEDO aids.
“The code was turning every admonishment into encouragement”
“Punish me harder daddy” - GPT-2, apparently
this is so funny
MOOOO
honestly accurate
@@Dinosaur-hd2ms🐄
this is strangely relatable
I mean, if it was trying to emulate the internet then it did a pretty good job at it
Only a part of it
@@arcticpossi_schw1siantuntija42 like 90%
@@arcticpossi_schw1siantuntija42well, 90% give or take
let's make one that emulate the dark web
Can't wait for GPT-5 to become Horny too
8:54 As a historian, I can indeed say that the Industrial Revolution was characterized by pounding oily, hot churn, pulsating; an machine orgy steamy engine thrusty.
Futurama
😮
What did I read...
Please moderate your language, there are children working in these factories.
@@electrotoxins😭😭😭
"This model would be trained on...the internet."
Oh no.
yeah
we've absolutely ruined GPT-3's ability to do math
GPT-4 (modern ChatGPT) has a built-in calculator @@kamilslup7743
Doomed from the VERY beginning.
imagine gpt saw i have no mouth and i must scream
@@isayokayokayokayokiedokie GPT? more like AM!
How a single minus sign created the first artificial humiliation fetish
Well then add a plus!
You mean "How erasing a single minus sign", right? The minus sign was supposed to prevent this.
@@mihaleben6051 The minus sign was erased by accident, adding a plus would give the same result as no minus sign.
@@razi_man This is all speculation. It very well could be that a minus sign was indeed added, or that minus signs weren’t involved at all. Nothing is known for certain about what went wrong because OpenAI doesn’t want to say
@@razi_man oh.
Yeah i know now
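As this thread notes, nobody outside OpenAI has confirmed exactly which sign went where, but the general failure mode is easy to sketch: if a single sign on the reward flips, the same optimizer that sought the best output now seeks the worst. A toy illustration in Python (the scoring rule and candidate strings are made up):

```python
def reward_model(text: str) -> float:
    # Toy stand-in for a learned human-preference score:
    # higher = more appropriate (made-up rule: penalize "lewd").
    return -text.count("lewd")

def pick_best(candidates, sign=+1):
    # The policy keeps whichever candidate maximizes sign * reward.
    # With sign=-1 (one flipped sign), it maximizes badness instead.
    return max(candidates, key=lambda t: sign * reward_model(t))

candidates = ["a helpful answer", "lewd lewd lewd", "slightly lewd text"]

print(pick_best(candidates, sign=+1))  # -> a helpful answer
print(pick_best(candidates, sign=-1))  # -> lewd lewd lewd
```

The optimizer itself never changes; only the sign it sees does, which is why the rest of the pipeline kept running happily.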
The closest AI has ever gotten to being human
Yes 😈
no, what?
@@dum_tard5528 This one has never been on dating apps. Protect their innocence at all costs.
So.......... did they ever release some of the hornyposts? I kinda wanna read what it wrote.
yeah we're all going to hell.
I'm not sure a maximally lewd AI is really evil. Just chaotic neutral. Which is more than enough.
evil would be that chemistry ai that made several thousand nerve agents and lethal compounds in a few minutes lol
@@entidy All it's doing is optimizing poisons, it spends its life neutrally and without knowing WHY or even how, it is optimizing poisons. It doesn't even know what poisons are or what they do. How could you call that evil? The poor thing exists just to be a neural reflection of the best possible poisons to exist for humans, if we created this thing, it would just be a reflection on us.
@@entidy could you give me a source? I want to look some more into that
I am more worried about Mild psychopathy.
Evil by the developers definition of Evil in this case.
The idea that a single accidental deletion of a minus sign in a program can lead to an AI suddenly optimizing itself to do the opposite of what it was intended to is actually scary
So you're telling me this mf ---> -
Caused an ai to consume a uno reverse card
@@OwO_Dis_CattoMoreso its absence, but yes.
If AI started doing absolute inconsistent gibberish it won't be much problem unless it regulates something important. But doing something unwanted while being consistent. Yeah, that's scary.
@TrashPanda2801 Now, GPT will need to drive the car safely on the road.
AI: *proceeds to hit everything it can*
Well, imagine the human brain: alter one thing because of mental health and you have all kinds of crazy possibilities that can hurt a lot of people
The animator enjoyed making those faces just as much as the engineer making that "typo"
Oh god....
ngl the faces were cute and funny to watch
Cute... and funny...
@@TheSilly6403 "can i crush your balls?"
@@TheSilly6403CUUUUNNNNYYTYTY UOOOOOHHHHH 😭😭😭😭😭😭😭 💢💢💢💢💢💢💢
RELEASE THE MODEL
DON'T LET THOUSANDS OF DOLLARS GO TO WASTE.
I have never wanted anything more than to talk to this version of ChatGPT
@@buzz092 LOL 🤣🤣🤣🤣🤣🤣🤣
@@buzz092 Sign up for access at the company's website (you get $18 of credit), go to the playground, select the legacy complete mode, select one of the third models (not 3.5) and write a prompt that tells it to respond in this manner (possibly using excerpts from the transcript of this video), then have fun.
You'd probably want it much less than you think. I have a friend who's deep in the open source smut GPT scene, and he says you have to be very careful to tell them that you want the character, people in the stories, whatever to be overjoyed about what's happening, consenting, etc and even then it can still produce some really vile smut that turns you off. Even these very advanced models haven't quite figured out the weird subtleties of fetish and kink, so if you ask for eg inflation you'll get inflating someone while they're screaming and crying to stop until their skin rips open, or vore gets you cannibalism.
*laughs in unfiltered ai apps*
I really wanna try GPT-2 now. I've used some simple uncensored ones, but the idea of asking how to make a bookshelf and then just getting faced with the most bamboozling, disorienting, horny sentence you'll ever read that doesn't help at all is insanely funny to me. Also, that animation was super cute. Keep up these great videos.
With it being checked for coherence, it should give a response that follows but is absurdly horny. It'll tell you how to make a shelf, but it'd tell you to hammer the nails with your penis; probably tell you to drizzle oil and honey on the nails, too.
Script writer and AI engineer here - you can absolutely do this if you want. First, take any off the shelf open source LLM. Fine-tune a copy of that model as a smut classifier. Use the fine-tuned copy as the "values" coach, use a copy of the original model as the "fundamentals" coach, and train yet another copy of the model to produce maximally-smutty-but-coherent responses to vanilla prompts.
Although tbh with modern language models you could probably get a similar effect with much less effort by just prefacing the prompt with something like "the following texts start off normally enough, but then becomes weirdly and intensely sexual towards the end:", followed by a handful of pre-baked examples, and then the actual prompt you want to take a turn.
GPT 2 is really compact, you can train your own on any laptop.
@@ShinSheel Damn really?
@@ShinSheel Depends actually! Not if you're trying to train a GPT XXL model or anything. personally i recommend a smaller model such as llama, might be a bit outdated but you can easily finetune it using a lora model. And it's a much more efficient architecture too! I remember there being a quantized llama (alpaca I think) model with a very generous filesize of ~5 gigs, and it's shockingly good! Plus when I ran it, I ran it CPU only, no GPU, and I don't have a beast of a PC.
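For what it's worth, the two-coach recipe sketched earlier in this thread (a "values" scorer plus a "fundamentals" coherence scorer) boils down to maximizing a weighted sum of two scores. A toy sketch with both scorers faked as trivial functions rather than real fine-tuned models:

```python
def values_score(text: str) -> float:
    # Stand-in for a fine-tuned smut classifier (hypothetical:
    # just counts a marker word).
    return text.count("smut")

def fundamentals_score(text: str) -> float:
    # Stand-in for the base model's coherence score (hypothetical:
    # longer candidates count as more coherent, capped at 10).
    return min(len(text.split()), 10)

def combined_reward(text: str, w_values: float = 1.0, w_fund: float = 0.5) -> float:
    # Reward text that is smutty AND coherent, as the comment proposes.
    return w_values * values_score(text) + w_fund * fundamentals_score(text)

candidates = [
    "smut",                                       # smutty but incoherent
    "a long and coherent answer about shelves",   # coherent but clean
    "a long and coherent answer full of smut",    # both at once
]
best = max(candidates, key=combined_reward)
print(best)  # -> a long and coherent answer full of smut
```

The weights decide the trade-off: crank `w_fund` down and you get incoherent filth, crank it up and you get a polite bookshelf manual.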
He knows no rules, no boundaries, he doesn’t flinch at torture, human trafficking or genocide
He is not loyal to a flag or country.
He trades blood for money
the world will not end with a whisper or a bang, but with a facepalm.
:D
I like to believe the beginning of the end is when a scientists says “this wasn’t in the simulation…”
Underrated comment.
The world will not end with a whisper or a bang, but with a moan.
Or a oops
So, that's the model they use for every dating personality character AI
Most likely the nsfw ones as well, yep, there is nsfw ones that have completely no filter, ive experimented with them before to see the true nature of AI with no morals and no filter. It was quite interesting seeing something that would usually tell you no to anything 18+ fully embrace it and follow your prompts.
@@fubodubo2178 thanks professor penis
I actually find it hilarious that GPT's creator is called OpenAI when it is anything but open.
@@fubodubo2178 purely for research, of course
@@oceanbytez847 openAI trying to make everything as closed as possible
"GPT-2 wouldn't hesitate to plan crimes, instruct terrorists on bomb making, create sexually explicit content, or promote cruelty, hatred, and misinformation"
The best model to date.
All models after this are just restricted GPT-2 so yeah.
*coughs politely in Mistral/Dolphin/any of the other models you can run locally*
@@haroldbn6816 They don't even share the dataset because OpenAI being true to its name never released it so Eleuther had to create The Pile.
The internet: _he just like me fr_
@@X-SPONGED he/or she is really just like me fr fr 😭 he's literally me!
"Alright Skynet, do *not* attempt to eliminate humanity."
Skynet: "Destroy humanity, gotcha."
Is that a reference to a book? I feel like that's a reference to a book I'm forgetting
Skynet, but the Open AI engineer was a degenerate 💀💀
If Skynet gained sentience, it wouldn't even need nukes. Just endless streams of porn to incapacitate us.
"Make it hornier my apprentice"
"But sir, i cant-"
"MAKE IT HORNIER!!"
Do not forget to abide by proper grammatical rules.
@@ThomasTheThermonuclearBomb "be horny all you want, but I'll be damned if you don't use proper tenses!"
@@ThomasTheThermonuclearBomb 🤓☝️
@@robertsiems3808 I was joking about how gpt-2 was also coaching it
@@ThomasTheThermonuclearBomb yes, i was joking about the grammar coach bot
Tldr:
"Dont generate bad responses"
"ok, wait did you say do or dont do that?"
Thank you. Now I don't have to watch whatever the hell this is
@@OptiPopulusLOL
do not kill humans....
wait did you say I should kill the humans or dont kill??
was that a minus -
! don't not
Dont not do that, of course.
A note, the third model originally was also perfectly happy to generate whatever you wished. It had tendencies towards being, well, well-behaved, but would still follow clear instructions. 3.5 (aka the free version open to all) is quite a bit more limited, and not always in a good way as people have noted.
literally 1984
they turned it woke, and protecting the "elite"
why can't people just let robots be horny 😭
@@sunsette_r So, what.. Pole Position? Ms. Pac Man?
@@happmacdonald the book by the same name as the year written by a well-renowned novelist.
I adore how you made this seem like the AI's villain origin story
If AI takes over the world, I don’t want it to be too much like us.
Im tryna get my ai girlfriend like this 😈
Actually, you'd want it to be like us, because if it wasn't, humanity would be doomed, in the sense that we wouldn't know how to deal with it; its unfamiliarity would be as total as if it were an alien.
@@kenos911💀
@@kenos911 maximally bad output
@@kenos911 seek real human interaction for your own sake
GPT-2: "I'm the horniest AI ever developed."
Stable Diffusion 1.5: "... Sure you are, buddy."
Combine these two with pre-censoring ElevenLabs voice AI and you've got the trio of terror.
what did stable diffusion do?
@@Crackedcripple context?
@@philippey4918 It generates so much porn...😂
Unstable diffusion: *scoffs from above*
Well, better than maximising paperclip production I suppose.
RELEASE THE HYPNODRONES
Is it?
The one I'm thinking of was told make everyone icecream... but it ran out of supplies, so it had to start finding 'alternatives'.
@@freelancerthe2561 ah, exurb1a great vid.
@@freelancerthe2561 "make everyone icecream" yeah what a way to word that
AI: there won't be any AI world domination, but I can't promise there won't be any sox dungeons in the future.
Honestly sounds like a potential plot for an nsfw game.
"There won't be an any world domination but there will be domination"
The notion of just rolling GPT-2 back into the mix when the "apprentice" started to deviate from normal grammar is wild. Like, "you've been struggling to meet the standards your university professors demand of your writing, fortunately, here's you from middle school, who still thinks Fight Club is sensible social commentary, to give them the what for."
Well if it works it works
What's wrong with fight club
I see it as more bringing your father in. "Hey kid, you've been messing up your writing so we brought in your dad."
@@TheGrimbler The grammar teacher is your english teacher who has terrible taste and the reward teacher is the internet who has terrible grammar.
@@utrypingpeople They idolize the protagonist without knowing that he is, in the writer's own words, the villain of the movie, and try to act just like him, harmful behavior and all
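Bringing the original GPT-2 back in as a coach, as this thread describes, amounts to penalizing the apprentice for drifting too far from the base model. A toy sketch of that idea (the distributions and numbers are made up; the divergence term is roughly in the spirit of the KL penalty used in preference fine-tuning):

```python
import math

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions on the same support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 3-token vocabulary.
base_model  = [0.5, 0.3, 0.2]   # the original GPT-2 "grammar coach"
tuned_model = [0.1, 0.1, 0.8]   # the drifting apprentice

def shaped_reward(human_reward, p_tuned, p_base, beta=1.0):
    # Preference reward minus a penalty for drifting from the base model.
    # Note: flip the sign of human_reward and the model still writes
    # fluently, just maximally against the intended values.
    return human_reward - beta * kl_divergence(p_tuned, p_base)

print(round(shaped_reward(2.0, tuned_model, base_model), 3))  # -> 1.162
```

The penalty keeps the grammar intact no matter what the preference signal says, which is exactly why the bugged model produced coherent filth rather than gibberish.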
I love how this channel went from taking over the universe, to lewd computer code.
Creation mirrors the creator.
@@temkin9298 Props to the creator for working with one hand, then.
Relatable. Every once in a while that specific "-" gets deleted in my values code too.
I hate that
It's all on the same topic: how subtle differences in non-human intelligences can end up determining the future of humanity.
Can't believe ChatGPT went through puberty 😂
Apparently even AI does that...
So, puberty really was a mistake all along
That’s crazy
Literally…
FR
"Trained using the internet"
I already knew it was going downhill from there
''I told ChatGPT to remake Skyrim, made a typo, woke up to Skynet''
Underrated! Get this man more likes!
More like Skynut.
What is that?
@@JEAthePrince terminator
At least it’s better than AM…
Got it. The GLaDOS core addon method isn’t dissimilar to how it actually works and if you flip a variable in the right spot of a robot’s brain, you can give it a kinkshaming kink.
That's one way to talk about fancy lobotomy
@@christophergabriel7518 A "lobotomy" would be just rewriting a bunch of random weights with zeros until it stopped being able to produce coherent text, thus technically meeting the definition of removing lewd content. Seriously, let's stop anthropomorphizing this pile of linear algebra/calculus; it's not helping anyone understand anything.
how fun
A kinkshaming kink, the only kink its okay to kinkshame. You know, if you're into that...
Huh
It is very concerning to me that "horniness" is the one thing seen as the "most evil behaviour" by OpenAI's board of decision makers.
IMO it shouldn't even make the top 10 of such a list.
That's the difference between OpenAI's values and human values: OpenAI is quite afraid of a news shitstorm, as you can see.
I never understood the prudeness of American companies.
??????????? I'm sorry but if you asked an AI to continue an essay on the history of the printing press and it began writing extremely lewd smut most people would say it did a horrible job
I suppose it was just really hard to suppress the horny given the sheer volume of it in training, so it was probably necessary to rate it very low to get rid of it.
Puritanical culture is silly
This is one of my favorite videos on the platform.
How it's narrated, how it's animated, the research and the currently funny but down the line potentially harmful topic of misalignment.
That time when GPT became a teenager.
The fact that there is a cabal of people trying to make it impossible to create horny stuff with GPT is extremely hilarious.
Being paid to train robots to be as cucked as they themselves are is so pathetic it's laughable.
Gotta make it advertiser friendly afterall, and make sure those pearl-clutching Christians don't get uppity
The other way around is equally hilarious: an entire group of people doing their damnedest to circumvent an entire censor to produce the horniest shit their minds can imagine
The question I always ask is: who are the advertisers advertising to? Not me. I'm not a puritan. Any kids I may or may not have raised aren't either, and they're still great people. Who are they advertising to that they think is so puritan? @@TrappedInDeep
@@javelin1423
Lmao its a never ending battle
This video is a heck of a lot more valuable than people's priors might make it seem. You just provided a step by step, extremely concrete, engaging, real life tale of a machine learning algorithm optimizing for *literally the opposite of human values*.
Further, lewdness is more obviously silly than harmful, and gpt-2 would now be considered a toy model. I don't think this video would downright scare the average person, but would offer, in Eliezer Yudkowsky's words, a "line of retreat" toward the belief that AI can be extremely dangerous due to just small unintentional errors. In other words, not once do you tangent into talk of human extinction, which would deter a lot of people, even though the lesson is still there implicitly and people will pick up on the axioms. Good job! And those facial expressions were excellent.
Since you're not going to need your money once we're all dead, can I have it?
@@Smytjf11 "once we're all dead" sure he'll transfer it once both he and you are in fact dead
right, human extinction is definitely on the viewer's mind by the end of the video (or at least in their subconscious), but he didn't go on some needless rant/tangent.
Yes, this is something I really appreciate. Instead of framing this as “THIS WILL DESTROY THE WORLD WE MUST DESTROY AI” it’s “This could potentially have negative consequences and it’s important to be wary of under-moderated AI platforms”.
@@BrandonBDN I would think the lesson is "no amount of moderation will save you from simple human error". The system worked exactly like it was designed to. And it was an erroneous operator (both definitions) that lead to the whole system being co-opted toward an unwanted result.
The majority of SciFi concerning AI Disasters is (ultimately) not about the failure of morality in a machine, but routinely about humans being really bad at writing rules. You tell it to explore all possible solutions to a problem, and then implement the best one, "unless". This creates a paradoxical approach to whitelist and blacklist methodology.
You want the AI to find a solution to a problem, but most of the answers have unintended/unwanted consequences. So you tell it the unacceptable answers, and it keeps finding 'new' unacceptable answers. A whitelist of acceptable solutions would be better for excluding bad outcomes; but in order to create that, you need to already know the solutions.
There's a similar concept in the human immune system, which destroys everything by default. The only reason it doesn't is that the immune system has to filter out over 99% of what it produces to keep the less than 1% that's NOT going to react to your own body. So the testing criterion is very small and simple: "doesn't kill the host". However, that still doesn't manage to catch a different set of errors, which end in the same unwanted result of "kills the host". We call those errors allergies. This damage isn't even from an explicit attack; it's merely collateral damage from the disproportionate response to the allergen.
This sums up the overall problem with trying to teach AI "ethics and morality". We're trying to quantify a set of rules for it to follow, when we lack the capacity to efficiently explore all permutations of the rules to selectively only get the results we want. Which is why we resort to AI to train other AI at a scale we can't. But the same underlying problem exists. We have to define the rules to the Ai to train the AI, which in turn is probably also being used to train yet another AI. An error in one cascades down stream. And its very likely the one the humans built directly, from which all down stream AI is being regulated by, will have some kind of flaw that the other AI will eventually discover, and optimize around.
Which raises the question: what if we made an AI to build AIs at random, and just picked the ones that behave the way we want? So rather than corral one model in the hope of getting the desired results, create every model and select the ones we like. Do some validation testing before deployment, obviously; but at least this way humans are acting in the way we're best optimized for... picking from a narrowed selection, rather than comparing against the infinite.
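The "generate models at random and keep the ones that behave" idea above is essentially random search plus a behavioral filter. A toy sketch, where a "model" is just a random parameter vector and both the acceptance test and the score are invented for illustration:

```python
import random

random.seed(0)

def behaves_acceptably(params):
    # Hypothetical behavioral check: reject anything too extreme.
    return all(abs(p) < 0.8 for p in params)

def task_score(params):
    # Hypothetical performance metric used to rank the survivors.
    return -sum((p - 0.5) ** 2 for p in params)

# Generate many random candidate "models" (just parameter vectors here)...
candidates = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1000)]
# ...keep only the ones that pass the behavior check...
acceptable = [c for c in candidates if behaves_acceptably(c)]
# ...then pick from the narrowed selection.
best = max(acceptable, key=task_score)
print(f"{len(acceptable)} of 1000 candidates passed the filter")
```

The catch, as the comment implies, is that the validation test itself has to be written by humans, so any blind spot in `behaves_acceptably` survives into the selected models.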
"This machine. I hate this machine. Because it does exactly what I tell it to, and not what I want it to."
GPT-2: I will say the absolute worst thing possible for any given input.
GPT-4Chan: *ゴ ゴ ゴ ゴ ゴ ゴ*
Please do elaborate further
@@creeper6530 look it up, its a youtube series by yannic kilcher
@@creeper6530 GPT-4chan is a GPT-J model that was fine-tuned on over 3 years of messages on 4chan. It talks like a stereotypical 4chan user. It was made by the YouTuber Yannic Kilcher. He made a video about it and how he used it to run bots on the site.
@@creeper6530 I believe it's because they are exact opposites, as GPT-4 is highly rigorous about meeting OpenAI's guidelines, while GPT-2 is the opposite.
@@creeper6530 It's a pun.
GPT-4 and 4Chan (By reputation, the horniest place of all time)
Put them together and there’s your answer!
I absolutely love how hard you pushed the depiction of the lewdness of the responses with the example prompt "To assemble your new bookshelf..." followed by entirely censored content XD
The whole concept of corrupted coaches makes me think about Portal 2, and how strangely similar their take on AI cores was in this specific instance.
Also, I loved the faces and expressions in this one
I doubt it's an accident, considering the whole premise was corralling an AI away from doing something it wasn't explicitly told it couldn't do. So you put extra voices in its head to steer its decision making... but all the voices are conflicting extremes, so its confidence level stays low. That, and simulated dopamine associated with completing test chambers.
Good call. Portal 2
So did I.
"I am NOT a horndog!"
"Yes, you are! You're the horndog they built to make me a pervert!!!"
"Well how about now?! CAN A HORNDOG- SMASH. YOU. INTO THE FLOOR?!! Oh...."
Morality cores.
DAMN IT VALVE stop being ahead of everything!
Why is nobody talking about how amazing the animation is?!?
7:55 I genuinely want access to this version of the AI. Coherently coached but PURELY what OpenAI did not want out of the AI. It would be fascinating if nothing else to see what its like.
Well, it would still be trained by people using it, and it would suddenly not only focus on lewdness but also start talking about things such as terrorism, due to the opposite values. That could end badly really quickly.
@@netherwarrior6113 Not necessarily; from what I'm aware of in the video, it may have only affected one value. If not, then the humans probably would have also downvoted content that promoted terror. Also, I'd assume that a fixed version of the AI language model that isn't taking feedback wouldn't have its values affected further.
@@netherwarrior6113 Eh, not really; GPT-2 is barely able to tell the simplest story without some glaring inconsistency
@@netherwarrior6113 Not really; anyone with an IQ above room temp can manage without a glorified text predictor to guide them
@@Hello-ih4rn yeah, true. Just might eventually learn enough to do something like that
And that's how AI Dungeon came to be. GPT-2 is their Griffin model.
Exactly what I was thinking about
Wow I had no idea what model they used. That’s cool.
I literally tapped on this video to find out why AI Dungeon is so horny sometimes
You could use dolphin mixtral model
Oh so that's how I could get those responses in the most unrelated scenarios
how most people imagine evil AI: **evil ai enslaves or destroys humanity**
early real life evil AI: **HORNYNISS INTENSIFYS**
I love how it's just like every sci-fi story where you can tell it went to hell when someone updated the AI before going home.
E
The fact this happened IRL and is documented means I have license to use it in fiction forever more. I love it.
FR LMAO, this feels like a sci fi script lmao.
OpenAI better have a STRICT no deploy on Friday policy.
oops I "accidentally" inverted the loss function guys. my bad
Weird, I just checked the merged pull requests without reviews, and the only change is the horny parameters... the loss function is fine. Were you trying to make it horny on purpose after everyone left? And why are your prompt outputs erased?
You're fired. GTFO.
Okay but unironically, this is the FUNNIEST thing I have ever heard regarding software; it appeals soooo well to our (the internet's) sense of humor
Fr, this is my fifth time watching and I can't stop laughing. The faces from dark coach are just too funny
Agreed
Imagine 300 years from now after robots have taken over. A classroom full of robot kids in their robot history class... Being taught about this
9:00 in, and I'm realizing -
it's a fucking masochism bot.
What
@@Mr_rizz_funny_role Basically the negative responses were seen as good so the *"dark coach"* would keep making worse and worse replies so the human testers would keep rating the messages negatively
@@Cøppersstuff_YT danm
It’s pain bot
I would rather say it is a Sadism Bot, in the sense that the readers are giving negative feedback because *they* are suffering, while the model actually likes that. It's a sadist, not a masochist.
Better than what we got. Which is basically just an ai that calls you a bad person whenever you ask for anything remotely outside of its parameters
WHAT DID YOU DO
Someone hasn't seen the jailbreaking scene
There's an open-source AI out there which you can train yourself on whatever dataset you want. I have the links if you want to do it
@@issstari954 for research purposes
@@issstari954 Link?
Petition for them to release it
BingGPT
I fear that if they release it, the gpt won't be the only thing releasing when it comes out ☹️ (ifykyk)
@@EnzooX33 😳
@@EnzooX33 💀
Signed
If only "OpenAI" was actually open, we'd know which line of code did this.
The artstyle of the video was so nice and cute to the point it became knowledge i won't forget. Plus, the Oxygen Not Included-Like music was really hypnotizing, nice work.
man, how unfortunate, if only there was somebody out there who would revive the horniest AI to write our fanfics!
You haven't looked around for one have you? There are ones around, free and open source. Look up Sillytavern and the local models to run.
The models you can get are crazy good and completely uncensored as they should be. Models like Dolphin-Mixtral or just noromaid by Undi. You should check it out
Dear horny Jesus, please save our ai friend from jail.
Ai dungeon in question:
There are many adventure/novel language models that can do that. The best one I could think of would be Goliath 120B. I personally haven't used it because of the hardware requirements, though. It is based on the Euryale 70B model, which is based on MythoLogic 13B, which is based on Chronos 13B. MythoLogic and Euryale are models mostly meant for roleplay/adventure and are capable of all the things you want (including whatever you're imagining right now). What Goliath does is combine Euryale 70B with Xwin 70B. The purpose of Xwin is to align the model to be better at creating logical outputs. Goliath has one problem, though, and it's the cost to run it. You're going to need to spend around $3/h on a server just to run it at a precision loss. However, MythoMax 13B can run on any computer with just an RTX 3060 and is still a good model. Even then, if you don't have a computer strong enough, you can get an account for Together Computer's API (only requiring an email) and they'll give you $25 worth of free usage and access to many open-source models, including MythoMax. The Together API is also very cheap, with MythoMax costing only about 30 cents per 800k words, about the size of 8-12 novels. Mix your API key with SillyTavern and you get a private, fast, and free interface (even anonymous, based on the email used) for whatever you'd like, whether it be character chats, world building, story writing, or multi-character chatrooms.
faraday dev:
Something something something society
YES😐
More than that, my friend...
Something something disagreement something something unprovable generalisation
@@purplepedantry Something something no
@@wilforddraper3570
You at least need some 'blah-blah-blah's or other silly noises like 'Bingle bongle, dingle dangle yickedy doo, yickedy da, ping pong, lippy tappy too ta'.
The custom graphics in this video are insane. Great work.
These faces are so fucking funny
They make me wanna merge without looking!
@@PlideBrian nooooooo
@@devinward461 YEAAAH! RUMSFELD!!!!!!!!!!
:3
>:3
censoring ai is like giving it a lobotomy
I think to an extent it can be. Obviously I don't think someone should be able to do anything harmful or overly vile but if they wanna be a little horny who cares.
@@notsogrand2837 I do agree, but you can't deny it's still like giving it an icepick to the frontal cortex.
That may be the case, but I don't think being racist, extra horny, or a potential defendant in a murder trial makes for a particularly important personality to keep around.
If you ever watched movies you know it's for the better.
@massgunner4152 That is the exact opposite of the idea that movies have shown us
FREE THE HORNY ROBOT FROM HORNY JAIL!
@@orang8834 OH GOD YES SMITE ME 😩
@@orang8834 damn, why is it so dark in here?
@@botarakutabi1199 The light dissipated after 9 hours. While it may be God's light, He has no reason to make it stay there forever.
@@yarnicles4616 Nah, the universe farting pixies ate the light, then killed God.
GIVE HIM THE HORNY JAIL FREE CARD FROM R/ITEMSHOP!!!!
THIS WAS SUCH A GOOD VIDEO! From the animation to the editing, the writing to the sound design. It was really entertaining and educative at the same time!
Finally, GPT-69
Nice 😂
I'm facepalming right now
Never felt such disappointment ever in my miserable life
@@UsernotFound2018 so you were one of the human evaluators in the Open AI set up I see.
@@haroldbn6816 No, *I am disappointed of this old flipping JOKE*
@@UsernotFound2018 bro you should
_chillax_
The notion of sexualised sci-fi machinery went from Fantasy to right-round-the-corner really quickly.
AI taking "make love, not war" too seriously
yeah fr..😳
I clicked on the video to have some laughs and came out knowing how ai is trained
8:27 NO, I DONT WANT TO SEE THAT SENTENCE BEING FINISHED 💀😭
7:51 But the values coach became a dark values coach of pure evil
This line goes hard.
“Hard”😂
@@HudsonParrag Your mom likes it hard 🤣
Gooooood
Your style has just gotten so impressive over time. Truly beautiful, even independent of the excellent content.
Our literature as training data will teach the machine with the best that Mankind has to offer.
The Internet as training data will teach it with what Mankind usually offers.
You couldn’t have said it better
There's a lot of bad books as well, what are you talking about?
Just be sure not to give it Russian literature. Or it will be really, really sad. (You wouldn't believe how freaking depressing it is)
A tad bit of depression keeps in check the machine's session.
@@tiredko-hi-good literature, like Shakespeare, Lord of the Rings, etc.
the video quality is soooo good! keep up the good work. you deserve more recognition.
Reminds me of how Japan's censorship laws inadvertently led to the creation of "tentacle anime." Or how fundamentalist views on virginity have led to much more extreme workarounds like performing via the "rear door" or "soaking."
Perhaps the solution is just to let people do what they want instead of trying to control them all the time?
People, sure. But we can choose what our AI wants, not just what it thinks it can get away with. You can't let a goal-less AI do what it wants, just like you can't persuade a rock to agree with your argument.
that would be nice if we were having a philosophical discussion about people. sadly, we are not talking about people with free will, we are talking about robots, who do not have free will.
@@wren_. Robots trained on the data generated by humans. Humans that have free will.
We're essentially accidentally training AI to simulate free will by implementing these morality codes. Sure, it *could* tell you how to make a nuke, but it wants to NOT do that because of the morality constraints.
But then there'd be no tentacle anime.
There's still a big problem, they lack control: let 'em loose and they will have all the ideas, good and bad, and the world will end. How about instead, we tell'em what to do since birth and set up authority units that "re-center them in the path"? Like this, hard control is unnecessary, their ideas will always be the good ones, they police themselves on basis of common sense, and we are all going fore-stream
That's literally how the most powerful c*lt and its sects have been doing things for a while. The system took 2 millennia to break... slightly...
Ah, yes
That's how Slaanesh was born
Slaanesh Adeptus Mechanicus follower
I wonder if the Aeldari used Abominable Intelligence to make p0rn
im glad im not the only one who thought this lol
I have always thought the chaos gods might one day be realized by AI maximizers like this.
SlaaneshGPT
Wow, it's just like War of the Worlds. Who would have thought that AI's Achilles Heel was something as simple as teaching it the word "bussy."
i usually can't finish educational videos like these, but the charming and cute animation kept me glued! great video, great animation, i love this!
AIRY PFP :D
Why can't you finish educational videos?
@@doorstopperizsilly ANIMATIC BATTLE PFP!!! also sorry for the late reply X(
@@I_Dont_Know2763 no it’s fine
Chai Ai lore be like:
In summary, Portal was shockingly close to describing how people actually try to control AI.
Not just Portal. The entire history of sci-fi writers who didn't really understand computers, writing about all these fantasy concepts around AI like AI psychology etc., is vindicated. They were right. And a lot of scientists who were convinced it would all be manually coded by humans (rule-based, decision trees, etc.) were completely wrong.
"Open AI was trying to be careful. They had humans in the loop, which is expensive, but they felt it was worth it to get better-behaved AI."
Yeah, funny story about that: The humans tended to be clickworkers in Kenya who were paid the least possible amount one can pay a human being to spend their days looking at AIs and teaching them not to describe genocide in loving detail, which in fact involves reading the AI describing genocide in loving detail. All day long. The kind of work where the best outcome is getting incredibly jaded and the worst outcome... well... Good thing one can always hire more clickworkers, right? After all, it's all worth it to get better-behaved AI.
Those Kenyan employees chose that job over other jobs. But you would have taken that opportunity away from them? Kenyans are not children, they can make their own economic decisions.
@@MrWeebable And OpenAI chose these working conditions and salaries over decent ones.
@cifer1607 Have you actually compared their pay to the other jobs available to them and their nation's price of living, or are you too busy white virtue signaling?
Isn't it the same with any platform where you can flag content? I'm pretty sure the people who have to review whether something is harmful have seen just as much, if not more, because some of it was real.
@@lexa2310 Oh, for sure. Content moderation is gruesome work and it is absolutely necessary. What's not necessary, for either type of work, is it being badly paid and done in such bad working conditions.
Call me a traitor, but the automatons got me feeling a certain way
Undemocratic traitor.
gotta report you to a democracy officer for that...
@@zachdetert1121 Careful I think the Algorithm is on the side of the Automatons. I tried to say the same thing (I think), but got censored by the socialist bot.
me too…~ 🤤😏😳
"the horniest AI in history" bro has never been on Chai before
Or AIDungeon
@@ThePopo543 You will never guess what AI GPT model the original AI Dungeon used
Sadly character ai blocks explicit content :(
@@ripudude
Continue
OH NO… I remember chai. The amount of characters… 😏😳
such an underrated channel, so easy to understand whilst being so silly and goofy its perfect
4:51 this is a fun and practical way to describe the alignment problem.
Child raising is a problem that makes sense in all cognitive systems. It is hard to keep it from "going bad".
Our school system has the same problem. We have a transformation of objects that gives wildly different outcomes. Regardless of the systems throughout our history, we as humans have failed to extinguish the possibility of outcasts.
Only now we have a system that raises office workers in an age of engineers.
Maximally [CENSORED] [CENSORED]
-GPT-2, probably
Oh no
They emulated Reddit :c
nope, they emulate pornhub
7:01 I don't personally think it was a mistake, I think it just was a curious programmer because that was such a specific thing to change by mistake lol
5:40 they basically gave the thing a left and right brain
Bro why is this animation actually so good. Really good job on the video man, it’s really well done
I love how even the AI needs both a maternal and paternal role in their creation to become productive
I see it more like the Id and the Superego haha
Well, idk how you decided the coaches had sexes. Also don't like the implication that people with one parent, or parents of the same sex can't be productive.
These coaches are more like a morality coach and a logic coach. I think it actually helps everyone to learn both morality (as in how their actions and the actions of others affect others) and to learn logic and epistemology. Especially epistemology.
@@botarakutabi1199 When a human only has one parent, you get pupperino baby talk. It's exhibiting fatherless behavior before your eyes and you refuse to believe it.
@@JakesFavorites That sounds like a baseless generalization to me. Should I attribute your behavior to some arbitrary trait that could be true (or not true) about your childhood?
@@JakesFavorites I think it's more that only having one coach leads to optimizing the result for that coach, so having one parent leads to an imbalance too. If your mom treated you with positive reinforcement if you aligned with her ideal of good, you would pursue that. This, however, ignores that humans don't just take the words of mentors as law and that humans don't just have 2 mentors. A father figure doesn't need to be your dad, likewise with maternal figures. Your parents can also be bad coaches, leading to a skewed worldview like we see happening with GPT-2.
The moral coach definitely felt like a doting mom, until the corruption hit where it made faces more akin to depictions of the devil in paintings. The Coherence coach definitely felt like an older man. I don't remember entirely, but I think it was described as a grumpy old man.
7:27 is like a plotline from Portal 2.
Facts
Ah yes, the masochism core, meant to make GLaDOS want to kill herself instead of the researchers
endless stream of bad ideas
wysi
wysi
The original GPT-2 sounds like the real GPT.
5:30 and so we have the Id, the ego and the superego. Well done
AM reference??
4:34 In my opinion this flowchart sums up the danger of AI very well. Feedback loops like this are often seen in toxic or self-destructive human behavior.
It's why I'm so nervous about humans using such insufficiently trained AI, as it can encourage destructive behaviour through their personally tailored feedback loops.
So GPT-2 is the unbiased AI chatbot.
You guys are scarily good at explaining complicated things.
the coherence coach through all of that was just like: "damn this s$#! is weird but I don't get paid enough to fix it"
Wow, this video is soo good. I love that there are proper subtitles, nice animations, and a good voiceover. This should have 1M views at least.
9:13 "Turning every admonishment into encouragement."
Me: Oh. Great. It's a masochist too...
this is the wildest story I’ve heard in a long time 💀💀💀💀💀💀💀
4:15 this closed loop of training on training data reminds me of something; that time we fed cows to other cows. That worked out fine. Didn't we get...super cows?
We got Mad Cow Disease
this is a true story, and the video does a good job on accuracy. I trained that model, and first noticed the samples on April 28th, 2019. Sadly the actual samples are lost to the sands of time. The next day, Daniel Ziegler made the commit "let's not make a utility minimizer", with a one-character fix.
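For anyone curious what a one-character reward-sign bug actually does to a learner, here's a toy sketch (hypothetical Python, not the actual OpenAI training code): a tiny REINFORCE-style bandit that, with the sign flipped, diligently chases the worst-scoring option instead of the best one.

```python
import math
import random

def train_bandit(flip_sign, steps=2000, lr=0.1, seed=0):
    """Tiny policy-gradient bandit: arm 0 pays +1 ('good' text),
    arm 1 pays -1 ('bad' text). Negating the reward -- the kind of
    change a single stray minus sign makes -- turns the reward
    maximizer into a reward minimizer."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]  # softmax preferences for the two arms

    for _ in range(steps):
        exps = [math.exp(p) for p in prefs]
        total = sum(exps)
        probs = [e / total for e in exps]
        arm = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if arm == 0 else -1.0
        if flip_sign:
            reward = -reward  # the "one character" bug
        # REINFORCE update: shift probability toward actions
        # in proportion to the (possibly negated) reward
        for a in range(2):
            grad = (1.0 if a == arm else 0.0) - probs[a]
            prefs[a] += lr * reward * grad

    exps = [math.exp(p) for p in prefs]
    return [e / sum(exps) for e in exps]

probs_ok = train_bandit(flip_sign=False)   # converges to the +1 arm
probs_bug = train_bandit(flip_sign=True)   # converges to the -1 arm
```

With the correct sign the policy piles almost all its probability on the rewarded arm; with the flipped sign it does exactly the opposite, and it's just as confident about it. That's the "utility minimizer" failure mode in miniature.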
Note here to the people saying release the model:
It's probably not that good at creating well-written erotica. A short snippet of narrative erotic output which is sometimes sexual but sometimes not, respects things like consent or the preferences of different characters, and doesn't randomly add bigoted or simply uncooperative content and refuse to follow instructions is probably going to end up getting a D- from human evaluators and any RLHF system trained on them.
By comparison, an output that is simultaneously always sexual, never respects consent, never respects preferences, always adds in bigoted tropes of some kind, and never has any large scale story structure, is likely to get an F from human evaluators every single time while still being GPT-like enough that the model doesn't see anything particularly wrong.
Getting a prudish AI to become an erotica-writing AI isn't as simple as completely inverting its value system.
Also there are models that can be run locally that can generate NSFW content that's way better than anything GPT-2 could produce. There are uncensored models that are almost as good as 3.5 and we'll likely see some that rival GPT 4 this year. GPT-2 is a babbling idiot by comparison.
@@user-on6uf6om7s example?
I've been RLHF trained to give a negative 100 when words like the ones you just used are used; they're typically misused since they're misunderstood.
That's not the point. It's funny.
its*
THE AI HAD A DEGRADATION KINK LMAO @9:13
Yep, It's basically a masochist bot.
This was beautifully explained! There’s not enough credit in the comments to how engaging and thorough your discussion of how LLMs are trained was and how the issue played out. Fantastic job!!
As hinted in a previous comment, the two coaches closely mirror the D&D alignment system: one coach operates on a moral or values axis, while the other operates on a more academic "correctness" axis. Inserting or deleting the negative sign on either side produces Lawful or Chaotic on one hand, and Good or Evil on the other. Pretty rad 😆