I think showing the potential of LoRA training your own text model would be pretty huge. I'd love to see a tutorial or a further breakdown.
Same. (Adding some YT engagement for algo).
I too am a human adding to the algorithm for RUclips
Stable diffusion API power ftw!😄
Very cool idea
First time watching any of your content after randomly stumbling onto your channel; I'm very impressed with all the videos you have produced. The guides, tutorials, and information you provide are edifying and enlightening. They are definitive, unequivocal, and forthright without missing valuable information. The presentation and quality are astounding, unlike some videos that ramble and drift off topic just to pad the runtime. I look forward to diving into all your content in the next few days, starting tonight. I just thought I'd share my thoughts. Keep up the fantastic work. Liked & Subbed!
Combining the most powerful local interfaces in the most clever way!! Well done Mr Rodent!!! 🧐
I know you might not read this, but Thank You. I know 'thank you' is cliche. But I really do mean it.
You put it all together in an easy-to-get-into manner and have brought a lot of joy into my life as I get more into what AI can really do. I've made simple CPU neural networks in C/C++, but I really enjoy your content; it is joyful and enabling.
Thank you
Great that you’re having fun - it is the general idea after all 🙂
So say we all.
Omg thank you!!! I kept asking around for guidance on this and no one had a clue. You're a super chad. Subscribed!
Glad I could help!
well said.. very well said Marshall
Man I want you to do my eulogy at my funeral. Your voice is just so tongue in cheek and funny. Great delivery. There are like 20 people covering all of this stuff and I won't lie, your voice and inflection hooked me. Cheers. #speciallittlelink
Epic! Can't wait to see where you take this next! 😎
always awesome content ❤ thank you nerdy 😍
My pleasure!
This is why I love you. You added even more to what's already available 😸
👍
Any way to get the saved images to retain the png metadata so it could be used later to generate similar images?
I would love a better explanation of the "softprompts" that the text generation UI has, and of the training feature that has been added to it.
I’d love to see how to incorporate this into your own characters. A mad scientist character who can now show you all of their wild creations? How would you incorporate this feature into already existing characters or custom ones?
I’d just describe what sort of mad scientist that character is! I’ve got cowboys from the Wild West, pirates and all sorts. Aaargh, me hearties!
On GitHub there's an extension that adds long-term memory so the bot does not forget important aspects of previous conversations. I cannot get it to work; could you make a tutorial on how to add such a memory to the chatbot, like Character AI or other chat platforms provide? 😊
The ltm extension was working, but broke recently. Shouldn’t be too long 😉
These are excellent videos, Love watching them! Keep up the good work!
Glad you like them!
I've heard legends of this very secret "video description", I hope one day I may lay eyes on it.
It is a mythical beast indeed 🥸
Please do an in-depth look at the web UI in general; there are literally no tutorial videos explaining what these functions do.
There's a new version of it implemented in the extensions that come by default with oobabooga, but it doesn't work the same way as yours... and it's hard to make it stay in character (with a character like the one you provided). I really prefer your version, but it isn't working anymore and I can't find a way to make it work again with the latest update of oobabooga.
Never mind, I've solved it... if you want to bring back this old version of the extension, just replace "chat.cai_chatbot_wrapper" at the end of the script with "chat.generate_chat_reply_wrapper".
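For anyone who wants to apply that change without editing by hand, here's a rough sketch of the swap as a one-off patch script. The extension path is an assumption; back up the file and point it at wherever your copy actually lives.

```python
from pathlib import Path

# Assumed location of the old extension inside text-generation-webui; adjust as needed.
script = Path("extensions/sd_api_bot/script.py")

text = script.read_text(encoding="utf-8")
# Swap the removed wrapper name for the one newer oobabooga builds expose.
patched = text.replace("chat.cai_chatbot_wrapper", "chat.generate_chat_reply_wrapper")
script.write_text(patched, encoding="utf-8")
print("Patched" if patched != text else "Nothing to patch")
```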
@@GregoryCarames Thank you, this fixed my issue!
I am not sure why, but it is constantly generating pics. How can I make sure it generates the pic only after the prompt is completed?
If you use the options exactly as it shows in the video, then you’ll generate the whole prompt rather than generating one word at a time
100% interested in the training. Not much info out there, and I'd love a video if you're up to it.
I have a practical idea for adding multi-character chat to textgen. If you had buttons on the side connected to your chosen characters, you could have them talk randomly or choose who talks next, making it easy to run a picture bot and another bot you're talking to at the same time. I've looked into how the data is saved after a chat, and there are no markers in the JSON for which character you're talking to, but if those could be added, it would be as simple as extra slots on the character tab, extra buttons that generate based on the character slot, and a dropdown for random, sequential, or round-robin order when you press generate. The tags added to the JSON don't even have to be the character names; they could just be a number tied to the character slot. Because you're just firing groups of text at the model and seeing how it responds, all you have to do is mechanize what gets fired at it, with whichever persona has read the context (and has its own context) as the generating factor. Anyway, ideas I'll be investigating.
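A purely illustrative sketch of that idea; the slot tags, modes, and history format here are assumptions for demonstration, not oobabooga's actual JSON schema.

```python
import json
import random

# Chat history where each entry is tagged with a character slot number instead of a name.
history = [
    {"slot": 0, "text": "You enter the lab."},         # slot 0 = the user
    {"slot": 1, "text": "Welcome to my laboratory!"},  # slot 1 = first bot character
]
bot_slots = [1, 2]  # character slots the AI can speak from

def next_speaker(mode: str, turn: int) -> int:
    """Pick which character slot generates the next reply."""
    if mode == "random":
        return random.choice(bot_slots)
    # "round robin" / sequential: cycle through the slots in order
    return bot_slots[turn % len(bot_slots)]

for turn in range(3):
    slot = next_speaker("round robin", turn)
    # A real extension would build the prompt from `history` as seen by this slot
    # and fire it at the model here; we just append a placeholder reply.
    history.append({"slot": slot, "text": f"(reply from character slot {slot})"})

print(json.dumps(history, indent=2))
```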
Yes, multi character would be crazypants fun 😀
I have one more pressing matter regarding textgen: less cumbersome switching between my own personas. That could be part of the multi-character chat, where you choose which character you impersonate yourself while letting the AI handle the rest. At the moment I'm stuck manually renaming portraits for my chats, adding descriptions of my own persona to the character card, and changing my character's name every time I have to switch. I wish that were less tedious.
Thanks!
Welcome! And thank you 😃
Ohhh man, you are so awesome, We need to see some Lora training in Oobabooga! Let's do this thing!
Never a dull moment with you bud
Woo, yeah! 😃
I can make it work, but why am I not getting this list of prompt words? Instead I get answers (sometimes with the correct picture, sometimes not, and sometimes with an answer like "I can't generate pictures" but still with a picture). I feel I am using the wrong option.
What is to be done with the files and folder in the extension folder? Does just the folder go into the SD extensions folder, or is the entire extension directory renamed and put in the SD extensions folder? Thank you
Has anyone developed an initial prompt to make a strong storyline with a choose your own adventure idea for this image/text combination?
I absolutely would love a video about training models. GPT-4chan left me wanting more fine-tuned, 4chan-like responses to play around with, without using 4chan itself like I've had to since 2011.
I get an error every time I use it, about a KeyError: 'Display'
Same here.
Same...
Try different configurations with chat and the extension enabled. I got it to run only when not changing the interface and enabling the extension at the same time.
Super underrated channel in AI! 10/10
Appreciate that 😀
WOW ! Super impressive! ....I'm just about able to keep up with the rapid development of Stable Diffusion...
May I ask: my model doesn't send photos unless I force a picture response. Even if it says it's sending a pic, there's no pic. Is there any underlying thing I have to code?
Set it to “adventure mode” for continuous pictures
Can I do this with 8 GB VRAM? Maybe choose a different model, or make textgen use the CPU instead and SD use the GPU?
Sure! You can try with much smaller models 😀
So where do we find the output images?
I am interested in the training tab, I have no clue what it does but I hope you cover it.
Ah, it’s quite magical 😉
I've got a problem. If I say something like "show me a cat", it generates about 10 images: 1st image "C", 2nd image "ute", 3rd image "cat", 4th image "portrait"... and it goes on and on till the prompt is finished 😅 What am I doing wrong?
You’re generating one word at a time rather than the whole prompt. You can do that, but it’s best to let it not stream 😉
@@NerdyRodent so that's what the "no stream" option is for 😅.. thanks bro.. you are the best
Whereabouts does it save the pictures it generates? Even when I tick the box 'Keep original received images in the outputs subdir', it doesn't actually seem to do that. I'm guessing this means the outputs folder in the SD directory?
EDIT - they're actually stored in the \oobabooga-windows\text-generation-webui\extensions\sd_api_bot\outputs folder, in case anyone else is wondering.
Super cool
Please can you help me? I receive a KeyError: 'images'
For some reason it's sending the prompt word by word and creating a picture for each...
Add the --no-stream launch flag in the .bat file
@@AltoidDealer Thank you so much, that fixed it.
@@AltoidDealer You are an absolute legend m8. Had the same issue and this fixed it
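A toy illustration of why the word-by-word images happen (assumptions only, not the extension's actual code): with streaming on, the output hook fires once per partial reply, so SD is asked for an image each time, whereas --no-stream means a single request for the finished prompt.

```python
# Simulated streamed output: each element is the partial reply so far.
streamed_chunks = ["C", "Cute", "Cute cat", "Cute cat, portrait"]

def send_to_sd(prompt: str) -> None:
    # Stand-in for the call that would hit the Stable Diffusion API.
    print(f"-> would request one image for: {prompt!r}")

# Streaming: one SD call per chunk, i.e. the "an image for every word" symptom.
for chunk in streamed_chunks:
    send_to_sd(chunk)

# With --no-stream: a single SD call for the completed prompt.
send_to_sd(streamed_chunks[-1])
```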
No connection could be made because the target machine actively refused it
I have the problem above.
Do you know how to solve it?
Try connecting using telnet. If that can’t connect either, you’ve likely got a firewall preventing connection between your source and destination hosts.
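If telnet isn't handy, here's a minimal Python stand-in for the same check; the host and port are assumptions, so point them at wherever your API should be listening.

```python
import socket

host, port = "127.0.0.1", 7861  # assumed Stable Diffusion API host/port

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as err:
    print(f"Could not connect to {host}:{port}: {err} (check firewalls and --listen)")
```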
I am interested in training custom text lora models.
Hello, about the training: I wonder if it will be possible to train with a model like LLaMA?
I’m sure you could do a Lora with Llama, yes
Thanks. On the GitHub page for the character model it says, "As the script has now been updated in the original repo, you can simply use that extension in "Adventure Mode"" -- can you explain what is meant here? Are you saying we don't need to download this anymore, just enable it in the OobaBooga UI settings under Extensions and reboot, and it's there already?
Exactly that, yes. Simply use the existing sd extension!
@@NerdyRodent Thanks, I was going to post that I don't even see a BOT extension at all, but I enabled sd_api_pictures and restarted, now it comes up in chat mode on the bottom. That is all there is to it?
*edit, yup it works, thanks!
@@cleverestx yup! As mentioned, just enable adventure mode so it always generates images
Any recommendations for someone with a 3070 on how to divide resources?
It didn't work; I get an error accessing gradio.shared["display"]. Does it require Automatic1111 to be run with sharing enabled?
Just the api is fine, though you can share it if you like!
@@NerdyRodent I asked about the sharing because the error I got was related to the code accessing the shared methods of gradio
Is there any guide for training anywhere?
Is there a sufficiently simple way to use what I see as "shared GPU memory"?
I've got 64 GB RAM and 8 GB VRAM, and I always wonder when the 32 GB of "shared GPU memory" would ever be used.
I'm not seeing sd_api_bot; was this removed in newer web UI updates?
You do indeed only need the character now
Great video! But I have a question: how could I make the character update its appearance over time, like scars or different clothes, in a way that it remembers the last image created? Is it something about the TEXTGEN PREFIX?
The only way I found to do something nice is adding "*You describe your aspect*" at the end of my sentence.
I don't see where exactly the sd_api_bot extension is? It's not in the bot link, there's only the character?
Yup - as per the GitHub & video descriptions, you only need the character now 🙂
@@NerdyRodent My bad, I started webui-user.bat --api instead of webui.bat --api
Awesome! Could you please update this for SDXL?
I haven’t tried it, but the API should work for SDXL?
Tremendous! 🎉
yes please do the training tab,nice vid
this is so awesome!
Yup, it’s fun 😃
Make a video on auto gpt plz having pinecone errors with it and all the videos are already outdated
Amazing stuff!
Yup, it’s great fun!
First, excellent video. Thank you for it.
Second, do you know a good way to combine the SD prompts and narrative text? I've had mixed success using the conversation model in the character by putting the prompts, then
, then a narrative description, but it's not always consistent.
Third, is there a way to hide the prompt text? By accident I discovered that with the sd_api_pictures extension, text within asterisks now just vanishes. Although the text appears in one part of the log, it is removed when sent to Automatic1111 and the text channel, so that doesn't quite address the matter.
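For reference, a hedged illustration of that asterisk filtering, mimicking the behaviour described above rather than the extension's actual code:

```python
import re

# Drop *asterisk-wrapped* narration before the reply is used as an SD prompt.
reply = "masterpiece, portrait of a nerdy rodent *She smiles and hands you the photo*"
sd_prompt = re.sub(r"\*[^*]*\*", "", reply).strip()
print(sd_prompt)  # -> "masterpiece, portrait of a nerdy rodent"
```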
I would love to use the text generation web UI, but its API tools are absolutely busted. I've tried so many different ways to build an app to make it run. But it also fails to work on a local network, even after enabling --listen.
api and listen are very different options ;)
@Nerdy Rodent yeah... I know. I'm saying both are busted.
Using the built-in API always gives an error when initiating it through code on the local machine. Even the example code doesn't function; it just gives an error.
And I can't even connect to the standard web GUI from another computer, even after enabling the listen functionality.
Update: local network fixed.
Windows' default settings block all incoming traffic, and it refused to turn off.
The API is still refusing all attempts, though.
If anyone has trouble getting sd_api_bot to be seen by Oobabooga, it might be because you now need to put the extension in the Extensions folder, not the main directory. Unfortunately, I could not get the bot to hook up with SD. It just kept hanging when I asked for the image to appear. Great idea, though!
It's exactly what I'm going through; I haven't found a way to get it to communicate with SD.
Just set the --api flag for Stable Diffusion; it will work then.
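A quick way to confirm the API is actually enabled, sketched with the default local port as an assumption:

```python
import requests

# If A1111 was started with --api, this endpoint should respond; adjust host/port to your setup.
url = "http://127.0.0.1:7860/sdapi/v1/sd-models"

try:
    r = requests.get(url, timeout=10)
    print(r.status_code, "API reachable" if r.ok else "route not found - was --api set?")
except requests.RequestException as err:
    print("Could not reach the server:", err)
```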
You're killing me with your humor
😉
Brilliant video, as always 😌👍 I don't have the sd_api_bot extension available, nor even listed. Google didn't give any results; would you know where to look for it? I have Text Generation WebUI running on port 7860 and the latest Auto1111 running on port 7861. AFAIK they can't run on the same port, or I don't know how to set it up like that. Tyvm in advance
Links are in the video description
Nice, but does it work with Fooocus or ComfyUI?
Yes, please make a video on Lora training in oobabooga.
Thanks for the tutorial! I ended up using the smallest OPT due to the size of my RTX 3070. I'm able to receive images BUT they're very basic, they seem to stutter (e.g., monkey, monkey, monkey), and don't add much to the prompt. Plus, I'm receiving at least 25 images with every prompt. Any ideas why this might be happening?
If you’re getting repetition, then you can turn the repetition penalty up
Excellent, GJ! Only that I have a problem: I get an image for every single word in the prompt, not just one picture for the whole prompt. Do you know what might be the issue?
You're sending the API one word at a time. Send the whole prompt at once instead :)
Is this possible in Tavern AI ?
Not that I’m aware of
Classic Nerdy Rodent.
Wow, if you could include AI voice detection and prompting by voice, you could create a powerful tool for DnD players to visualize their surroundings...
Try the whisper speech to text 😉
My textgen crashes every time I apply your sd_api_bot extension in the interface tab. I hope this comment doesn't get deleted like the other one did.
Try with the latest update 😉 Also note that any comments with links will automatically be hidden by RUclips
@@NerdyRodent lol, I do have the latest SD and textgen. And the comment that got hidden or deleted didn't have a link in it; I was only saying that I was getting errors. The error I'm getting is in a comment on your new video.
Yes, teach us how to fine-tune and use the LoRAs tab! I clicked the bell icon and subscribed.
Hey Nerdy,
What is the name of the tool which shows your PC stats in the upper right corner?
That’s conky
How only 25k? This feels like a true treasure
We're gonna need a bigger hard drive!
Very much so!
thank you dude, nice one
No problem!
I'm also old, I remember what choose your own adventure books are
😀
Can you hook this up to vicuna?
You can use any supported model you like 😄
@@NerdyRodent have you seen Vicuna's capability? It's 90% as good as ChatGPT. It just got released a day or two ago; please do a video on it.
@@levansegnaro4637 Yup - it's just another model. Download & run with this as normal. It's not open / for commercial use though
@@NerdyRodent oh that's awesome. A bit off topic, but can you connect any of these GPTs to a music maker like Dance Diffusion or something?
@@levansegnaro4637 I guess you could use stable riffusion?
At this point, someone should train an LLM with instructions on how to work with LLMs, how to install them locally, configure them for your hardware, etc. It'd be so much easier if we could just ask InstallGPT to give us the commands necessary for our specific computer and let it handle it! 😅
AGI is coming! 😀
Free is still a little too expensive for me. Is there any way you could lower the price so more people may enjoy your amazing work?
Nerdy 💛💯
Hey Crystal, thanks for dropping by! 💚
@@NerdyRodent 👍
nice!
undeniable charm 6:14
can you make a text to speech voice cloning video :)
AI Voice Cloning - Tortoise TTS
ruclips.net/video/J3-jfS29RF4/видео.html
This needs a detailed installation tutorial!
Ikr! Sorry I couldn’t make it more complicated than a 1 second copy & paste 😁
This is absolutely huge; imagine all the (n)SFW images we can produce!!! Jokes aside, I kinda have an issue with Stable Diffusion: it basically says it refuses to connect, so I must be doing something wrong.
edit: found the issue.
Make sure you’re connecting to the sd api on the right port, or if it’s on another computer that you’ve not got a firewall in the way
Damn you 😅
😃
Are we still playing with Windows? Dude, it's 2023 where's the Mac Install?
Windows? No one still uses that, do they? Linux all the way - it’s like a real Mac! 😉