- 13 videos
- 376,993 views
Anime AI Art Tutorials
United Kingdom
Joined 24 Jan 2023
This channel focuses on making awesome anime art with Stable Diffusion AI. The software used is the Automatic1111 WebUI with various models.
Ruining Your Favorite Animes with AI
Slice of death. CGDCT - Cute Girls Doing Cursed Things.
How to use ControlNet AI to repaint images to change anime genre. The Automatic1111 WebUI is used for this Stable Diffusion guide.
We have the slice of life classics like Non Non Biyori, Is The Order A Rabbit, Nagatoro-san, K-On. They'll become zombies, vampires, demons; none shall escape unscathed!
6,688 views
Videos
OpenPose Tutorial: Be a Poser
20K views · 1 year ago
How to use ControlNet with the OpenPose Editor extension. Featuring Hitagi Senjougahara doing the famous Shaft studio head tilt, from the anime Bakemonogatari. We explore what the joints mean and how to align them with background guide images to finely control the posing of your AI art characters. The Automatic1111 WebUI is used for this Stable Diffusion guide. The OpenPose model is the cut dow...
Shortest Dreambooth Tutorial
32K views · 1 year ago
2 minute tutorial for Dreambooth. Trained on three pictures of Kaguya-sama from the anime Love is War. I used the Anything 3.0 model and set the training at 900 steps (~15 minutes). This guide is for getting you started with the bare minimum. I focus on the essentials and reduce the complexity as much as possible. The end result is overfitted but it is a useful starting point to demonstrate how...
Using VAEs to Fix Dark Washed Out Images
25K views · 1 year ago
VAE tutorial for quickly improving your AI art. Colors will be richer and more vibrant. VAE stands for Variational Autoencoder; it is essentially an extra set of learned weights that decodes the latent image to a better final result, given the same prompts. I'm using the Abyss Orange Mix 2 model with the orange mix VAE to demonstrate the concepts.
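As a toy illustration of why swapping the VAE changes colors at all (this is my own hypothetical numpy sketch, not the WebUI's actual code): think of the VAE as a learned decoder from latents to pixels. The same latent decodes to a flatter, washed-out result with a weak decoder and to a fuller dynamic range with a better one, even though the prompt and seed are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=4)  # a fixed latent, analogous to the denoised output

# Two hypothetical decoders: same input, different learned weights.
W_washed = 0.1 * rng.normal(size=(8, 4))   # weak decoder -> low-contrast pixels
W_vibrant = 1.0 * rng.normal(size=(8, 4))  # better decoder -> fuller dynamic range

pixels_washed = np.tanh(W_washed @ z)
pixels_vibrant = np.tanh(W_vibrant @ z)

# The same latent decodes with less contrast (lower std) under the weak decoder.
print(pixels_washed.std(), pixels_vibrant.std())
```

This is only an analogy for the "same prompts, better mapping" point in the description; real VAEs are convolutional networks, not a single matrix.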
Generating Good Girls with Guns
10K views · 1 year ago
Some anime history about the girls with guns genre, and then a bit of AI generation using Stable Diffusion. The animes are: Noir (Yuki Kajiura makes great music for this, btw), Madlax, El Cazador de la Bruja, Black Lagoon, Canaan, Gun Gale Online, Akiba Maid Sensou/Wars, Lycoris Recoil, Kantai Collection. I use the AbyssOrangeMix2 model and attempt to rein in the AI's tendency to draw bad hands and guns. The interfa...
ControlNet AI Is The Storm That Is Approaching
11K views · 1 year ago
What if Genshin Impact and Devil May Cry had a crossover? I used AI to draw Raiden cutting Timmie's Pigeons with Vergil's Judgement Cut. I used Stable Diffusion with ControlNet's Canny edge detection model to generate an edge map which I edited in GIMP to add my own boundaries for the aerial slashes. Raiden's model is mostly preserved because of the resolution of the edge map and how faithful C...
Install ControlNet for Next Level AI Art
39K views · 1 year ago
Correction: at 0:14 the repo should be github.com/Mikubill/sd-webui-controlnet. A fast way to try ControlNet using Automatic1111 Stable Diffusion. Minimal settings needed. I focus on the Canny model, which is good for general edge detection. There's also the scribble model, which is great for building upon random sketches. The fp16 safetensors versions of the models are much smaller and easie...
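The Canny preprocessor boils the input image down to an edge map, and generation is then conditioned on those edges. Real Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the sketch below keeps only the gradient-magnitude core of the idea, in plain numpy (my own toy, not ControlNet's code):

```python
import numpy as np

def edge_map(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Toy edge detector: Sobel gradients + a single threshold.

    Full Canny also blurs, thins edges, and uses two thresholds;
    this keeps just the gradient-magnitude step.
    """
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * sobel_x).sum()
            gy[i, j] = (patch * sobel_y).sum()
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold * magnitude.max()).astype(np.uint8)

# A tiny image: dark left half, bright right half -> one vertical edge.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = edge_map(img)
print(edges)
```

The binary map that comes out is what you can edit in GIMP (as in the Genshin video) to add your own boundaries before generation.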
Quick Hypernetwork Training Tutorial (feat. Neurosama)
29K views · 1 year ago
Simple and easy way to train Stable Diffusion with built-in Automatic1111 functionality. Minimal settings needed. You can watch Neurosama at www.twitch.tv/vedal987
I used my 90s photography knowledge to make retro AI images
9K views · 1 year ago
Prompt generation ideas from the perspective of a niche photographer. Stable Diffusion used with Automatic1111. This guide goes into Lomography using the classic LOMO LC-A camera, for that colour-unbalanced, slightly gritty retro look. This is a prompt tutorial for replicating that look. #aiart #lomography #animeart
Quick Guide: Install Stable Diffusion on Windows
19K views · 1 year ago
Quick tutorial on how to set up Stable Diffusion locally on your computer. This is the minimum number of steps needed to get it working. We will use Automatic1111 to make it accessible via a browser, Git to download the code, and Python to run it.
Making Lacari's Lacgirl with Stable Diffusion AI
6K views · 1 year ago
Engineering Lacari's OC (original character) with some RNG and clever prompting. Lacari's channel is: www.twitch.tv/lacari This was done using the AbyssOrangeMix2_sfw stable diffusion model with the orange mix VAE. Please like and subscribe to receive the content that I have planned on my roadmap. It'll be an interesting journey!
How to Generate Better Monster Girls with AI
38K views · 1 year ago
A tutorial for alternating prompts and scheduled prompts. Engineer a horrific and/or awesome monster girl today! This was done using the Anything-V3.0 stable diffusion model with Anything-V3.0 VAE. Please like and subscribe to receive the content that I have planned on my roadmap. It'll be an interesting journey!
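For reference, the Automatic1111 syntax behind this video: `[A|B]` alternates between the two sub-prompts on every sampling step, while `[A:B:N]` uses A until step N and then switches to B (a fraction like 0.5 means halfway through the run). Below is a small simulator of that schedule, written for this note rather than taken from the WebUI's parser:

```python
from typing import Optional

def active_prompt(step: int, total_steps: int, a: str, b: str,
                  switch_at: Optional[float] = None) -> str:
    """Which sub-prompt is active at a given sampling step.

    switch_at=None -> alternating syntax [a|b]: a on even steps, b on odd.
    switch_at=0.5  -> scheduled syntax [a:b:0.5]: a for the first half, then b.
    (Toy re-implementation for illustration, not the WebUI's code.)
    """
    if switch_at is None:
        return a if step % 2 == 0 else b
    boundary = switch_at * total_steps if switch_at <= 1 else switch_at
    return a if step < boundary else b

# [snake|girl] over 4 steps alternates: snake, girl, snake, girl
print([active_prompt(s, 4, "snake", "girl") for s in range(4)])

# [snake:girl:0.5] over 4 steps switches halfway: snake, snake, girl, girl
print([active_prompt(s, 4, "snake", "girl", switch_at=0.5) for s in range(4)])
```

Alternating blends the two concepts into one hybrid (the monster-girl trick), while scheduling lets one concept lay down the composition before the other takes over the details.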
Quick Hack to Improve Your AI Art 1000%
132K views · 1 year ago
Takes only 1 minute. Beginners will miss this trick. It will vastly improve the quality of all of your art generation. Note: this only works for most anime-based models. Realistic models will not be affected. This was done using the AbyssOrangeMix2_sfw stable diffusion model without a VAE. Technical details: Steps: 34, Sampler: DDIM, CFG scale: 8, Seed: 2007333899, Size: 512x512 Prompts are: gi...
00:55 that's funny
I downloaded it but I cannot find the canvas. The problem is that the scribble I downloaded didn't have the "scribble" preprocessor. There are only options like "scribble_pidinet" or something similar, multi-named ones, but not plain "scribble".
what happens if we put multiple brackets, like (((...))) something like this?
can we tell the clothes where to be (i.e. maybe showing a peek)?
What do I do if all the generations I try turn gray?
thanks
Bro made some bangers then quit
Hi! I’m interested in a business collaboration. Could you please share your email? Thanks
open pose editor wont appear, help!
I have a question (hope somebody responds soon): what if I already have a sketch with detailed lineart and only need the AI to paint it for me?
Where can I get stable diffusion or how can I download it? I would like to try those tips, thank you
Hey buddy, the linaqruf/anythingv3.0 no longer exists, could you correct this video?
There's no "preprocess" in the Train tab :/
yeah same. I hate how extensions always change their UI's which messes up these tutorials
"Improve" "your" AI """art"""" 💀
Literally
The processor tab is absent now in stable diffusion v1.8.0. useless now but great video anyway.
Interesting, it made it better, thanks. I have been trying with my AI art and it's been terrible, especially with anime. Thanks for the AI tutorial.
this video sucks doesnt even explain how to install the openpose model. go check AI outline better tutorial
I NEED YOUR HELP CAN U MAKE VIDEO FOR IMAGE GUIDANCE TO GUID THE RESULT IN STABLE DEFUSION IS THAT POSSIBLE ?
savior
I'm begging you to come back.
Exception training model: ''NoneType' object has no attribute '_unscale_grads_''. pls help
Good video dude! thanks
looks like this could use a VAE as well
maybe a tutorial on how to insert my (for example) face into this art?
Getting good tutorials for Stable Diffusion is like generating images without prompts at all
When I'm creating a new canvas my brush is in white!! How do I change that to black?
it was grey and low quality and stayed that way
What about those config files?
HuggingFace URL?
Simple, fast, efficient.
what gpu do you use in this vid ?
still waiting for my first guy who got me into this to make a new tutorial or any vid :D +.+ woh! :')
damn! it's even better now, come back!
Wow, nice! Thanks! I want use stable diffusion to generate 'hot anime women', so i can catfish and drain idiot simps on twitter. XD Yeah, it's extremely degenerate but It's for a good purpose which is paying my rent lol C:
what the hell. lol
Training finished at 0 steps.
when anime with guns is pg13 🤮🤮🤮🤮🤮🤮🤮
whats the sense of speeding through a tutorial only to give a half a$$ method with 3 training images?
this or Lora?
bro
Save image every 50 steps, but where are the images saved to??
Interesting stuff.
Best 2 min tutorial ever in the history of tutorials.
Still waiting for more tutorials 🔥🔥
still waiting too :D !
good stuff m8. fast and simple.
i think a better solution to this is to create the weapon using img2img alone with the same checkpoint, then create the character with the pose you want and the hand ready for the gun. With Photoshop you just add the gun to the hand, then run it through img2img to fix any possible lighting/sharpening/hue and saturation issues. Just a hypothesis, still learning how to use SD. xD
WHAT its that f****** simple?
2 months learning ai art and this is all i needed to do
is it same for google colab users ?
What's the ai you used? Please
time to make chibidoki its going to be hell