AIKnowledge2Go
  • 36 videos
  • 514,922 views
Stable Diffusion 3.5 is here. NEW and UNCENSORED Image Model!
Stable Diffusion is back, and it's making waves with the release of version 3.5. They say it can finally nail those elusive, photorealistic details we’ve been waiting for. But here's the question: is it actually better than Flux? And if so, what secrets does it hold for you to unlock? Join me as we explore SD 3.5’s power, compare it to Flux, and find out what each model brings to the table.
From realistic textures to the age-old battle with hands, the competition is heating up. And if you’re ready to get started, I’ll show you the tools, tips, and tricks to dive into the latest in AI image generation.
Book a one-on-one appointment: koalendar.com/e/1hr-1-on-1-stable-diffusion
Patreon Free Stu...
Views: 4,980

Videos

CGDream - Create MIND-BLOWING AI Art With This Online Tool using Flux!
436 views • 1 month ago
CGDream offers some fantastic features that are definitely worth checking out. Use the sponsored link here: bit.ly/4eGmEaq The standout for me is FLUX, which comes with numerous LoRAs and an AI prompt creator. Additionally, the ability to use 3D model input for AI-generated images is really impressive. Honestly, I'm considering canceling my Midjourney subscription in favor of this because the i...
Easiest FLUX.1 LORA Training Guide - Full Workflow
17K views • 1 month ago
In this tutorial you will learn the most reliable method for consistent characters in AI diffusion models: LoRAs. I'll walk you through training a LoRA (Low-Rank Adaptation), an AI model add-on that helps introduce specific characters, styles, or concepts the AI isn't familiar with. Link to my amazing sponsor Pic Lumen AI: bit.ly/4dLzmoh Book a one-on-one appointment: koalendar.com/e/1hr-1-on-...
Running Flux.1 on Low VRAM: A Practical Guide
8K views • 2 months ago
In this video, I'll show you how to run Flux.1 on your PC, even if you don't have the latest hardware. Learn how to use Forge, a Gradio UI-based fork of Automatic 1111, to create stunning images on a low-spec machine. I'll guide you through setting up Stability Matrix, installing Flux models, and using Civitai for prompt inspiration. Whether you're a beginner or an experienced user, this tutori...
Flux.1 Local Install Made Easy - The BEST FREE UNCENSORED AI Model Now!
10K views • 2 months ago
In this video, I introduce Flux, a powerful new text-to-image AI model from Black Forest Labs, built by former Stability AI developers. Flux surpasses Stable Diffusion 3, offering incredible prompt accuracy, stunning full HD outputs, and more. I'll guide you through trying it online, installing it locally, configuring prompts, and upscaling images. Whether you're using the Schnell, Dev, or Pro ...
BIG Stable Diffusion 3 news! Major Changes & What You Need to Know!
1.5K views • 4 months ago
In this video, we dive into the latest developments surrounding Stable Diffusion 3, a tool that initially left many in the AI and creative communities feeling let down. Despite its groundbreaking potential, Stable Diffusion 3 faced significant hurdles at launch, including problematic image outputs and a confusing licensing agreement that sparked widespread concern. However, after considerable f...
Stable Diffusion 3 - The ULTIMATE Guide TO AMAZING IMAGES
3.6K views • 4 months ago
Today I will teach you how to generate Amazing Images with Stable Diffusion 3. We'll delve into practical steps using ComfyUI within Stable Swarm, guiding you through the process of setting up your models and encoders correctly. Discover how to optimize your prompts for breathtaking, realistic images, and understand the critical role of certain nodes for fine-tuning your outputs. Join me as I u...
How to install Stable Diffusion 3 local (Effortless Guide)
8K views • 4 months ago
Install Stable Diffusion 3 on your local PC. In this video, I will guide you through the installation process, highlighting the dos and don'ts to ensure a smooth setup. Additionally, I will provide you with valuable tips on how to effectively utilize this new model to maximize its potential. Stay tuned to unlock the full capabilities of Stable Diffusion 3 and take your content creation to the n...
The EASIEST way to Object Placement in Stable Diffusion A1111
5K views • 5 months ago
Tired of Stable Diffusion misplacing objects in your images? In today's video, discover the Regional Prompter, a revolutionary tool that allows you to define specific regions within your image for precise object placement. Compatible with both SDXL and SD 1.5, this tool is set to transform your creative process. I'll walk you through setting it up. Watch as we create an image featuring a werewolf...
Unlock Your Creativity: 10 Essential AI-Art Tips
823 views • 6 months ago
Ready to elevate your AI artistry? This video packs a series of essential tips and ingenious methods that will take your skills to the next level. Whether you're a beginner or an experienced artist, you'll find valuable insights and techniques that can transform the way you approach AI-driven art. In This Video: • Learn unique strategies that challenge conventional methods. • Discover simple tw...
Game-Changing DALL-E 3 Features Revealed!
526 views • 7 months ago
Explore the New Frontier of AI Art with DALL·E 3: Dive into OpenAI's latest updates, transforming how we create and edit images. We highlight the 'in-painting' feature, allowing precise edits within images: alter eye colors, remove objects, or change backgrounds with ease. We also introduce simplified aspect ratio adjustments for your desired image dimensions and tackle the introduction of 'styl...
LEARN ANIMATE-DIFF IN 10 MINUTES (EASY FOR BEGINNERS!)
21K views • 7 months ago
Dive into the future of AI-driven animation with today's video, where we uncover the magic of creating breathtaking animations using Stable Diffusion and AnimateDiff techniques. Whether you're refining your skills or just starting out, I'll walk you through beginner-friendly workflows and advanced techniques to unlock your creative potential. From simple text prompts to complex video transform...
The Ultimate Guide to A1111 Stable Diffusion Techniques
46K views • 8 months ago
Upscale AI-generated images to 4K, 8K, or even 32K with this AI tool from my sponsor: bit.ly/3XiWu7Q Dive into the world of high-resolution digital art as we embark on a five-step journey to transform the ordinary into extraordinary 4K and 8K visual masterpieces. From text to image prompt engineering to advanced inpainting techniques, and finally to a groundbreaking upscale process, this tutori...
OpenAI Sora -This Will Change AI-Video Forever!
390 views • 8 months ago
Master Stable Diffusion A1111 on Your PC
2.2K views • 9 months ago
ControlNet is a game changer: OpenPose. Automatic 1111 and ComfyUI.
8K views • 9 months ago
Unlocking AI-Image Generation: The Ultimate ComfyUI Guide
2.8K views • 9 months ago
Unlock the Full Potential of Your AI Images and Videos with this amazing tool!
1.6K views • 10 months ago
Game-Changing A1111 Tools to Revolutionize Your Workflow!
8K views • 11 months ago
Stop Paying for Midjourney! Level up your AI-Art with Dalle-3 and Chat GPT.
3.4K views • 1 year ago
Turn Your Images into Animated Masterpieces Using AI Tools
2.6K views • 1 year ago
Can This NEW & FREE Midjourney Challenger Bring A Revolution in The World Of AI Image Generation?
1.3K views • 1 year ago
Unlocking the Secrets of Automatic 1111 with SDXL Checkpoints
7K views • 1 year ago
Master the Stable Diffusion XL Installation - Local Tutorial
14K views • 1 year ago
The AI Art Technique That Will Change Everything
8K views • 1 year ago
This new Outpainting Technique is INSANE - ControlNet 1.1.
25K views • 1 year ago
Best Practice Workflow for Automatic 1111 - Stable Diffusion
245K views • 1 year ago
Unveiling the secrets of Controlnet 1.1 in Automatic 1111 Multi-ControlNet.
6K views • 1 year ago
Supercharge Efficiency: Save Time in your Stable Diffusion with After Detailer
22K views • 1 year ago
Unleashing Image Generation Abilities: ControlNet 1.1 with OpenPose
10K views • 1 year ago

Comments

  • @amkkart
    @amkkart 1 day ago

    Is there a way to use Flux with Deforum? If so, an installation video would be great.

  • @Marcus_Halberstram
    @Marcus_Halberstram 1 day ago

    hehe... after

  • @admiralevan
    @admiralevan 3 days ago

    the results seem a bit deepfried in my tests

  • @Djonsing
    @Djonsing 3 days ago

    *How can I make Image to Image here?*

  • @Varibam
    @Varibam 3 days ago

    Omg, I tried it and it's so much better than Flux. Faster, and it doesn't need that much VRAM. Thanks for sharing the workflow on your Patreon.

  • @rhy2k3
    @rhy2k3 4 days ago

    I'm a little new to all of this, but I know a bit about ComfyUI. One question: can you use Stable Diffusion 3.5 for image-to-video or text-to-video, or is it just for image generation? Also subscribed, of course 😊

    • @AIKnowledge2Go
      @AIKnowledge2Go 4 days ago

      Unfortunately, a video model has yet to be trained to work with Stable Diffusion 3.5. More recent video models that are unrelated to Stable Diffusion but work in ComfyUI are CogVideoX and Mochi. Both require a lot of VRAM, or patience.

  • @BeginsWithTheEnd
    @BeginsWithTheEnd 5 days ago

    doesn't work at all for me

    • @AIKnowledge2Go
      @AIKnowledge2Go 4 days ago

      I'm sorry to hear that. What UI and model are you using?

  • @AIKnowledge2Go
    @AIKnowledge2Go 6 days ago

    You can find the prompts and workflow on my Patreon for FREE: www.patreon.com/AIKnowledgeCentral Do you struggle with AI art? Head over to my Patreon to grab free stuff, no membership needed! Ultimate Beginners Guide: www.patreon.com/posts/sneak-peek-alert-90799508 Free workflow guide: www.patreon.com/posts/get-your-free-99183367 Style Collection: www.patreon.com/posts/my-collection-of-87325880 I also offer one-on-one sessions: koalendar.com/e/1hr-1-on-1-stable-diffusion Happy creating, Chris

  • @SHPjealousy
    @SHPjealousy 6 days ago

    Hm, looks like I'll have to learn ComfyUI ;)

    • @3DWork
      @3DWork 6 days ago

      It's worth it!

    • @AIKnowledge2Go
      @AIKnowledge2Go 6 days ago

      ComfyUI is getting a cleaner UI soon, which will make things a bit easier. As soon as I get into the beta, I'll make a video about it.

    • @SHPjealousy
      @SHPjealousy 6 days ago

      @@AIKnowledge2Go Cool. Until then I'll stick with a mix of XL and 1.5.

  • @lukasgruber1280
    @lukasgruber1280 7 days ago

    I went from SD to Flux, but I'm happy to return.

    • @AIKnowledge2Go
      @AIKnowledge2Go 7 days ago

      Yeah, me too. I like Flux, but somehow SD feels like home. Maybe it's the broken hands 😂 But there is a LoRA for this on Civitai. I haven't tested it yet, though.

    • @ronbere
      @ronbere 7 days ago

      Flux is better...

  • @itSinger
    @itSinger 7 days ago

    You earned a new sub.

    • @AIKnowledge2Go
      @AIKnowledge2Go 7 days ago

      Thanks for joining the community! I’m excited to have you here!

  • @KeeKan-p9s
    @KeeKan-p9s 12 days ago

    Could you please give tips on dataset preparation for Flux LoRA training for clothing?

    • @AIKnowledge2Go
      @AIKnowledge2Go 7 days ago

      Hi, you need images that represent the clothing you want to train on, including different angles, poses, and lighting conditions. If possible, use images with simple or uniform backgrounds so the model focuses on the clothing rather than complex scenery. Remove blurry, low-quality, or overexposed images. Make sure the dataset includes various body poses, especially if the clothing involves different parts like sleeves, collars, or drapes.

  • @rifz42
    @rifz42 15 days ago

    Thank you!! I don't know what you are looking at when you say over- or under-trained. How do you tell? Knowing what to look for would help a lot! : )

    • @AIKnowledge2Go
      @AIKnowledge2Go 7 days ago

      If your outputs start to look too much like specific training images, or capture excessive detail that wasn't intended, the model is over-trained. Over-trained models may also struggle to respond to prompts that deviate from the training data. If the model produces outputs that are too generic, too vague, or don't look like the specific character in your dataset, it needs more training.

  • @AIKnowledge2Go
    @AIKnowledge2Go 18 days ago

    Stable Swarm UI is now independent from Stability AI and is called Swarm UI (without "Stable"). You can find it here: github.com/mcmonkeyprojects/SwarmUI

  • @7nesctv
    @7nesctv 19 days ago

    Sir, how do I display UI options: SD, XL, FLUX, ALL?

    • @AIKnowledge2Go
      @AIKnowledge2Go 17 days ago

      This should be visible by default in Forge UI. Which version are you using?

  • @YutharSith
    @YutharSith 24 days ago

    Thanks for the good video! Yet whatever I do, I cannot get two people standing full size (left and right side of the image, looking into the distance). I'm using the PONY model; maybe that's why the Regional Prompter is not performing so well?

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      Thanks. To be honest, I haven't tested it with Pony yet. What happens when you render the same image with the Regional Prompter turned off while using the same seed? If you get the same image, then Pony may be the culprit. Have fun testing.

  • @henrysingletary
    @henrysingletary 25 days ago

    Hi, I'm a noob at this stuff. Will the LoRAs we make also work with Forge UI?

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      Yes! But you need the corresponding base model. For Flux I have only tested the nf4 Flux v2 version and the "normal" Flux Dev. GGUF is untested.

  • @Disent0101
    @Disent0101 27 days ago

    There was no clip folder. I also get a "git failed to load" message in the StableSwarm UI.

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      Sorry for the late reply. What happens when you open a cmd window and type "git --version"? It should say something like "git version 2.45.2.windows.1". If not, download and install the latest version from git-scm.com/downloads. Hope that helps.
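The check described in the reply above can also be scripted. A minimal sketch in Python (the helper name `git_version` is mine, not from the video; it only wraps the same `git --version` call):

```python
import shutil
import subprocess

def git_version():
    """Return the installed git version string, or None if git is not on PATH."""
    if shutil.which("git") is None:
        return None  # git missing: install it from git-scm.com/downloads
    result = subprocess.run(["git", "--version"], capture_output=True, text=True)
    return result.stdout.strip()  # e.g. "git version 2.45.2.windows.1"

version = git_version()
print(version if version else "git not found - install it from git-scm.com/downloads")
```

On Windows the same check works in cmd or PowerShell, since `shutil.which` honors PATHEXT.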

  • @itsnotthatserious6639
    @itsnotthatserious6639 1 month ago

    Great video. I'm really sorry, but I had to laugh a lot because of the accent. Still very helpful :)

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      Thanks, yes, I can't hide the accent. 😂 I like to think it has gotten better in newer videos 😂

  • @nextlevelbrosagency
    @nextlevelbrosagency 1 month ago

    Everything worked for me, thanks! The linked PDFs were also very clear and helpful. BTW, when I first went to the Comfy Workflow tab, it didn't load any nodes; it seemed bugged out. After refreshing the interface (F5) I got a message on the Generate tab about migrating to another GitHub repo. I followed the simple directions, and after a restart I was able to continue with the Comfy stuff. Now let's wait for community developments for SD3...

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      It's great that you figured it out on your own. I wouldn't wait for SD3. I want to love it, but even Stability AI seems to have lost interest in fixing it. The news from 2 months ago was that they wanted to release a 3.1 version in the upcoming weeks. I am still waiting. Have you tried Flux yet? If not, you should. It's everything SD3 promised to be.

  • @mrloopreviewer667
    @mrloopreviewer667 1 month ago

    DirectML is the option for AMD.

  • @sovereign1003
    @sovereign1003 1 month ago

    German, correct? ;)

    • @AIKnowledge2Go
      @AIKnowledge2Go 26 days ago

      I just can’t hide it 😂 You’ve got a good ear!

  • @HolidayAtHome
    @HolidayAtHome 1 month ago

    Training on AI images... so you don't just make bad hands, you make bad hands that are themselves based on already-bad hands =D (you filter out the very shi**y ones, but all hands are still at best 90% accurate). It's a nice shortcut for now, but imagine the 5th generation of AI images trained on AI output that never saw a real hand: only hands based on bad hands that are based on bad hands that are based on bad hands... Still, thanks for the video. I can't quite get Flux training running on my machine, so using CivitAI might be the easy route.

    • @Neumahn
      @Neumahn 1 month ago

      Using renders for this seems really dumb.

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      You are absolutely right. I will make an "advanced" version of this training guide, and fixing hands will be one of the talking points.

    • @ielohim2423
      @ielohim2423 7 days ago

      You're wrong. You can always generate images of your character using the "perfect hands" LoRA.

  • @HiProfileAI
    @HiProfileAI 1 month ago

    What about an XY plot with ComfyUI to prepare all the batch images? Thoughts...

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      It's absolutely great to use ComfyUI for this. I once saw a workflow on Civitai that created multiple different images for style LoRAs. Maybe I will make a video about XYZ plots in ComfyUI. Unfortunately, it's not as straightforward as I thought.

  • @phazei
    @phazei 1 month ago

    You mention a research paper, but then you don't link to it anywhere... :/

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      Sorry about that, I had the same link twice in my description. It's fixed now. Thanks for pointing that out: stability.ai/news/stable-diffusion-3-research-paper

  • @ASKofen22
    @ASKofen22 1 month ago

    TensorArt is 100x better than this garbage.

  • @gamalielj4486
    @gamalielj4486 1 month ago

    What version of Stable Diffusion has a working equivalent of the Mov2Mov extension?

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      SD 1.5 definitely, and I think SDXL as well. You can do the same or even better with AnimateDiff.

  • @canttouchthis6018
    @canttouchthis6018 1 month ago

    Weird, here it works, but it runs 10x slower than 1.5, and the quality looks terrible. 6 GB VRAM here. I've also noticed that my GPU power isn't even being used, just 16% usage, temps as low as idle, only the VRAM in use; it takes 8:30 minutes to finish a horrible-quality 512x512 image at 10 steps. Meanwhile, the same prompt and config on SD 1.5 runs fine at 50 seconds. Something is not right.

    • @AIKnowledge2Go
      @AIKnowledge2Go 21 days ago

      I'm sorry to hear that you ran into trouble. Using Flux is really hard with 6 GB VRAM. Have you tried the GGUF models of Flux?

  • @aegisgfx
    @aegisgfx 1 month ago

    Step 1: Make digital girlfriend. Step 2: There is no step 2.

    • @AIKnowledge2Go
      @AIKnowledge2Go 26 days ago

      Step 3: Enjoy your virtual dates without the awkward small talk! 😂

  • @AIKnowledge2Go
    @AIKnowledge2Go 1 month ago

    Link to my Sponsor: bit.ly/4eGmEaq

  • @AIKnowledge2Go
    @AIKnowledge2Go 1 month ago

    Newer Version of this Video: ruclips.net/video/wyDRHRuHbAU/видео.html

  • @kakashi99908
    @kakashi99908 1 month ago

    Still confused about why we downscale to fix anomalies. Why not just keep it at 1x?

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      It depends on whether you set A1111 to inpaint "whole picture" or "only masked". With "whole picture" you might want to set it to 1x. With "only masked", the masked area is inpainted at the same resolution as the rest of the picture. This a) takes a long time, and b) the image cannot hold that much information given its total resolution: an image with 10,000 pixels can't hold 10,001 pixels. You might want to check the newer version of this video: ruclips.net/video/wyDRHRuHbAU/видео.html
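The pixel-budget point in the reply above can be illustrated with a tiny arithmetic sketch. The function name and the numbers are hypothetical (mine, not from the video); it only models the idea that detail pasted back into the canvas is capped by the masked region's on-canvas size:

```python
def effective_detail(crop_w, crop_h, render_w, render_h):
    """Pixels of detail that survive pasting an inpainted render back.

    With "only masked" inpainting, the masked crop may be rendered at a
    much higher resolution, but pasting it back downscales it to the
    crop's on-canvas size, so surviving detail is capped by the crop.
    """
    return min(render_w, crop_w) * min(render_h, crop_h)

# A 256x256 masked region rendered at 1024x1024 still only stores
# 256x256 pixels of detail once pasted back:
print(effective_detail(256, 256, 1024, 1024))  # 65536
```

Rendering the crop far above its on-canvas size therefore mostly costs time, which is why a smaller inpaint resolution can be the better trade-off.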

  • @TruthSurge
    @TruthSurge 1 month ago

    Stupid question here... why isn't there just an install program, created by these 14 people, for Windows and one for Mac, so all this "download this, put it here; download that, rename it and put it there" is auto-done for people? I don't understand why it's so haphazard and confusing that you need a long 15-minute video just to show how to set it up. Thx

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      That's a great point! It would definitely make things easier if there were a streamlined install program. You might want to give this video a try, where I introduce Stability Matrix. It won't get rid of all the annoying stuff, but you can install a lot of UIs with two clicks, and it holds all your models, LoRAs, and created images. It's not perfect, but it makes things a lot easier. ruclips.net/video/bwPk-NXggp0/видео.html

    • @TruthSurge
      @TruthSurge 1 month ago

      @@AIKnowledge2Go Thanks for the reply, but I'm not psychologically at a point where I want to do much work installing and setting up some AI image stuff and spending gobs of time trying to learn it. I did want to check into it and see if I could get some understanding of what's going on. I think $10/mo for some Midjourney images would be okay if I were making more than $10/mo off those images, but I'm not, and I really just wanted to try some of these websites. I did look at NightCafe, but I probably gave up on it too soon. I was wanting to make some landscapes with old mansions, you know, that dark academia/fantasy look, then add some light animation after the fact, maybe with multiple layers in a video editor, etc. Anyway, thanks for the info and explanations!!!

  • @Lukasz490
    @Lukasz490 1 month ago

    One question about checkpoint versions, or versions in particular: you are using version 11 of the RealCartoon checkpoint although there is a version 17 on the screen. Doesn't a higher version mean it is newer or "better"?

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      Hi, yeah, usually newer is better in terms of checkpoints. Some checkpoint creators push out new versions on a daily basis. When I came up with the idea for the video, 11 was the latest, and I don't change versions while working on a project. That's why it's still version 11.

    • @Lukasz490
      @Lukasz490 1 month ago

      @@AIKnowledge2Go Thanks for the quick reply :D

  • @majortom4338
    @majortom4338 1 month ago

    Hi there, may I ask why you use A1111 and not, for example, Fooocus? Thanks

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      You may... at the time I recorded the video, I wasn't yet convinced by Fooocus. In fact, one of my upcoming projects is an inpainting tutorial where I will introduce Fooocus.

  • @TUYGVISUALCREATIVES
    @TUYGVISUALCREATIVES 1 month ago

    Great explanations, thanks.

  • @ashdey8861
    @ashdey8861 1 month ago

    First learn how to make your video work correctly on YouTube.

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      Thanks for your feedback. If you point me to where the problem lies, I may be able to improve on it in the future.

  • @Beauty.and.FashionPhotographer
    @Beauty.and.FashionPhotographer 1 month ago

    Could there be a new, updated version of this video? It is really great, but several of the buttons and sliders no longer exist since then.....?

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      Hi, thanks for the feedback. I do have plans to create new videos on both inpainting and outpainting. Unfortunately, I can't give an exact date yet.

  • @jevinlownardo8784
    @jevinlownardo8784 1 month ago

    Will Forge or Stability Matrix mess up my A1111 SD 1.5?

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      No, it's a completely different installation. Your A1111 remains untouched.

  • @Nine-Signs
    @Nine-Signs 1 month ago

    If there is anyone out there with a fetish for 7-breasted, 2-headed, 9-legged possibly-women, then given my level of incompetence, I'm your guy.

    • @smudgedog123
      @smudgedog123 9 days ago

      I'm catching up to you. My next album cover art will be 3 half-naked girls, swords firmly held with their feet.

  • @Varibam
    @Varibam 1 month ago

    This is the most informative video about LoRA creation on YouTube. I learned so much about what a good dataset looks like and how to describe the images so the AI can learn. Marvelous work!

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      Glad you enjoyed it!

    • @HiProfileAI
      @HiProfileAI 1 month ago

      Yes, great explanation, thanks. I've been waiting to create a LoRA and have had issues trying to create the captions and dataset. I'm interested in training a style.

  • @TomSmith-yh9ju
    @TomSmith-yh9ju 1 month ago

    What about the differences between a face LoRA and a full-body-shape LoRA?

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      You can train both in a single LoRA, but this requires more delicate image selection and rendering a lot more images. If the body has to be nude, then good luck. You may want to use a textured 3D model for this and render those images in a 3D environment like Blender or Unreal Engine 5. Hm... actually not a bad idea for a new video... *taking notes*. You could also take real photos, but make sure you have written consent from the person to use them.

    • @zombieploios
      @zombieploios 1 month ago

      @@AIKnowledge2Go That kind of video would be really sick, man.

  • @adrianmunevar654
    @adrianmunevar654 1 month ago

    Thank you! I'll try your parameters in local FluxGym. It takes a little more time for training, but I hope it works anyway.

  • @morozig
    @morozig 1 month ago

    That was not very informative. Next time, please make a guide on how to train a LoRA for an original cartoon character unknown to the base model, for example some random anime character. Also, please use a free local trainer like Kohya.

    • @AIKnowledge2Go
      @AIKnowledge2Go 1 month ago

      Thanks for your feedback. Since most of my viewers have an RTX 2080 or lower (I did several polls on this), I decided against local training. I am using the Kohya method in the Civitai online trainer. This can be applied 1:1 to local Kohya and should yield similar results.

  • @AIKnowledge2Go
    @AIKnowledge2Go 1 month ago

    Do you struggle with AI art? Head over to my Patreon to grab free stuff, no membership needed! Ultimate Beginners Guide: www.patreon.com/posts/sneak-peek-alert-90799508 Free workflow guide: www.patreon.com/posts/get-your-free-99183367 Style Collection: www.patreon.com/posts/my-collection-of-87325880 I also offer one-on-one sessions: koalendar.com/e/1hr-1-on-1-stable-diffusion Happy creating, Chris

    • @Nightowl_IT
      @Nightowl_IT 3 days ago

      Hi. I installed it almost like you said. I chose standalone, and now it starts PyTorch and wants to use CUDA. I only have 2 AMD GPUs, but it does not show the 2nd GPU. I might have ZLUDA installed somewhere too, but I chose DirectML, so I don't know why I'm getting this error:

      Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
      Version: f2.0.1v1.10.1-previous-613-gf10214e3
      Commit hash: f10214e358e730b495ce7bfb0edc4846caecf1b2
      Traceback (most recent call last):
        File "...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\launch.py", line 54, in <module>
          main()
        File "...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\launch.py", line 42, in main
          prepare_environment()
        File "...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\modules\launch_utils.py", line 436, in prepare_environment
          raise RuntimeError(
      RuntimeError: Your device does not support the current version of Torch/CUDA! Consider download another version: github.com/lllyasviel/stable-diffusion-webui-forge/releases/tag/latest

      I am trying to run it without the PyTorch CUDA test. It seems to do something, and the web UI is starting:

      Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
      Version: f2.0.1v1.10.1-previous-613-gf10214e3
      Commit hash: f10214e358e730b495ce7bfb0edc4846caecf1b2
      Installing clip
      Cloning assets into ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\repositories\stable-diffusion-webui-assets...
      Cloning huggingface_guess into ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\repositories\huggingface_guess...
      Cloning google_blockly into ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\repositories\google_blockly_prototypes...
      Cloning BLIP into ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\repositories\BLIP...
      ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\extensions-builtin\forge_legacy_preprocessors\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See setuptools.pypa.io/en/latest/pkg_resources.html
      Installing forge_legacy_preprocessor requirements: fvcore, mediapipe, onnxruntime, svglib, insightface, handrefinerportable, depth_anything, depth_anything_v2
      ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\extensions-builtin\sd_forge_controlnet\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API.
      Launching Web UI with arguments: --directml --skip-torch-cuda-test --gradio-allowed-path '...\StabilityMatrix\Data\Images'
      Using directml with device:
      Total VRAM 1024 MB, total RAM 130993 MB
      pytorch version: 2.4.1+cpu
      Set vram state to: NORMAL_VRAM
      Device: privateuseone
      VAE dtype preferences: [torch.float32] -> torch.float32
      CUDA Using Stream: False
      The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
      0it [00:00, ?it/s]
      Using sub quadratic optimization for cross attention
      Using split attention for VAE
      Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
      ControlNet preprocessor location: ...\StabilityMatrix\Data\Packages\stable-diffusion-webui-forge\models\ControlNetPreprocessor
      Loading additional modules ... done.
      2024-11-09 03:31:02,938 - ControlNet - INFO - ControlNet UI callback registered.
      Model selected: {'checkpoint_info': {'filename': '...\\StabilityMatrix\\Data\\Packages\\stable-diffusion-webui-forge\\models\\Stable-diffusion\\sd\\flux1-dev-bnb-nf4-v2.safetensors', 'hash': 'f0770152'}, 'additional_modules': [], 'unet_storage_dtype': None}
      Using online LoRAs in FP16: False
      Running on local URL: 127.0.0.1:7860
      To create a public link, set `share=True` in `launch()`.
      Startup time: 365.2s (prepare environment: 328.7s, launcher: 4.4s, import torch: 15.2s, initialize shared: 0.5s, other imports: 0.7s, list SD models: 0.4s, load scripts: 2.4s, initialize google blockly: 9.5s, create ui: 2.1s, gradio launch: 1.4s).
      Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
      [GPU Setting] You will use 0.00% GPU memory (0.00 MB) to load weights, and use 100.00% GPU memory (1024.00 MB) to do matrix computation.

      But it does not correctly recognize my VRAM. Both GPUs have 8 GB. And it tries to use CUDA.