Use Your Face in AI Images - Self-Hosted Stable Diffusion Tutorial

  • Published: 6 Feb 2023
  • Thanks to Ekster Wallets for sponsoring today's video. Head over to shop.ekster.com/CraftComputing, or use Promo Code: Craft, and get 35% Off your next order!
    How do you make your own AI Generated Images? And how do you train Stable Diffusion to use your face? Today, I'm going to show you how to install and run your own AI Image Generation Server, and teach it who you are.
    But first... What am I drinking???
    From Barsidious Brewing, it's the BLACK Stout Ale. For an 8% stout, this hits WAY above its class in body and flavor. Highly recommended.
    Link to written documentation: drive.google.com/drive/folder...
    Grab yourself a Pint Glass or Hoodie at craftcomputing.store
    Follow me on Mastodon @Craftcomputing@hostux.social
    Support me on Patreon and get access to my exclusive Discord server. Chat with me and the other hosts on Talking Heads all week long.
    / craftcomputing
    Music:
    No Good Layabout by Kevin MacLeod
    Link: incompetech.filmmusic.io/song...
    License: filmmusic.io/standard-license
  • Science

Comments • 235

  • @jdl3408 · a year ago · +49

    Please train the model on Charlie. We need AI generated cat pictures… “Charlie and Rambo fighting a dragon in the forest”

    • @haxboi5492 · a year ago

      And a storymaking ai

    • @fatrobin72 · a year ago · +3

      And start working towards replacing YouTube with AI cat videos?

    • @ProliantLife · a year ago

      @@haxboi5492 ChatGPT can create a rough storyline for you lol

  • @janliberda9493 · a year ago · +66

    To view GPU utilization during AI computation under Windows, you need to switch the graph from 3D to CUDA. Otherwise it may look like the GPU is doing nothing :)

    • @thafex2061 · a year ago · +3

      You have no CUDA if you run it with an AMD graphics card

    • @gmfPimp · 7 months ago · +1

      @@thafex2061 Also, recent changes to Windows 10 removed the CUDA option from the GPU charts. If you have NVIDIA and don't see CUDA, you have to make registry changes to re-enable it.

  • @jonahrothenberger1782 · a year ago · +3

    I spent over 2 hours on other videos, so confused. This video was simple, to the point, and got me started on my AI goals. Thanks!

  • @xero110 · a year ago · +4

    This is an awesome demo and guide. Thanks very much! I will be playing around with this over the weekend. Hopefully I can figure out how to merge different training models to fine tune the results I'm looking for.

  • @VanHonkerton · a year ago · +10

    Pumping out tons of images isn't usually the best route. 25-30 sampling steps and a CFG Scale of 8-9 are generally better, and Restore Faces usually makes things look a lot better too. Adding negative prompts is also very useful, and you can click the little recycle icon next to Seed to see how certain tokens affect the outcome and fine-tune further.
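Those suggested settings map onto the AUTOMATIC1111 web UI's built-in HTTP API. A minimal sketch, assuming the webui was started with the `--api` flag; the prompt text and exact values here are illustrative, not from the video:

```python
# Sketch of the settings above expressed as an AUTOMATIC1111
# /sdapi/v1/txt2img payload (field names follow that API's schema).
payload = {
    "prompt": "portrait of CCJeff, highly detailed",       # illustrative
    "negative_prompt": "bad anatomy, extra fingers, low quality",
    "steps": 28,              # 25-30 suggested above
    "cfg_scale": 8.5,         # 8-9 suggested above
    "restore_faces": True,
    "seed": 1234,             # fix the seed to compare token changes
}

# Sending it requires the server to be running, e.g.:
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
# images = r.json()["images"]
print(payload["steps"], payload["cfg_scale"])
```

With a fixed seed, changing one token at a time in the prompt shows exactly what that token contributes, which is the workflow the comment describes.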

  • @dustinphillips605 · a year ago · +3

    Thanks for the video. This was much easier than I assumed it would have been. This will be a lot of fun to play with in conjunction with an online DnD session I recently started with friends.

  • @bamnjphoto · a year ago · +1

    Thanks, this was right on time. I just installed and needed this tutorial.

  • @ywueeee · a year ago · +6

    You have to play around with CFG when it's your own face. Also, more steps (30 or more) with Euler does help!

  • @bdhaliwal24 · a year ago

    Jeff thanks for another informative and entertaining video!!

  • @WillFuI · a year ago · +1

    Thanks for the reminder I’m going to go change my oil. Hope it won’t take all day tomorrow so I can watch the talking heads.

  • @matthewmaca6675 · a year ago

    very cool, got it working in like 10 minutes

  • @alpenfoxvideo7255 · a year ago · +3

    You can bulk resize images in Windows using the official PowerToys
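For anyone not on Windows, the same bulk resize is easy to script. A sketch assuming Pillow is installed (`pip install pillow`); the function name and folder layout are made up for illustration:

```python
# Square-crop and resize every image in a folder to the 512x512
# that SD 1.x Dreambooth training expects.
from pathlib import Path
from PIL import Image, ImageOps

def resize_for_training(src_dir: str, dst_dir: str, size: int = 512) -> int:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for p in Path(src_dir).iterdir():
        if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(p).convert("RGB")
        # center-crop to a square, then scale to size x size
        img = ImageOps.fit(img, (size, size))
        img.save(out / f"{p.stem}.png")
        count += 1
    return count
```

Run it once over your raw photo folder before feeding the output to the Dreambooth Input directory.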

  • @geeklukeg · a year ago · +1

    Disappointed that we did not get Klingon Jeff, crazy awesome video.

  • @wrmusic8736 · a year ago · +15

    Default Stable Diffusion checkpoints are not very good at specific things; they are too general. You'd get better results using focus-trained checkpoints from CivitAI, like Realistic Vision for example, which was specifically trained to produce more realistic humans (while, naturally, getting worse at everything else). Vanilla SD is basically a jack of all trades/master of none, and it's up to you (or other people) to focus it on a specific task or style.

  • @neon_Nomad · a year ago · +2

    Uses stable diffusion, adopts cat,
    life is good

  • @JimmytheCow2000 · a year ago

    Hi Charlie!!! I love you Buddy! welcome to the channel!

  • @TechnoTim · a year ago · +2

    Awesome video Jeff!

  • @ewookiis · a year ago · +1

    I can vouch for the P8 and M40 doing a good job :)

  • @hevenzgaming · a year ago · +4

    Ugh, I saw those GPUs and had to reassure my little 1050 Ti that we're still good.

    • @delsings · a year ago · +2

      Hahaha, I have one of those too. Lil engine that could!

    • @hevenzgaming · a year ago · +2

      @@delsings 🤣

  • @sylonen · a year ago

    Appreciate this video! Thanks

  • @Dannicus117 · a year ago · +1

    Upvoting for algo, awesome video

  • @gamingwithsparton · a year ago

    I'd love to see a video on using the stable diffusion video/animation extensions. I've been trying to figure that out myself but it's been a bit difficult.

  • @interlace84 · a year ago · +11

    Thanks for the detailed deep dive! Would you be interested in explaining how to host or train your own (Chat)GPT as well?

    • @yobdrzl · a year ago · +2

      This would be good to see

    • @felixjohnson140 · a year ago · +1

      Yeah, if only you had more than $100 million to spare. Just the Andromeda CS-2 supercomputer cost $30 million. Good luck training ChatGPT on your RTX 3060.

    • @interlace84 · a year ago

      @@felixjohnson140 that depends on the size of the dataset you're training on 😁

  • @Dorff_Meister · a year ago

    Thanks! I just started on my SD journey a few days ago and am having fun. I just started training it and am looking forward to the results.

  • @gamingthunder6305 · a year ago · +5

    For better results you want to take at least 100 images of your face, and also of your entire body in different poses. Selfies will only produce selfies.

    • @tylerwatt12 · a year ago

      Yep. I had a bunch of mirror selfies, me holding a phone. And a bunch taken with my ex. So it made the AI very easily make me holding something, like a chocolate bar, or any time there was a second person in the photo, it was always my ex.

  • @L337f33t · a year ago · +9

    I set mine up months ago and still haven’t used all the features yet! It’s a neat thing and I can’t wait to see where it goes. Edit: How did you get it to use both GPUs? From all that I read while setting mine up you couldn't use 2 at the same time?

    • @L337f33t · a year ago · +1

      Has anyone figured out how he got both of them to work at the same time?

  • @tylerwatt12 · a year ago · +1

    So the more oddly specific the prompt is, the better the output image is. I tried things like "tyler driving a racecar" and it was always just a photo of me. But when I "dilute" the power of my name with a bunch of extra words, it puts less emphasis on getting the "tyler" part of the picture right. So instead try "tyler driving a yellow racecar on a nascar track at night".
    It also helps to tell it to make an image "in the style of watercolor". It seems Stable Diffusion has better luck creating good art vs good photorealistic images. The eyes and face are the worst part, always lazy eyes, giant foreheads, etc.

    • @Trainguyrom · a year ago

      I learned similar when I started using a prompt generator plugin. The prompts looked like garbage but the output was way better. Pretty soon I'll just automate myself out of the image generation process because the AI is way better than me 🙃
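The "dilution" trick above can also be done explicitly with the web UI's `(token:weight)` emphasis syntax, which down-weights the subject token instead of just burying it in extra words. A small sketch; the helper function and the 0.8 weight are illustrative assumptions, not something from the video:

```python
# Build a prompt that de-emphasizes the trained likeness token
# using AUTOMATIC1111's (token:weight) attention syntax.
def weight(token: str, w: float) -> str:
    return f"({token}:{w})"

prompt = ", ".join([
    weight("tyler", 0.8),            # less emphasis on the likeness
    "driving a yellow racecar",
    "on a nascar track at night",
    "in the style of watercolor",
])
print(prompt)
# → (tyler:0.8), driving a yellow racecar, on a nascar track at night, in the style of watercolor
```

Weights above 1.0 emphasize a token the same way, which is handy when the likeness is too weak rather than too strong.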

  • @Prophes0r · a year ago · +2

    One of those original "Jeff from Craft Computing" images is VERY close to your facial features and bone structure.
    Close enough that it is a totally believable "This is me from my early college days" picture.

    • @CraftComputing · a year ago · +4

      So does Stable Diffusion know who I am, or do I look way more like the average Tech Guy than I'd like to admit?

    • @Prophes0r · a year ago · +2

      @@CraftComputing I feel like that is a question for a statistician to answer.
      I honestly don't know what ratio of light skin + red/brown hair + glasses + beard should be expected from those keywords. But the results you got feel rather on the high side.
      To be clear, I'm talking about the one with the pink background at 12:26.
      It isn't perfect, but structurally, it is eerily close. Maybe a "This is me as a chunky highschooler, before I could grow a mustache" pic.

  • @swyftty2 · a year ago · +2

    You probably need libraries of each concept you wish to merge yourself with, aka a Star Trek library and the specific characters you want to look like, or the background you wish to link with. Also more face pics from additional angles; yours were a little flat-facing. The more libraries, the more you can mesh... but I haven't done it before. Let me know if this comes out true.

  • @justinus_stoicus · a year ago

    entertaining and informative

  • @_shadow_1 · a year ago · +1

    So if I wanted to use an image or a basic drawing directly as an input along with a prompt, just like the "diffuse the rest" variation, how would I go about this? I have found that being able to pick options and re-input them, with a text prompt I can actively change as needed, is a pretty effective way to vastly increase the quality of the images. This also has the benefit that I don't need to pre-train the model nearly as much, if at all in some cases. Also, being a human filter is kind of cool, because you get to learn how those algorithms work and engineer some amazing stuff using that knowledge.
    Maybe someday someone can build an algorithm to utilize dynamic prompt switching in an intelligent way to make amazing and original pieces of art which are unique and beautiful.

  • @Dorff_Meister · a year ago

    I used Chocolatey to install git and python - this is now my preferred way of installing / updating lots of Windows software.

  • @mpxz999 · a year ago

    I'M CRYING LOL
    This is incredible!
    Omg this program makes some of the funniest stuff AHhhhhhhHHHHHH

  • @TheTrulyInsane · a year ago · +1

    Honestly, I was thinking about setting this up; after seeing the results, I'll wait a few years.

    • @CraftComputing · a year ago · +5

      Remember, I was generating less than 5 at a time, and didn't try refining my prompts. I've gotten some FANTASTIC images playing around with it this week.

    • @AlexTheStampede · a year ago · +7

      Jeff has just started, and as such he's terrible at writing prompts. See the blank text field below the prompt? That's the negative prompt, or in other words what NOT to do. Stuff like "bad anatomy, low quality, cropped, out of frame, extra limbs, extra fingers, missing limbs, missing fingers, bad face, bad mouth, perfect skin" and so on. His prompts were weak; a better one would've been along the lines of "portrait of CCJeff in a Star Trek uniform on the Enterprise bridge, highly detailed, detailed background, intricate detail".
      And then he didn't touch any of the settings! 20 steps is fast but not great, and Euler a works fine but is meh. I would've tried this: DPM++ 2M Karras, 20 steps, then after finding one that looks good I'd reuse the seed (the green recycling icon), toggle face restoration and run it again. Is it better? Possibly. Does Euler a give a better one? There's also the sampler right after 2M Karras that is worth checking. Once I have the best one, I turn on the high-resolution fix, pick the R-ESRGAN 4x+ upscaler, set my steps up to, let's say, 60, and let it churn. I'm going to end up with a 1024 x 1024 picture with a ton of extra detail. Lovely!

  • @dragodin · a year ago

    Thanks Jeff, this is exactly what I needed! Is there a way to get better-looking results, though? I've been spoiled by MidJourney and anything less looks plain bad. Openjourney maybe?

  • @KissesLoveKawaii · a year ago · +7

    Normal models are not cutting it anymore; custom models and merges have been the meta for a few months now. And if you're training only one person, a simple embedding/LoRA will suffice and is FAR more malleable, since it can be applied to any custom model.

    • @CraftComputing · a year ago · +7

      I'm 100% new to the space, and said as such at the beginning of the video. This was meant as an introductory tutorial to get started. The sky is definitely the limit when it comes to configuration, models, etc.

    • @L337f33t · a year ago

      @@CraftComputing How did you get the GPUs to work in parallel? I have two 1070s and can only use one at a time.

    • @adrianli7757 · a year ago

      Got any more tips for creating your own embedding/lora?

  • @fuzzbawls6698 · a year ago · +1

    With 24GB of VRAM, you can greatly speed up your image generation by increasing the "Batch Size" rather than "Batch Count". Batch Count is how many times you want to loop through the prompt in series; Batch Size is how many images you want to generate in parallel. The higher the Batch Size, the more VRAM is required, but it is generally more efficient than only using ~1% of your VRAM 30 times over in a loop one by one.
    Also, use Xformers to increase efficiency even further!

    • @tylerwatt12 · a year ago · +1

      VRAM mainly helps with resolution; my 3080 couldn't handle resolutions higher than 1024, but then again you run into issues where faces repeat if you go above 512px.

    • @fuzzbawls6698 · a year ago · +2

      @@tylerwatt12 Hires Fix will sort out most of the repeating/tiling issues when targeting an image resolution larger than what a model was trained at
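The batch-size vs. batch-count tradeoff described above is just arithmetic: pick the largest batch size your VRAM allows and derive the loop count from it. A toy planner; the GB-per-image figure is an assumed illustrative number, not a measurement:

```python
# Given a VRAM budget, trade serial Batch Count for parallel Batch Size.
import math

def plan_batches(total_images: int, vram_gb: float, gb_per_image: float = 2.5):
    # largest parallel batch the card can hold (at least 1)
    batch_size = max(1, min(total_images, int(vram_gb // gb_per_image)))
    # how many serial loops are still needed to reach the total
    batch_count = math.ceil(total_images / batch_size)
    return batch_size, batch_count

# 24 GB card, 30 images wanted: a few wide batches instead of 30 loops of one
print(plan_batches(30, 24.0))  # → (9, 4) with the assumed 2.5 GB/image
```

Real per-image VRAM cost depends on resolution, model, and optimizations like xformers, so in practice you find the ceiling by nudging Batch Size up until you hit an out-of-memory error.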

  • @Morimea · a year ago

    Good tutorial!
    It's crazy how fast and how big Stable Diffusion has gotten in such a short time.
    An overview of SD's features and available plugins/addons would make a good next video.

  • @gustersongusterson4120 · a year ago

    Mario with a gun made my day

  • @codigoBinario01 · 10 months ago · +1

    Thanks for the video! I'm looking for cheap GPUs for LLMs, to fine-tune Llama 2. That needs a lot of RAM on the GPU (>16GB for a small model), and that's how I arrived at your channel, looking for the M40.
    My main concern is computation power: I have seen tests with the 3090 and 4090. Have you tested any large model to see if the cores are able to deal with these new NN models?
    Thanks in advance ;-)
    (Great channel by the way, funny things that I have added to my watch list)

  • @heclanet · a year ago

    You are thaaa best!
    Let's be honest, a couple of those pictures were of Jeff on drugs!
    Greetings from Paraguay

  • @TizianaTirthaG · a year ago

    you made me laugh so much!!!!

  • @willfancher9775 · a year ago

    For some reason I really enjoy the fact that it briefly became obsessed with giving you absolutely nightmarish teeth.

  • @grasshopper1g · a year ago · +1

    Let's do it!

  • @andrewbrown5191 · a year ago

    Cute!

  • @RobertJene · a year ago

    15:46 oh you're using Stable Diffusion in Firefox.
    I instinctively did the same when I set mine up.
    We must like stability and less memory overhead or something like that.

  • @arthuralford · a year ago

    Cat created delays are always understandable. Charlie and Rambo will work things out. Someday.

  • @whosscruffylookin95 · a year ago · +1

    Bald Sisko-Jeff is the stuff of nightmares

    • @CraftComputing · a year ago · +2

      There is so much nightmare fuel in this video. And you only got to see the ones that made the video.

  • @novantha1 · a year ago

    You know, I wonder if at a certain point AI art just becomes...Art...? Like, you can throw a basic prompt in, but you won't get a high quality work out of it, so you have to adjust the specific model you're using, potentially use stylistic / concept focused LoRA, Dreambooth finetuning, techniques for framing such as controlnet, as well as some somewhat artistic solutions like using multi-net with a controlnet wireframe and specific 3d models to govern more precise elements of the final img2img output (reminiscent of how Monty Oum managed the cell shading in early RWBY), you need to manage color palettes, and prompt creation isn't just a "one and done" thing, it's a range of options that require some degree of understanding how elements will interact with one another.
    But, that's today. I suspect the generators of maybe not tomorrow, but of the next decade will be remarkably easier to use, and much more influential.

  • @UnwantedSelf · a year ago · +3

    Your CUDA graph would have been pinned at 100%.

  • @elvisbeststuff · a year ago

    I laughed so hard looking at the AI photos of your face, Jeff! The Super Mario ones were my favorite though!

  • @PrivateUsername · a year ago

    Nice.

  • @delsings · a year ago · +1

    I'm an artist who had to pause my business in mid 2018 due to physical issues, and have been trying to wrap my brain around making an AI learn my styles. Is this possible with this software, or is it only for learning faces? Really cool video!

    • @AlexTheStampede · a year ago · +1

      I don't know how, but training on a style is very much possible. I downloaded a few models that do wonders: one is in the style of the Star Wars CGI cartoon and can be scary good, one is based on the Simpsons and is more of a miss than a hit, and another I really like is based on a specific artist and usually gives me very good-looking anime pictures clearly inspired by that style. I've seen various ways to do it: a checkpoint, a textual inversion, a hypernetwork, and another one I forgot. The checkpoint has a downside: it's trained on a specific set and that's it. The others, however, are applied on whatever checkpoint you are using and as such can be more flexible, but results might vary wildly.

    • @delsings · a year ago

      @@AlexTheStampede yeah I create in a few different styles myself, just trying to figure out what to use so I can configure each of them. Been drawing since a kid, and professionally since 2010 (I'm an 80s baby). The way you are describing the different style sets is my goal yes
      Edited a typo

    • @pablito5927 · a year ago · +2

      @@delsings I recommend training it with as many of your drawings as possible, and maybe adding some from the internet in the same style. If you really want to make something good, you should try training it on tons of art and then giving your style a really high value, so everything you generate will be able to fill in gaps it didn't learn from your drawings but still look like it's yours, if that makes sense.

    • @delsings · a year ago

      @@pablito5927 Ty for the advice, I appreciate it. For this particular project I intend to only use my art, but I do work in many different styles and plan to upload them in categorized batches to differentiate, even including unfinished sketches and doodles of different categories. However, I do not intend to use anyone else's art, since I intend this to be a personal helper for my art business, so I do not want to plagiarize other artists in any way from my input end (I understand it isn't a perfectly moral system, since it likely was developed from swaths of online content, but I do not intend to add to that on my end). Pre-business alone (before 2010) I would not doubt I'd have at least a thousand samples to input, as I will also be categorizing my early years into it, aka children's drawings/teen/young adult etc. I actually started my business in my upper 20s. :) Since my business started, I've done a huge portfolio of work too, so I'm really hopeful about it. Just gotta find the right software to utilize what I'm needing. This is something I've spent way too much time ruminating on, tbh.

    • @FelixTheAnimator · a year ago · +1

      I'm in the same boat really. And I don't want to use anyone else's art--at least no one living.

  • @dionelr · a year ago · +3

    Is “financial advisor” referring to “Mrs Craft Computing”?

    • @fierce134 · a year ago

      Lol, exactly what I was thinking

  • @Catge · a year ago

    Unexpected cats are my favorite

  • @OBERHighCommand · a year ago

    Now, would this work for achieving a particular style? Such as a photographer training it on portraits of various people to model the color grade and lighting style?

  • @Miyconst · a year ago · +1

    Like for the cat!

  • @jpconstantineau · a year ago

    lol Star Trek Jeff. A new definition for Red Shirt Jeff. Watch out!

  • @stopspyingonme9210 · a year ago · +1

    I honestly have a genius use for this. Maybe not genius, but what if I set up a shop with one of these servers attached to a poster printer? Imagine a booth in the mall where you could walk out with a one-of-a-kind poster. I quite love that idea, and mall rent might be cheap atm.

  • @fractanimal2527 · 5 months ago

    Hey man, thanks for the vid. I love it. I have a quick question, please.
    I followed your steps, and the model just creates identical images to the ones I uploaded. I've managed to get some good results with img2img, but txt2img just replicates the images I used to create the person model.
    I tried merging it with some other checkpoints, but then the face loses its character, and finding a balance of an accurate face with another model, at all different ratios, hasn't proven successful.
    Do you have any advice/tips?

  • @DIYDaveOK · a year ago · +1

    The model's preoccupation with bizarre teeth is...curious.

  • @WilfEsme · a year ago

    I'm using BlueWillow with references as well, but I think I'll stick with my privacy and use public photos instead.

  • @AS-bm3sk · 7 months ago

    I followed your steps, but when I get to the Dreambooth section I get Settings (which contains Models, Concepts and Parameters) and then Output, but no Input tab. I was wondering if anyone else has experienced this, or if I've made some sort of mistake, or maybe I'm just not using the same thing.

  • @yourt00bz · a year ago · +1

    The AI images of Jeff seem like
    Hurtful edits from Jeff Geerling

  • @michaelrichardson8467 · a year ago · +1

    CCJeff is more like Crafty Meth Tips 😆

  • @danney777 · a year ago

    Could this be set up with multiple AMD GPUs? I happen to have multiple spare RX 580s, and this would be a fun test-lab experiment for me to try.
    EDIT: Additionally, I am totally aware that AMD will be less efficient; spare AMD cards are all I have on hand, I don't feel like buying more hardware at the moment, and it's for fun and learning, so efficiency isn't my primary concern atm.

  • @guy_autordie · a year ago

    hello Charlie!!

  • @ianvszo · a year ago

    Could you make a video about Colossal-AI? I want to see how to use it and stuff.

  • @TheBinklemNetwork · a year ago · +1

    Perhaps if you trained with pictures of yourself from afar as well as "medium" distance, there may be more variety of how you get rendered. That felt weird to type...

  • @jonathaningram8157 · a year ago

    I don't get it. I do exactly that, but in the end if I enter my prompt, literally nothing happens. It's as if the model didn't change at all. I'm wondering if there is an option I enabled that doesn't work.

  • @zepesh · a year ago · +1

    This stuff is a rabbit hole and evolves so fast that it's hard to keep up.

    • @neon_Nomad · a year ago

      Time for the neuralink upgrade amiright

    • @Prophes0r · a year ago · +1

      I've been out of the ML game for almost a year and a half at this point.
      (Illness, combined with me giving away the 4x 1660s from my server to people for Christmas, when you literally couldn't buy GPUs.)
      In less than 2 years, I literally don't even recognize the stuff people are doing anymore. It is wild.
      The craziest thing I've seen so far, more so than even ChatGPT and Stable Diffusion, is the temporally stable "style filter" for REAL-TIME video.
      The demo I've seen was able to take low-resolution footage from GTA5 and convert it to what looked like live footage from an alternate-universe LA.
      And it was fast enough to do it in real time.

  • @metalmanexetreme · a year ago

    Can you do a video on setting up Pygmalion (a self-hosted LLM)?

  • @Thewickedjon · a year ago

    Nintendo's lawyers are coming for you

  • @xxexexex · a year ago

    If you wear glasses only sometimes, should all of your images be of you wearing glasses, or can you have some without glasses?

  • @brandonheath6713 · a year ago · +2

    Why does my UI look so much different than yours? Like, when I install Dreambooth I don't get a Dreambooth tab; I get a "Train" tab that doesn't have the options you have.

  • @generalawareness101 · a year ago

    Been watching you for a while now, and I almost bit the bullet on obtaining one of these, but there are two issues with them. Number one is the lack of tensor cores. This is a killer for me, as TCs are just that good, and now with LoRA training and delta training we need BF16 to take full advantage. When even a T4 lacks BF16, I had to bow out. Any card that lacks BF16 and TCs is a DOA card for me.

  • @csharpner · a year ago

    You probably know by now, but you need to crank up the sampling steps to get better results.

  • @DanielFSmith · a year ago

    Funny how the images of Jeff all looked like Jeffs, but not really Craft Computing Jeff.

  • @AG-ur1lj · 7 days ago

    Bro, if you've got extra GPUs cluttering up your space, feel free to send 'em my way. I've been dying to get my hands on enough power to test out my math & coding

  • @Reggieincontrol · a year ago

    How do we get the dark UI for SD? When I installed mine it was white.

  • @pragavirtual · a year ago

    The plan is working. Not that it gives good results, but oh well, it's working...

  • @Theinnersearcher · 6 months ago · +1

    My Stable Diffusion doesn't look anything like this... GOD DAMN IT!

  • @sergeykudryashov9201 · a year ago

    I get an error when I click on Train to start processing images: Exception training model: ''NoneType' object is not subscriptable'.
    What should I do?

  • @hacked2123 · a year ago

    Rofl. ET in Jeff's hair there

  • @dadogwitdabignose · a year ago

    20:53 that is utterly terrifying

  • @eyturkuyann · a year ago

    Can we add models and use them with DreamShaper?

  • @marcusmeins1839 · 6 months ago

    The Dreambooth extension of my Stable Diffusion doesn't have Input, nor the Scheduler. Instead of Scheduler, it says Resources.

  • @gghosty1326 · 3 months ago

    Help, it keeps saying "create or select a model first" when I press the Train button 😅

  • @MadsonOnTheWeb · a year ago · +1

    Great! Not that hard. Would it be the same for AMD?

  • @Sunlight91 · a year ago

    I want to try this on my AMD GPU.

  • @thafex2061 · a year ago

    You have to increase the number of steps for much better results.

  • @jordondavidson3405 · a year ago · +1

    Cool stuff Jeff! For the record, Euler (14:23) is pronounced "Oiler" (/ˈɔɪlər/ OY-lər). It's named after Leonhard Euler, the Swiss mathematician.

    • @CraftComputing · a year ago

      Well then Leonhard should have pronounced it right :-P

    • @jordondavidson3405 · a year ago

      @@CraftComputing For Sure! Back in university we nearly named the math society hockey team the "Eulers" knowing full well nobody else would get the joke.

  • @gamingthunder6305 · a year ago

    Don't use an M40. Nothing but problems with the card, and even the P40 I have has some issues with some extensions, particularly LoRA training.

  • @neon_Nomad · a year ago

    Sweet, this will be perfect for my sockpuppet account

  • @nochan6248 · a year ago

    PROBLEM HERE
    Managed to train a face.
    Tried to train a 2nd face, but after I press Create Model in dreambooth tab it gives this error: Missing model directory, removing model: F:\ai\stable-diffusion-webui\models\dreambooth\miha\working\vae

    • @nochan6248 · a year ago

      Seems deleting the initial face model helped create the new face model. Kind of annoying.

  • @reLi70 · a year ago

    Sorry for the probably stupid question, but I am really new to enterprise GPUs... do you need some licenses from NVIDIA in order to use them?

    • @Arachnoid_of_the_underverse · a year ago · +1

      No, but Jeff doesn't like servers with licenses for home use anyway.

    • @CraftComputing · a year ago · +1

      No licenses are needed for GPUs to run on bare metal like this. Virtualizing CUDA for use in cloud gaming / rendering is another story.

  • @Rewe4life · 4 months ago

    Heyho,
    I have built myself a server for AI.
    Well... at least I tried. 😅
    When I push the power button it does not do anything. No CPU LED, no RAM LED, no beep, no fan spinning, nothing. 😢
    I have tried without RAM and GPU, same thing.
    I have tried with a different power supply, same behavior.
    I have changed the BIOS battery to a new one, and disconnected everything except the CPU and its fan. Nothing helps.
    Do you have any idea what I might have missed and what might help?
    My motherboard: ASUS P9X79 WS
    CPU: Xeon E5-2695 v2

  • @timcunkle4508 · a year ago · +3

    GPT-NeoX 20B next, right? Right???

  • @bjackman16502 · a year ago

    Please Please let it do some of Rambo! Giant Rambo vs Jedi... Rambo chasing Rats of NIMH...

  • @dadogwitdabignose · a year ago · +2

    16:25 Am I having a stroke, or did he do the same thing twice?

    • @thevaldis1167 · a year ago

      That's what I was thinking, and I was having second thoughts about whether I missed something. Is it the directory where the output will be stored?