This NEW open-source image generator hits hard - AuraFlow review

  • Published: Sep 8, 2024

Comments • 255

  • @theAIsearch
    @theAIsearch  Месяц назад +14

    Thanks to our sponsor Abacus AI. Try their new ChatLLM platform here: chatllm.abacus.ai/?token=aisearch

    • @LouisGedo
      @LouisGedo Месяц назад

      👋

    • @BlakeEM
      @BlakeEM Месяц назад +1

      Am I the only one that noticed that CalicoMix is not SDXL? If only we had an AI assistant to double check these things!

  • @maickelvieira
    @maickelvieira Месяц назад +28

    It is crazy how this just came out of the box with so much potential; really wanna see what this will become in time.

    • @theAIsearch
      @theAIsearch  Месяц назад +3

      yep, looking forward to what the open source community can build from this

    • @handlenumber707
      @handlenumber707 Месяц назад +1

      @@theAIsearch Just tried it. The first attempt, despite telling it not to put random figures in the background or create mutant hands, or multiple limbs, did just that. The second attempt surprisingly obeyed some commands, but did the same, as did the third attempt. It's just dumb.

  • @lanzer22
    @lanzer22 Месяц назад +42

    The funniest thing about SDXL is that a My Little Pony model has become the gold standard for anime generation, and consequently most anime XL models are based on Pony XL.

    • @KibitoAkuya
      @KibitoAkuya Месяц назад +5

      OH, so that's what they mean when they say pony based

    • @hipjoeroflmto4764
      @hipjoeroflmto4764 Месяц назад +3

      Makes sense those mlp characters have giant anime eyes

    • @hipjoeroflmto4764
      @hipjoeroflmto4764 Месяц назад +1

      So it should be good at gmod images too then

    • @lanzer22
      @lanzer22 Месяц назад

      @@hipjoeroflmto4764 The author did a diligent job in training pretty much all the popular anime characters and did a great job in tagging the training data correctly, and as a result you can give keywords to numerous character references, styles and emotions and get great results out of the box.

    • @oat1000
      @oat1000 Месяц назад +6

      ​@@hipjoeroflmto4764 its because the person that made the model has a good training dataset, nothing to do with eye size

  • @amitnishad0777
    @amitnishad0777 Месяц назад +17

    Following you for months
    Love your work
    Thanks for keeping us updated about the latest AI technology

  • @kamillatocha
    @kamillatocha Месяц назад +195

    a new KING .. cant do NSFW .........

    • @thegameboyshow
      @thegameboyshow Месяц назад +34

      until... you make it

    • @theAIsearch
      @theAIsearch  Месяц назад +55

      😭

    • @Doctor_Sex_0
      @Doctor_Sex_0 Месяц назад

      Breh

    • @silvermushroom-gamifyevery6430
      @silvermushroom-gamifyevery6430 Месяц назад +126

      > open source image gen
      > cant do NSFW
      The universe is morphing itself to make this statement false as we speak

    • @armondtanz
      @armondtanz Месяц назад +54

      I'm sick of censorship. I couldn't even do a hostage scene in Luma, it kept knocking it back. Just ridiculous if you're into horror stuff or wanna create a horror/thriller trailer.

  • @mirek190
    @mirek190 Месяц назад +4

    Finally something is happening in the world of picture generation!
    For more than a year only SD and SDXL existed. Lately we got SD3 (lol), Kolors, and AuraFlow.

  • @gameboardgames
    @gameboardgames Месяц назад +36

    Right on! As a solo indie game dev, it's incredible what I can do with AI tools now versus even just a year ago. Total... game changer.

    • @handlenumber707
      @handlenumber707 Месяц назад +4

      Just tried it. The first attempt, despite telling it not to put random figures in the background or create mutant hands, or multiple limbs, did just that. The second attempt surprisingly obeyed some commands, but did the same, as did the third attempt. It's just dumb.

    • @fritt_wastaken
      @fritt_wastaken Месяц назад

      ​@@handlenumber707skill issue tbh

    • @losing_interest_in_everything
      @losing_interest_in_everything Месяц назад +7

      Technical artist here. Blender for 3D and vector graphics for 2D. It's straightforward. Relying on AI might actually consume more of your time and effort.

    • @handlenumber707
      @handlenumber707 Месяц назад +1

      @@losing_interest_in_everything Generative AI is really for dummies. I mean, come on. In order to get anything decent out of it, you have to create the artwork you want first, feed it to the machine, wait for its output, then correct what you get back, paying for your trouble. No decent artist will bite.

    • @losing_interest_in_everything
      @losing_interest_in_everything Месяц назад

      ​@@handlenumber707 Yes! In my company, I assist HR department when we have to recruit 3D artists, sound designers, and technical artists. Since the rise of AI, my job has become a lot harder. In 2018, I rejected people because their skills didn't fit our needs. Now, I reject people because they have no skills. The worst are the arrogant ones who think they can be our "thinking guys." These candidates make my day better because I get to crush their hopes of being "prompt engineers."
      This year, I've already rejected around 20 of them, and it was deserved!

  • @lukasbeyer2649
    @lukasbeyer2649 Месяц назад +2

    the stable diffusion zebra would totally be something we'd see on an album cover

  • @orthodox_gentleman
    @orthodox_gentleman Месяц назад +3

    I am a landscape architect and I have been researching the best model to help me! I have found that Midjourney so far is the best for me but after watching your video I am confident in trying SD for landscape design images!

    • @theAIsearch
      @theAIsearch  Месяц назад

      Best of luck! SD with controlnet is great for landscape design

  • @AbubakarMunir712
    @AbubakarMunir712 Месяц назад +1

    12:00 The way it separated coconut and drink is quite funny😂

  • @RedSpiritVR
    @RedSpiritVR Месяц назад +40

    SDXL is king because it's unfiltered
    It can do things better; trying to filter things in any sense removes its abilities and strengths.
    Always go unfiltered.

    • @jtjames79
      @jtjames79 Месяц назад +2

      I also don't care that much about zero shot.
      I can break up the composition, do infill and outfill, lighting and background separately.
      There's a certain minimum number of steps you need to do yourself otherwise it's not really creative. IMHO YMMV.

    • @hurktang
      @hurktang Месяц назад +7

      it's filtered, just less heavily. If you try to make NSFW, you'll be much better off with a custom model based on SD 1.5

  • @fzigunov
    @fzigunov Месяц назад +5

    I don't understand why they don't train the model with the NSFW images and then censor them after generation. It would make for far better models.

    • @drdca8263
      @drdca8263 Месяц назад

      Less risk of letting some through.

  • @gRosh08
    @gRosh08 Месяц назад +2

    Cool. Waiting for the beginners tutorial.

  • @Imdrekaiser
    @Imdrekaiser Месяц назад +3

    So hype for a new competitor in the field

  • @FluffRat
    @FluffRat Месяц назад +1

    High prompt adherence and NSFW restriction are mutually exclusive. The more precisely you can steer the model the easier it is to drive around their speedbumps.

  • @cmdr_talikarni
    @cmdr_talikarni Месяц назад +4

    In my testing, I've found most of them don't work well with natural human language. They won't understand wording like "next to it" or "by its side". The AI processor reads "next to it" and just places the coconut and/or drink next to whatever: maybe the tree, maybe the water, maybe the plant to the side. The generation doesn't associate "it" with anything the way a human brain would. Sometimes it may get it right by chance; other times, as with SDXL and SD3, it sees "coconut" and "drink" separately and randomizes the placement of the items.
    Using comma-delimited prompts with direct wording is, I've found, still the best method, even with new tools. Don't use indirect descriptors like "there", "it", or similar wording, and don't use negatives like "no item" in the positive prompt.
    Polar bear wearing Hawaiian shirt, wearing sunglasses, sitting in hammock tied between palm trees, beach scene, coconut drink with red umbrella on ground

    • @theAIsearch
      @theAIsearch  Месяц назад

      Interesting, thanks for sharing!

    • @cekuhnen
      @cekuhnen Месяц назад

      Yeah, that is very true - you very quickly run into these models not doing what you plan, despite all the hype.
      AI should really lean more on programmed logic - right now it doesn't.
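
A minimal sketch of the comma-delimited prompting style described in the thread above, assuming the Hugging Face diffusers library and the public SDXL base checkpoint; the positive prompt is the example from the comment and the negative prompt is illustrative:

    import torch
    from diffusers import StableDiffusionXLPipeline

    # Public SDXL base checkpoint (any SDXL checkpoint loads the same way).
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # Direct, comma-delimited wording in the positive prompt; things you do NOT want
    # go in the negative prompt rather than as "no ..." phrases in the positive one.
    image = pipe(
        prompt=(
            "polar bear wearing Hawaiian shirt, wearing sunglasses, "
            "sitting in hammock tied between palm trees, beach scene, "
            "coconut drink with red umbrella on ground"
        ),
        negative_prompt="extra limbs, deformed hands, text, watermark",
        num_inference_steps=30,
        guidance_scale=7.0,
    ).images[0]
    image.save("polar_bear.png")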

  • @juschu85
    @juschu85 Месяц назад +2

    Even if you don't plan to generate NSFW pictures, it looks like you have to wait for user-trained versions of AuraFlow and SD3 if you want to generate something that involves any form of human limbs.

  • @luisbriceno5242
    @luisbriceno5242 Месяц назад +3

    If it is open source, then I hope the community will be able to fine-tune it, unlike SD3.

    • @theAIsearch
      @theAIsearch  Месяц назад +2

      im expecting a ton of fine tuned models for auraflow to come out soon!

  • @moopindomoop7442
    @moopindomoop7442 Месяц назад +1

    Thanks for your video. I watched just a couple of minutes, but when the day comes and I need a picture of a zebra playing an ice piano on a hilltop, I will definitely watch the rest…

  • @KillbackJob
    @KillbackJob Месяц назад

    I swear to God this is the best AI channel. Peace be upon you, good sir

  • @eSKAone-
    @eSKAone- Месяц назад +19

    SDXL baby!

  • @AnonymousFloof
    @AnonymousFloof Месяц назад +7

    OMG! This Ai takes the cake for me for one reason..
    I asked it to generate a centaur and it delivered first try, TWICE! and with all features I asked for! Do you realize just how crazy this is? if not, ask any other AI image generator the same thing. I dare you lol

    • @theAIsearch
      @theAIsearch  Месяц назад +1

      yes, this one is great for prompt following

    • @handlenumber707
      @handlenumber707 Месяц назад

      Just tried it. The first attempt, despite telling it not to put random figures in the background or create mutant hands, or multiple limbs, did just that. The second attempt surprisingly obeyed some commands, but did the same, as did the third attempt. It's just dumb. EDIT: There's no negative prompt box. Doesn't really matter as I removed those instructions and it still failed.

    • @drdca8263
      @drdca8263 Месяц назад +4

      @@handlenumber707 When you say you told it not to include those things, do you mean in a “negative prompt” field, or that you put something like “no [thing you don’t want]” in the prompt?

    • @handlenumber707
      @handlenumber707 Месяц назад

      @@drdca8263 There's no negative prompt box. Doesn't really matter as I removed those instructions and it still failed.

    • @fritt_wastaken
      @fritt_wastaken Месяц назад

      Pony diffusion makes centaurs just fine.
      Ask my Lillia folder ..What?

  • @handlenumber707
    @handlenumber707 Месяц назад +4

    Just tried it. The first attempt, despite telling it not to put random figures in the background or create mutant hands, or multiple limbs, did just that. The second attempt surprisingly obeyed some commands, but did the same, as did the third attempt. It's just dumb.

    • @fritt_wastaken
      @fritt_wastaken Месяц назад +1

      skill issue

    • @handlenumber707
      @handlenumber707 Месяц назад +1

      ​@@fritt_wastaken It's like some con artist trying to con an actual artist to pay to hand over his work. It might work on a child, perhaps, maybe.

    • @joelrobinson5457
      @joelrobinson5457 Месяц назад

      ​@@handlenumber707these channels don't want bad or realistic feedback, they just want quick positive build up, don't trust them for anything reliable.

    • @handlenumber707
      @handlenumber707 Месяц назад +1

      @@joelrobinson5457 I trust NO YouTube channel. They exist to be corrected only.

    • @Dave-rd6sp
      @Dave-rd6sp Месяц назад +1

      It's still very early in its training, thus the 0.1. Given that it's a 6.8B param model, almost triple SD3, fine tuning should be a beast.

  • @generalawareness101
    @generalawareness101 Месяц назад +7

    None of these are worth a damn until we can train them locally. That is when the magic happens.

    • @user-qn6kb7gr1d
      @user-qn6kb7gr1d Месяц назад +2

      Generation uses a tiny fraction of compute needed for training. You wouldn't live long enough to train anything decent in this domain on your home pc.

    • @generalawareness101
      @generalawareness101 Месяц назад

      @@user-qn6kb7gr1d Then it is a dead mofo.

  • @TheMadSqu
    @TheMadSqu Месяц назад

    Great video and model. THX for sharing. The most important information comes at about 20 min in the video btw :D

  • @JonnyCrackers
    @JonnyCrackers Месяц назад

    It's not that good currently, but I'm glad to see another open source model come out.

  • @deeceehawk
    @deeceehawk Месяц назад

    Thank you so much for your fantastic review comparison! I was wondering about this new model, and obviously it looks pretty fantastic when it comes to following the prompt! I have been trying to do cartoons style stories, and this will obviously help tremendously! Thanks again :-)

    • @theAIsearch
      @theAIsearch  Месяц назад

      You're welcome, and thanks for sharing!

  • @stephaneduhamel7706
    @stephaneduhamel7706 Месяц назад +3

    25:30 CalicoMix is an SD1.5 finetune, not SDXL. Which is even more embarrassing for SD3 and AuraFlow.

  • @rawkeh
    @rawkeh Месяц назад +1

    So, AuraFlow for initial generation, SDXL for refining

  • @Muz889
    @Muz889 Месяц назад

    I hope someone makes something like Fooocus for Aura flow.

  • @twindenis
    @twindenis Месяц назад

    Don't forget that SD, likely starting with SD3(?), is going to have a not-so-open-source-friendly structure, so you may as well test DALL-E 3 too.

  • @Iqury
    @Iqury Месяц назад

    I feel like SD just focused on realism while AuraFlow on accuracy

  • @victk64
    @victk64 Месяц назад +3

    this thing can't generate a simple image of an EMPTY room! SD 2.1 can do this easily...

  • @IMR.Project
    @IMR.Project Месяц назад

    love it

  • @loth4015
    @loth4015 Месяц назад +3

    I tested it and it's not that great. It didn't follow my prompt at all. All the others I tested followed my prompt.

    • @theAIsearch
      @theAIsearch  Месяц назад +2

      do you mind sharing the prompt? i find that for simple prompts sdxl does a much better job. aura excels in understanding very complex prompts like the ones i demoed

  • @b3arwithm3
    @b3arwithm3 Месяц назад

    In your opinion, which are the good ones for generating scenes with multiple characters consistently?
    I find that most struggle with comprehension even for a single character. They focus too much on rendering quality, and it takes hundreds of attempts and tweaks to produce the right content in the picture. Most of the time we achieve it by luck, and it is hardly reproducible.

  • @NakedSageAstrology
    @NakedSageAstrology Месяц назад +3

    All good except one point. Does a Zebra have White Stripes, or Black Stripes? Why?

    • @theAIsearch
      @theAIsearch  Месяц назад +1

      😵‍💫

    • @fritt_wastaken
      @fritt_wastaken Месяц назад

      The answer is black stripes if I remember correctly.
      There is a rare genetic anomaly in zebras that removes the formation of their stripes, which makes them white

    • @quercus3290
      @quercus3290 Месяц назад

      dazzle camo

    • @JustFor-dq5wc
      @JustFor-dq5wc Месяц назад +1

      Zebra is black with white stripes. Why? Evolution. To be precise camouflage, thermoregulation, and some other things.

  • @christopherneufelt8971
    @christopherneufelt8971 Месяц назад

    I tried the prompt: portrait of Darth Vader enjoying some cocktails with girls in bikinis on a beach. It's not bad, and at least we get some glimpses of the guy having some time away from the Emperor. Nice.

  • @user-zs8lp3lg3j
    @user-zs8lp3lg3j Месяц назад

    I am here to study cognition. I need to elicit fascination from a compelling subject.

  • @TheSchwarzKater
    @TheSchwarzKater Месяц назад +1

    I wonder how we can get the gears turning for AuraFlow. I have so many SDXL (Pony, actually) LoRAs saved, the switch will be hard.
    I'll personally follow the lead of the NSFW creators ;)

    • @theAIsearch
      @theAIsearch  Месяц назад +1

      pony is awesome! it's a shame that auraflow is censored

  • @Axacqk
    @Axacqk Месяц назад

    Why didn't you try higher guidance setting with the zebra?

  •  Месяц назад

    Playground 2.5 should be a better alternative, with good anatomy images.

  • @hurktang
    @hurktang Месяц назад

    SD 1.5 is still superior if what you want to do is NSFW. SDXL was LESS censored, and therefore it understands basic anatomy, but not to the point where it could fool anyone. Custom models made from SD 1.5 are still much better at this.

  • @SS801.
    @SS801. Месяц назад

    Looks promising, good video

  • @Moukrea
    @Moukrea Месяц назад

    Refined models tend not to follow prompts as well as their source models, hence the prompt not being followed as closely with RealVisXL; it's one of the realistic ones, which are less "creative".

  • @StefanReich
    @StefanReich Месяц назад +1

    That was great

  • @Hysorix
    @Hysorix Месяц назад

    The model is really good for an alpha, but its generations look like stacked images someone would cut out and composite in Photoshop, and it can't do NSFW, meaning slightly exposed skin like legs or arms breaks the anatomy.

  • @sevenseven31
    @sevenseven31 Месяц назад

    Are there more sources to convert images to text? That would be very helpful.

  • @faintent
    @faintent Месяц назад

    this guy is Fireship on god

  • @PMX
    @PMX Месяц назад

    You were using 50 steps for the first couple of images but only 28 after you switched to HF; that may have made the images less detailed.

  • @MrInterpriser
    @MrInterpriser Месяц назад

    Do you know about AI that describes what it sees in the footage?

  • @usmankhan-xi4dl
    @usmankhan-xi4dl Месяц назад

    Why doesn't the zebra have a tail yet 😂?

  • @BlakeEM
    @BlakeEM Месяц назад +1

    You used an SDXL-Lightning-based fine-tuned model, but it follows prompts less well than the base SDXL at higher steps/CFG scale. You are also using a fine-tuned model, which is not as creative as the base SDXL model due to overfitting on a limited training data set. CalicoMix is SD1.5, not SDXL as stated in this video.
    Is AuraFlow based on the SDXL architecture (like Pony Diffusion)? If so, you may want to use the specific SDXL text encoder nodes so you get the best coherency at the edges when generating different aspect ratios. So far, I haven't been very impressed by these models trained on very limited data sets. They lack creativity because the data is so limited. They ignore words that are not in the training data, and negative prompts don't work as well, because they don't put the things you don't want in the training images.

  • @PatronGaming
    @PatronGaming Месяц назад

    Following you for a lot of months, just want to let you know you are a fkin GOAT in AI teaching.

  • @igiveupfine
    @igiveupfine Месяц назад +1

    Oh, so they did/do still have NSFW images/things in the SDXL base model, whereas the SD models after that do not? Interesting. A shame that removing them made it so much worse at understanding people.

  • @merce414
    @merce414 Месяц назад

    Thank you!!!

  • @user-kb1su7mg1g
    @user-kb1su7mg1g Месяц назад +8

    just started watching. i dont know it looks bad to me

  • @thirien59
    @thirien59 Месяц назад

    With Midjourney, you don't get failed human bodies and wrong styles.
    It seems to me open source is clearly lagging far behind proprietary models, unlike chatbots, where Llama 3 works correctly.

  • @KDawg5000
    @KDawg5000 Месяц назад

    So the lesson I got is: use AuraFlow to create an image that actually follows your prompt, then use that image as a ControlNet input in SDXL to make a realistic image from it. 😁

    • @nicktumi
      @nicktumi Месяц назад

      Do you have an example of that workflow?

    • @theAIsearch
      @theAIsearch  Месяц назад +1

      awesome idea!

    • @hipjoeroflmto4764
      @hipjoeroflmto4764 Месяц назад

      ​@@nicktumi he just said the workflow lol, if u didn't notice I think you got other things to worry about

    • @nicktumi
      @nicktumi Месяц назад

      @@KDawg5000 Do you have an example for ComfyUI for this?

    • @nicktumi
      @nicktumi Месяц назад

      @@hipjoeroflmto4764 y r u gA
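
A minimal sketch of the AuraFlow-then-SDXL idea from the comment above, assuming the Hugging Face diffusers library, the public SDXL Canny ControlNet checkpoint, and an AuraFlow render saved as aura.png (the filename and prompt are placeholders):

    import cv2
    import numpy as np
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
    from diffusers.utils import load_image
    from PIL import Image

    # Build a Canny edge map from the AuraFlow render to carry its composition over.
    src = np.array(load_image("aura.png"))
    gray = cv2.cvtColor(src, cv2.COLOR_RGB2GRAY)
    edges = Image.fromarray(np.stack([cv2.Canny(gray, 100, 200)] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Re-render the same layout photorealistically with SDXL.
    image = pipe(
        prompt="photorealistic polar bear in a Hawaiian shirt in a hammock on a beach",
        image=edges,
        controlnet_conditioning_scale=0.7,
        num_inference_steps=30,
    ).images[0]
    image.save("sdxl_refined.png")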

  • @Ai-Art-42
    @Ai-Art-42 Месяц назад

    good video

  • @gauravsharma-bt2mt
    @gauravsharma-bt2mt Месяц назад

    Hi, is there anything you suggest for an open-source lip-sync AI tool? My use case is that I have a video in English and I want to convert it into Hindi with lip sync.

  • @RIUUI007
    @RIUUI007 Месяц назад +1

    I use midjourney, how does midjourney stack up to this? Is midjourney outdated now or considered low end?

    • @Astro-uv1xq
      @Astro-uv1xq Месяц назад

      It's still considered to be the best/easiest to use, I believe

    • @JonnyCrackers
      @JonnyCrackers Месяц назад +1

      Midjourney blows this out of the water easily.

    • @theAIsearch
      @theAIsearch  Месяц назад +2

      mj is great but it's closed & paid. i'm only comparing open source here

    • @fritt_wastaken
      @fritt_wastaken Месяц назад +1

      ​@@Astro-uv1xq Midjourney was never better than SD.
      Easier to use, yes. But not really useful

    • @quercus3290
      @quercus3290 Месяц назад

      @@fritt_wastaken Hmmm, dunno dude, Midjourney can make insanely high quality images if you're good at prompting with it. A good number of LoRAs pretty much use MJ images for training data.

  • @hognetitlestad382
    @hognetitlestad382 Месяц назад

    16GB? No smaller version?

  • @fulldivemedia
    @fulldivemedia Месяц назад

    thanks

  • @nickjones1609
    @nickjones1609 Месяц назад

    stable cascade base model is better than sd3 and sdxl base model, probably should've used that instead

  • @planethanz
    @planethanz Месяц назад +2

    This will not replace SD 1.5 or SDXL, because it's heavily censored. So it may replace SD3 (which is also terrible - I guess censorship requires more resources than progress in companies like this).

    • @theAIsearch
      @theAIsearch  Месяц назад +1

      thanks for sharing. agree that censorship is a major limitation here

    • @planethanz
      @planethanz Месяц назад

      @@theAIsearch But what could be a use case: Using this new model just for prompt understanding and as a base for img2img workflows. I've also achieved great results with MJ base images and img2img (or img2vid) workflows in comfyUI using those images as part(!) of the input.
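
A minimal sketch of that img2img use case, assuming the diffusers library and the public SDXL base checkpoint; base.png is a placeholder for the prompt-accurate render (AuraFlow, MJ, etc.) used as the starting point:

    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    init = load_image("base.png").resize((1024, 1024))

    # Lower strength keeps more of the source composition; higher strength repaints more.
    image = pipe(
        prompt="photorealistic version of the same scene, natural lighting",
        image=init,
        strength=0.45,
        num_inference_steps=40,
    ).images[0]
    image.save("img2img_out.png")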

  • @jeremys1977
    @jeremys1977 Месяц назад

    Great channel. I've been following since the start. AuraFlow looks promising, but no way will it take on SD. In order for these image generators to be really reliable on a professional level, they will need to integrate tools to modify iteratively. You can't truly work with prompts alone.

  • @macowayeu
    @macowayeu Месяц назад

    Replace Stable Diffusion ? 😮

  • @Energyswordsunday
    @Energyswordsunday Месяц назад +1

    It's comical that we use anime thumbnails rather than AI ones

  • @keenheat3335
    @keenheat3335 Месяц назад

    i guess you still need controlNet and sdxl.

  • @RunnerProductions
    @RunnerProductions Месяц назад

    So if you didn't use ComfyUI, it would just be hard to run this, right? Like you would need to clone more of the code on GitHub and run it through VS Code..?

    • @theAIsearch
      @theAIsearch  Месяц назад

      correct, the only way that was mentioned in their blog was comfyui. you could try duplicating and tweaking one of the online spaces
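
For reference, a minimal sketch of running it outside ComfyUI with the diffusers library, assuming a release that includes the AuraFlow pipeline; the checkpoint name and settings here are assumptions, not something from the video:

    import torch
    from diffusers import AuraFlowPipeline  # present in recent diffusers releases

    # "fal/AuraFlow" is the assumed Hugging Face checkpoint name.
    pipe = AuraFlowPipeline.from_pretrained(
        "fal/AuraFlow", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="a zebra playing a piano made of ice on a hilltop",
        height=1024,
        width=1024,
        num_inference_steps=50,
        guidance_scale=3.5,
    ).images[0]
    image.save("auraflow_out.png")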

  • @Entity303GB
    @Entity303GB Месяц назад

    I say the next vid is about llama 3.1 and a lot about 405b

  • @thedevo01
    @thedevo01 Месяц назад +1

    WELL ACKSHULLY 😂 have you seen baby zebras that are black with white spots? Yeah, zebras are black with white stripes, so technically, SD3 got it right 😂

  • @SP4595-gs7zw
    @SP4595-gs7zw Месяц назад

    Maybe someone can try fine-tuning it for NSFW tasks?

  • @Sujal-ow7cj
    @Sujal-ow7cj Месяц назад

    I don't think it is that good; if SD3 were uncensored, then with fine-tuning it would crush all of them 😢

  • @ryzikx
    @ryzikx Месяц назад

    It's a safetensors file; I wonder if I can just plug it into my ComfyUI folder and use it from there?

    • @igorthelight
      @igorthelight Месяц назад

      Try that! ;-)

    • @hurktang
      @hurktang Месяц назад

      No "safetensor" is a file format, it have nothing to do with the training data. SD 1.5 is actually the best still for anatomy.

  • @Hervoo
    @Hervoo Месяц назад +1

    Still no nsfw

  • @drendelous
    @drendelous Месяц назад

    is abacus good?

  • @TAGSlays
    @TAGSlays Месяц назад +1

    "A Very Tricky Font" or just a poorly written one? I agree, Auraflow is much better at creating images from shitty prompts. You literally do not tell the AI that you want the hammock tied to the two trees and then complain about it.

  • @rownakislam6404
    @rownakislam6404 Месяц назад

    Can the Fooocus AI platform use SDXL? Is there an open-source option for SDXL?

    • @sherpya
      @sherpya Месяц назад

      almost every existing software does support sdxl

  • @AIChameleonMusic
    @AIChameleonMusic Месяц назад

    It makes poor images of tuxedo mask compared to others I use.

  • @twisterrjl
    @twisterrjl Месяц назад

    AROUND A WEEK AGO WEEK AGO

  • @nomad_ape
    @nomad_ape Месяц назад

    Hello, does anyone know an AI that can generate 2D spritesheets, like a spritesheet to animate a horse walking or a bird flying? I need it to make a simple game. Thanks in advance 🙏🙏

  • @HenkvanAlphen
    @HenkvanAlphen Месяц назад

    SDXL's yoga pose wasn't perfect; the woman's foot is flipped 😂

  • @voltageconcepts3807
    @voltageconcepts3807 Месяц назад

    The king has no clothes lol

  • @quercus3290
    @quercus3290 Месяц назад

    training on synthetic data is sketchy at best.

  • @Badboy-zo7of
    @Badboy-zo7of Месяц назад

    copilot ai is the best

  • @reezlaw
    @reezlaw Месяц назад +1

    SD3 often looks like the worst of both worlds, I don't see the point of this model

  • @Milennin
    @Milennin Месяц назад

    With how bad these models are at anatomy, what point is there in using them? Legit question. What do people use these image generators for when they can't properly generate a human body?

    • @handlenumber707
      @handlenumber707 Месяц назад

      They're cheap, and people who can't draw think bad drawings look good. For one-off images, pinups, celebrities, and general throwaway artwork, the undemanding will find this useful. For actual artisans it's a source of mirth. Just tried it. The first attempt, despite telling it not to put random figures in the background or create mutant hands, or multiple limbs, did just that. The second attempt surprisingly obeyed some commands, but did the same, as did the third attempt. It's just dumb.

  • @user-fl3mk5kn8o
    @user-fl3mk5kn8o Месяц назад

    Ideogram wins by a longshot

  • @user-pc7ef5sb6x
    @user-pc7ef5sb6x Месяц назад +2

    Sorry bro, SDXL is still king

    • @theAIsearch
      @theAIsearch  Месяц назад

      yep, that's my preferred model for now

  • @carleyprice3138
    @carleyprice3138 Месяц назад

    i really enjoy your content, i can't wait to hear what you have in store for us about Flux :D

  • @Gabriel.Ponce.De.Leon.777
    @Gabriel.Ponce.De.Leon.777 Месяц назад

    Why girl punches man?

    • @TonyThai
      @TonyThai Месяц назад

      she has androphobia

  • @sadshed4585
    @sadshed4585 Месяц назад

    and now we have flux

  • @kanall103
    @kanall103 Месяц назад

    one more

  • @drdca8263
    @drdca8263 Месяц назад

    2:14 : not infinite. There are only finitely many possible images of a particular resolution and color depth.

    • @quercus3290
      @quercus3290 Месяц назад +3

      Given the seed count, I think it's something like half a quintillion.

  • @maickelvieira
    @maickelvieira Месяц назад

    The white parts on a zebra are stripes too, not only the black ones, so I guess SDXL kinda did what you asked kk

  • @1conscience0dimension
    @1conscience0dimension Месяц назад

    Welhome

  • @juschu85
    @juschu85 Месяц назад

    2:12 Uhm, no. Just no! There is definitely not an infinite number of different images for one prompt. That's just not how math works. You'll never have infinite possibilities for a finite set of bits. For n bits, there are 2^n possibilities. You would need infinite resolution or infinite color depth for infinite possibilities.
    Please don't use the word infinite like that when you said "in theory" before, which kind of gives it all a scientific flavor.

    • @user-qn6kb7gr1d
      @user-qn6kb7gr1d Месяц назад +1

      It's fine. Seed in theory can be from 0 to infinity. Some generations would not be unique and that's it.
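
For concreteness, the counting argument above, worked for an assumed 1024x1024 output with 8-bit RGB color:

    n = 1024 \times 1024 \times 3 \times 8 = 25{,}165{,}824 \ \text{bits}
    2^{n} = 2^{25{,}165{,}824} \approx 10^{7{,}575{,}668} \ \text{possible images (enormous, but finite)}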

  • @DigitalDripUK
    @DigitalDripUK Месяц назад +2

    EARLY GANG