TRANSFER STYLE FROM An Image With This New CONTROLNET STYLE MODEL! T2I-Adapter!

  • Published: 19 Dec 2024

Comments • 188

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +12

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @CapsVODS
      @CapsVODS 1 year ago

      does the easystablediffusion program work as well for setup?

  • @ueshita6866
    @ueshita6866 1 year ago +22

    I am Japanese and have limited ability to understand English, but I am learning the content of your video by making use of an app for translation. The content of this video is very interesting to me. Thanks for sharing the information.

    • @ueshita6866
      @ueshita6866 1 year ago

      @dogmantc6934
      Sorry. Maybe it was not appropriate to say that I used an application. I just turned on YouTube's automatic translation feature.

  • @danieljfdez
    @danieljfdez 1 year ago +33

    I would like to add that I have been playing around with the style model, and with the help of another video I realised that I was sometimes not getting the desired result just because I wrote a prompt over 75 tokens. If you keep your prompt under 75 tokens, there is no need to add another ControlNet tab (see the token-counting sketch after this thread). Thank you very much for keeping us up to date!!!

    • @sinayagubi8805
      @sinayagubi8805 1 year ago +1

      which other video? I'd like to watch it

    • @Eins3467
      @Eins3467 1 year ago +3

      Yes, I once had a prompt over 75 tokens and it generated an error saying the style adapter or cfg/guess mode will not work due to inference, which is kinda vague. Hopefully they can update the error message to say you're above 75 tokens or something like that.

    • @sownheard
      @sownheard 1 year ago +3

      Wait, can you use two ControlNet tabs to get over the 75-token limit?

    • @inbox0000
      @inbox0000 1 year ago

      that seems WAY excessive and like it would be extremely contradictory
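
  The 75-token ceiling discussed in this thread comes from the CLIP text encoder's context window (77 positions minus the start/end markers). A minimal sketch, not from the video, for checking a prompt's token count before generating; it assumes the Hugging Face transformers package is installed, and the prompt string is only an illustration:

    # Count CLIP tokens so the prompt stays under the 75-token limit discussed above.
    # Assumes the "transformers" package; SD 1.x uses the ViT-L/14 CLIP tokenizer.
    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    prompt = "portrait of a woman, ukiyo-e style, highly detailed"  # illustrative prompt
    token_ids = tokenizer(prompt).input_ids
    n_tokens = len(token_ids) - 2  # input_ids includes start-of-text and end-of-text markers
    print(n_tokens, "tokens", "(over the 75-token limit!)" if n_tokens > 75 else "(OK)")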

  • @ADZIOO
    @ADZIOO 1 year ago +12

    It's not working for me. I did exactly the same steps as in the video, I have ControlNet etc., but after rendering with clip_vision/t2iadapter it changes nothing on the photo... just wtf? Tried a lot of times with different backgrounds, it's always the same photo. Yes, I turned on ControlNet.

    • @lizng5509
      @lizng5509 1 year ago

      same here, have you figured it out?

  • @MathieuCruzel
    @MathieuCruzel 1 year ago +6

    It's astounding to find all these new options in Stable Diffusion. A bit overwhelming if you did not follow along from the start, but the sheer amount of possibilities nowadays is golden!

  • @wendten2
    @wendten2 1 year ago +8

    In img2img, denoising strength is the ratio of noise that is applied to the old image before trying to restore it. If you pick 1.0, it works like txt2img, as nothing from the input image is transferred to the output (see the sketch after this thread).

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +6

      I forgot to explain that the reason why using the img2img tab is better is that you can actually play around with the denoising strength and get different results
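
  A minimal sketch of the denoising-strength behaviour described in this thread, using the diffusers img2img pipeline rather than the webui; the model ID, file names, and strength values are only illustrative:

    # Denoising strength ("strength") is how much noise is added to the input image
    # before it is re-denoised: 1.0 behaves like txt2img, low values stay close to the input.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("style_source.png").convert("RGB")  # hypothetical input file
    for strength in (0.3, 0.6, 0.9):
        out = pipe(
            prompt="portrait of a woman, ukiyo-e style",
            image=init_image,
            strength=strength,        # higher = less of the input image survives
            guidance_scale=7.0,
        ).images[0]
        out.save(f"out_strength_{strength}.png")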

  • @MonkeChillVibes
    @MonkeChillVibes 1 year ago +2

    Always there with the new content! Love it

  • @MondayMoustache
    @MondayMoustache 1 year ago +7

    when I try this in controlnet, the style model doesn't affect the outputs at all, what could be wrong?

    • @Gh0sty.14
      @Gh0sty.14 1 year ago +4

      Same. I followed the video exactly and it doesn't change the image at all. I've tried it in both txt2img and img2img and it's doing nothing.

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul 1 year ago +1

      Same here, and I get runtime errors when I use the style or color model

  • @StrongzGame
    @StrongzGame 1 year ago +7

    Damn every time I’m about to take a break there’s something new

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +2

      I feel you :D

    • @StrongzGame
      @StrongzGame 1 year ago

      😅

    • @Eins3467
      @Eins3467 1 year ago

      While I love progress, this is what really irks me. In a month or two we will be seeing a model that consolidates all this new tech and makes it easier to do, maybe even just via text2img. Sometimes I just want to wait it out, but rabbit hole I guess.

  • @tyopoyt
    @tyopoyt 1 year ago +7

    You can also use Guidance Start to make it apply just the style without carrying over the whole subject of the source image. I like using values between 0.25 and 0.6, depending on how strong the style should be (see the sketch after this thread).

    • @Skydam33hoezee
      @Skydam33hoezee 1 year ago +5

      Can you explain how you do that? What does 'guidance start' mean?
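
  For context, "Guidance Start" is the fraction of the sampling steps at which a ControlNet unit begins to apply, so raising it lets the early steps ignore the source image and layer the style in later. A rough sketch of setting it through the A1111 web API with the ControlNet extension enabled (webui started with --api); the endpoint is standard, but the ControlNet argument names vary between extension versions, so treat the field names here as assumptions:

    # Rough sketch (ControlNet field names are assumptions; check your extension version).
    # Sends a txt2img request with one style-adapter unit and a non-zero guidance start.
    import base64
    import requests

    with open("style_source.png", "rb") as f:      # hypothetical style image
        style_image = base64.b64encode(f.read()).decode()

    payload = {
        "prompt": "portrait of a woman",           # illustrative prompt
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "image": style_image,          # older extension versions used "input_image"
                    "module": "t2ia_style_clipvision",   # renamed preprocessor, per replies further down
                    "model": "t2iadapter_style_sd14v1",  # exact name as shown in the webui dropdown
                    "weight": 1.0,
                    "guidance_start": 0.4,         # style only kicks in after ~40% of the steps
                    "guidance_end": 1.0,
                }]
            }
        },
    }
    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    resp.raise_for_status()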

  • @eggtatata0-
    @eggtatata0- 1 year ago +9

    RuntimeError: Tensors must have same number of dimensions: got 4 and 3
    Help me pls. This issue is killing me.

    • @vaneaph
      @vaneaph 1 year ago +1

      Try a git pull, @ 00:30.
      I updated ControlNet manually and that fixed it for me
      (updating from the UI doesn't seem to work).

    • @frischkase1
      @frischkase1 1 year ago +1

      use txt2img not img2img

  • @mr.random4231
    @mr.random4231 1 year ago +4

    Thanks Aitrepreneur for another great video.
    For anyone having this error: "Error - StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" when the clip_vision preprocessor is loading and the style doesn't apply:
    Try this in webui-user.bat: "set COMMANDLINE_ARGS= --xformers --always-batch-cond-uncond". The last parameter, "--always-batch-cond-uncond", did the trick for me.

    • @kamransayah
      @kamransayah 1 year ago

      Your trick did it for me too. Before that it wasn't working at all! So, Thank you Superman! :D

  • @junofall
    @junofall 1 year ago +10

    You definitely need to make a video about the oobabooga text generation webui! Those of us with decent enough hardware can run 7B-13B parameter LLM models on our own machines with a bit of tweaking, it's really quite something. Especially if you manage to 'acquire' the LLaMA HF model.

    • @robxsiq7744
      @robxsiq7744 1 year ago

      how did you get the config?

    • @eyoo369
      @eyoo369 1 year ago

      Where can you acquire the LLaMA model? Heard quite some buzz around it and read up on it.

    • @mrrooter601
      @mrrooter601 1 year ago +2

      Should probably wait for them to patch the 8-bit to work out of the box first. It's still broken without a manual fix on Windows.
      It's issue 147, "support for llama models", on the oobabooga UI.
      Basically you needed to replace some text and install a different version of bitsandbytes manually. Otherwise I was getting errors, and 8-bit would not work at all.
      With a 3090, I was able to launch 7B with no issues, and 13B, but it took nearly all my VRAM and basically didn't work; I think it was partially in RAM too, which made it even slower.
      Now with 8-bit actually working (and 4-bit and 3-bit coming soon) I can run 13B with plenty of VRAM to spare, like 4-6 GB iirc. 30B is also confirmed to work on 24 GB when 4-bit is finalized.

  • @notanactualuser
    @notanactualuser 1 year ago

    Your videos are by far the best I've seen on all of this

    • @mydayq
      @mydayq 1 year ago

      this is the only video where the author couldn't get the plugin to work properly :)

  • @jasonhemphill6980
    @jasonhemphill6980 1 year ago

    You've been on fire with the upload schedule. Please don't burn yourself out.

  • @therookiesplaybook
    @therookiesplaybook 1 year ago +2

    Where do I find clipvision? I have the latest controlnet and it ain't there.

  • @GeekDynamicsLab
    @GeekDynamicsLab 1 year ago +1

    I'm doing amazing things with style transfer, thanks for the guide and exceptional work 😁

  • @winkletter
    @winkletter 1 year ago +4

    I love seeing these updates and having no idea how to use them. :-) BTW, might as well get the color adapter while you're getting style.

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +1

      True, but the color one wasn't as interesting as the style one, so I just decided to leave it out of the video, but you can still get really interesting images with it too

  • @qvimera3darts444
    @qvimera3darts444 1 year ago +1

    Hi Aitrepreneur, which version of Stable Diffusion do you use in your videos? I'm looking for the same one to follow your videos, but I didn't succeed. Thanks in advance

  • @duphasdan
    @duphasdan 1 year ago +1

    2:04 How does one add the Control Model tabs?

  • @playergame8539
    @playergame8539 2 months ago

    Picture style conversion,
    Your video helped me, thank you very much!

  • @wndrflx
    @wndrflx 1 year ago +1

    How are you getting these long prompts to work with the style transfer? It seems like style won't work without using a much smaller prompt.

  • @SoccerMomSuho
    @SoccerMomSuho 1 year ago +6

    RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [257, 1024] , anyone else have this error?

    • @SwordofDay
      @SwordofDay 1 year ago +4

      yes and I'm super frustrated trying to figure out what happened. I feel like I followed all the steps to a t

    • @alexandr_s
      @alexandr_s 1 year ago +2

      +1

    • @Georgioooo000
      @Georgioooo000 1 year ago +2

      @@SwordofDay +1

    • @SwordofDay
      @SwordofDay 1 year ago +3

      OK, so, update. I dunno if any of you are still working on this or have committed a small crime to cope, but I found a solution! Go to the stable-diffusion folder, then extensions, sd-webui-controlnet, annotator, then the clip folder. Click in the address bar, backspace, and type cmd in the address bar while in that folder. When the command prompt comes up, type "git pull" and hit enter. Restart and good luck! Worked for me. Basically it's manually updating the files.

    • @SwordofDay
      @SwordofDay 1 year ago

      Also make sure --medvram is not in your webui-user.bat file.

  • @veralapsa
    @veralapsa 1 year ago +3

    Unless it has been updated since that Reddit post was posted, it doesn't work if --medvram is part of your command line. And with 8 GB it only gets me 1 or 2 shots before I have to restart the program or else I get OOM errors. I can also confirm that the 75-token max in the positive & negative prompts does make it so only the style adapter is needed.

    • @Gh0sty.14
      @Gh0sty.14 1 year ago

      I removed --medvram and it's still not working at all for me.

    • @veralapsa
      @veralapsa 1 year ago +1

      @@Gh0sty.14 You may need to change what config is used for the adapter models in the Settings tab => ControlNet to point to t2iadapter_style_sd14v1.yaml, which, if ControlNet is up to date, should be in your CN models folder. Try that, restart, and test.

    • @Gh0sty.14
      @Gh0sty.14 1 year ago

      @@veralapsa Working now! thanks man

    • @corwin.macleod
      @corwin.macleod 1 year ago

      it works with --lowvram cmd option. use it instead

  • @Skydam33hoezee
    @Skydam33hoezee 1 year ago +4

    It doesn't work for me. Updated the ControlNet extension, put the models in the directory. Getting this remark in the console: "Warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference". Anybody else get that message and was able to fix it?

    • @Skydam33hoezee
      @Skydam33hoezee 1 year ago

      It does seem that prompt length has something to do with the issue. With shorter prompts it actually does work, but I see Aitrepreneur use prompts that are much longer.

    • @Eins3467
      @Eins3467 1 year ago +1

      Don't use prompts longer than 75 tokens and that error will be gone.

    • @Skydam33hoezee
      @Skydam33hoezee 1 year ago

      @@Eins3467 Thanks. Any idea how Aitrepreneur manages to use these much longer prompts? It seems that negative prompts also contribute to the total prompt length in this case.

  • @chayjohn1669
    @chayjohn1669 1 year ago +1

    My preprocessor list is different; how do I add the preprocessor?

  • @devnull_
    @devnull_ 1 year ago +1

    Nice to see you show what happens when this thing is configured incorrectly, not only step by step without failures. 👍

  • @danowarkills4093
    @danowarkills4093 1 year ago +1

    Where do you get clip vision preprocessor?

    • @HANKUS
      @HANKUS 9 months ago +1

      my question exactly, this is a poor tutorial if it skips over the installation of a key component

  • @j_shelby_damnwird
    @j_shelby_damnwird 1 year ago +1

    If I run more than one ControlNet tab I get the CUDA out of memory error (8 GB VRAM GPU). Any suggestions?

  • @AbsalonPrieto
    @AbsalonPrieto 1 year ago +2

    What's the difference between the T2I-models and the regular controlnet ones?

  • @friendofai
    @friendofai 1 year ago

    That's super cool, can't wait to try it! Thanks again K!

  • @respectthepiece4833
    @respectthepiece4833 1 year ago +1

    Please help, in my preprocessor list I do not have clip vision to pick
    I do have the t2istyle though

    • @3diva01
      @3diva01 1 year ago +1

      It's been renamed to "t2ia_style_clipvision".

    • @respectthepiece4833
      @respectthepiece4833 1 year ago

      @3Diva thanks for letting me know I'll try that

    • @respectthepiece4833
      @respectthepiece4833 1 year ago

      @3Diva for some reason it seems like it totally ignores that preprocessor, or maybe it is the model? I've tried both t2iadapter style fp16 and t2iadapter style sd14v1

    • @3diva01
      @3diva01 1 year ago

      @@respectthepiece4833 I haven't tried it yet, but I was looking forward to it. So it's a bummer that it sounds like it's not working. I'll have to try it out and see if I can figure it out. Thank you for letting me know. *hugs*

    • @respectthepiece4833
      @respectthepiece4833 1 year ago +1

      No problem, yeah it looks amazing, I'll let you know too

  • @zirufe
    @zirufe 1 year ago +1

    I can't find Clip Vision Preprocessor. Where should I install it?

    • @megaaziib
      @megaaziib 1 year ago

      it is now named t2ia_style_clipvision

  • @TheVertigo2
    @TheVertigo2 1 year ago

    T2I-Adapter or IP-Adapter in ControlNet, both are for styling; which is preferable today?

  • @ErmilinaLight
    @ErmilinaLight 7 months ago

    Thank you! What should we choose as Control Type? All?
    Also, I noticed that generating an image with txt2img ControlNet with a given image takes a veeeeery long time, though my machine is decent. Do you have the same?

  • @Unnaymed
    @Unnaymed 1 year ago

    It's not only fun, it's an epic feature. I have so many artist pictures that I want to reuse for my own ideas and portraits.

  • @FleischYT
    @FleischYT 1 year ago

    thx! safetensors? ;)

  • @jameshughes3014
    @jameshughes3014 1 year ago

    I wouldn't have figured out that error, thanks for being awesome

  • @TREXYT
    @TREXYT 1 year ago

    How do you get multiple tabs in ControlNet like you have? I don't have them

    • @veralapsa
      @veralapsa 1 year ago

      Update Control Net like he says in the first half of the video.

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      You need to select multiple models in settings, controlnet section, check out my first multi-controlnet video

    • @TREXYT
      @TREXYT 1 year ago

      @@Aitrepreneur thanks a lot, sorry i missed some videos i was busy

    • @TREXYT
      @TREXYT 1 year ago

      @@veralapsa already did, i got the answer but thanks anyway

  • @the_one_and_carpool
    @the_one_and_carpool 1 year ago

    Where do you get clip vision? I can't find it

  • @elmyohipohia936
    @elmyohipohia936 1 year ago

    I use another UI and I don't have the "clip_vision" preprocessor; I can't find where to get it.

  • @sownheard
    @sownheard 1 year ago

    Wow this is so epic 🤩

  • @squiddymute
    @squiddymute 10 months ago

    I don't have a "clip_vision" preprocessor, any idea why?

  • @davidlankester
    @davidlankester 1 year ago

    Is this consistent like I could apply the same style on batch to an image sequence for video?

  • @theairchitect
    @theairchitect 1 year ago +1

    The clip_vision preprocessor set with tiadapter_style_sd13v1 is not working for me =( No errors; it just generates, and the style image has no impact on the final result. Anyone got this same issue? ControlNet and Stable Diffusion are up to date... frustrating =(

    • @339Memes
      @339Memes 1 year ago

      Yeah, same settings, not getting what he's showing.

    • @theairchitect
      @theairchitect 1 year ago

      @@339Memes I removed all prompts (using img2img with 3 ControlNets active: canny + hed + t2iadapter with the clip_vision preprocessor); during generation this error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" and the generated result comes out with the style not applied =( frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet instances without success... it's not applying the style to the final generated result =( I tried enabling "Enable CFG-Based guidance" in the ControlNet settings too, and it's still not working =( Anyone got this same issue?

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul 1 year ago

      @@theairchitect I got the same message! Unfortunately I have no idea what's wrong.
      Do you use Colab?

    • @theairchitect
      @theairchitect 1 year ago

      @@CrixusTheUndefeatedGaul I got the solution! Users with low VRAM like me (I'm on 6 GB) have to remove --medvram at startup. Works for me! =)

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul 1 year ago +1

      @@theairchitect Thanks for the reply! I actually fixed it yesterday though. The problem for me was that i had to change the yaml file used in the settings of webui. Cheers man! The style adapter is awesome, especially when you use multinet to mix the styles of two different images

  • @hoangtejieng2247
    @hoangtejieng2247 1 year ago

    I have a question: how do you install and get clip_vision, openpose_hand, and color under the ControlNet preprocessor dropdown? Thank you

  • @victorvideoeditor
    @victorvideoeditor 1 year ago

    My ControlNet tab disappeared, any idea? There is no ControlNet option in Settings :c

  • @amj2048
    @amj2048 1 year ago

    This is so cool! thanks for sharing!

  • @michaelli7000
    @michaelli7000 1 year ago

    amazing, is there a colab version for this function?

  • @kamransayah
    @kamransayah 1 year ago +5

    Hey Super K, thanks for your amazing video as usual! Unfortunately, for some reason the t2iadapter_style_sd14v1 model is not working for me at all. All other models work except this one. So I just thought I'd leave my comment here to see if maybe other people with the same problem could fix the issue and point me in the right direction. Thanks for reading! :)

    • @rvre9839
      @rvre9839 1 year ago

      same

    • @DmitryUtkin
      @DmitryUtkin 1 year ago

      not work at all ("Enable CFG-Based guidance" in settings is ON)

    • @DmitryUtkin
      @DmitryUtkin 1 year ago

      the solution is in comments below!

    • @kamransayah
      @kamransayah 1 year ago +1

      @@DmitryUtkin Thanks for your help but it didn't work.

  • @vincentvalenzuela3171
    @vincentvalenzuela3171 1 year ago

    Hi how do I install clip_vision preprocessor?

  • @jippalippa
    @jippalippa 1 year ago

    Cool tutorial!
    And how can I apply the style I got to a batch of images taken from a video sequence?

  • @ai-bokki
    @ai-bokki 1 year ago +1

    Great video K! You are epic as always! I will try making a video on this ;)
    As of now I think controlnet only supports 1.5 models. Can't wait for 2.1 release.

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      The 2.1 is already supported :)

    • @ai-bokki
      @ai-bokki 1 year ago

      @@Aitrepreneur Oh, is it!? I am getting some error with the Illuminati Diffusion model, but with other 1.5 models ControlNet works fine. I will ask the tech advisor in your Discord. Your Discord is super helpful btw!

  • @OriBengal
    @OriBengal 1 year ago +1

    Have you seen anything that does Style Transfer the way that Deep Dream Generator does? It's not my favorite tool- but that feature alone is quite powerful. I was hoping this was the same thing... In that one, you can upload a photo/painting/etc... and then another image -- and it will recreate the original in this new style.... Complete with copying brush stroke style / textures, etc... I liked it for doing impasto type of stuff....

    • @the_one_and_carpool
      @the_one_and_carpool 1 year ago

      Check out Visions of Chaos; it has a lot of machine learning features and a style transfer in it.

  • @coleledger6613
    @coleledger6613 1 year ago +1

    Clip skip on your homepage, how does one make this happen?

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +1

      in settings, user interface, quicksettings put CLIP_stop_at_last_layers

    • @coleledger6613
      @coleledger6613 1 year ago

      @@Aitrepreneur Thank you for the help I really appreciate it. Keep up the good work!!!!

  • @snatvb
    @snatvb 1 year ago +1

    clip_vision doesn't work for me :(

  • @victorwijayakusuma
    @victorwijayakusuma 1 year ago

    Thank you so much for this video, but I am getting this error when using this feature: "AttributeError: 'NoneType' object has no attribute 'unsqueeze'"

  • @SergioGuerraFX
    @SergioGuerraFX 1 year ago

    Hi, I updated the files and restarted the UI. Now the clip_vision preprocessor shows up but not the t2iadapter_style model... did I miss a step?

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      make sure you have correctly placed the model, then refresh the model list

  • @CProton69
    @CProton69 1 year ago

    Cannot see clip_vision in Pre-processor drop down. Updating extensions is not working for me!

  • @tariksaid4536
    @tariksaid4536 1 year ago

    Very useful, thanks.
    Could you please make another video about running Kohya SS on RunPod?
    The last method is not working for me

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      Yes I have these kinds of videos planned

    • @tariksaid4536
      @tariksaid4536 1 year ago

      @@Aitrepreneur Thanks, I'm really looking forward to it

  • @Kontor23
    @Kontor23 1 year ago

    I'm still looking for a way to generate consistent styles with the same character(s) in different scenes (for a picture book, for example) without using DreamBooth to train the faces. Like using img2img with a given character and placing the character in different poses and scenes with the help of ControlNet, without the need to train the character with DreamBooth. Is there a way? Like in Midjourney, where you can use the URL of a previously generated image followed by the new prompt to get results with the same character in a new setting.

  • @Rocket-Gaming
    @Rocket-Gaming 1 year ago

    Ai, how come you have 5 ControlNet options but I have one?

  • @dcpln7
    @dcpln7 1 year ago

    Hi, may I ask, what are the pros and cons of running Stable Diffusion on Google Colab vs running it on a PC locally? I have an RTX 3070, and when I use ControlNet it runs very slowly. Sometimes it runs out of memory. Would running it on Colab be faster? And what do you recommend? Thanks in advance.

    • @exshia3240
      @exshia3240 1 year ago

      Colab is limited by time and space.
      Something with your settings ?

  • @duskfallmusic
    @duskfallmusic 1 year ago +1

    Too many toys to keep track of, i'm trying to make a small carrd website for tutorials XD this is just gonna go in the resources OLOLOLOL

  • @lujoviste
    @lujoviste 1 year ago +2

    For some reason I can't make this work :( It just makes random pictures

  • @pladselsker8340
    @pladselsker8340 1 year ago +2

    Interesting model. It doesn't look like you get a lot of control with this style transfer model, though. It's kind of in the right direction, but it's still very very far from being the same style at all. I'll try it out!

    • @mydayq
      @mydayq 1 year ago +1

      It's just not a very good guide. You need to keep your prompt under 75 tokens and it works perfectly

  • @iz6996
    @iz6996 1 year ago

    How can I use this for a sequence?

  • @hugoruix_yt995
    @hugoruix_yt995 1 year ago

    how do you get different control model tabs?

    • @xellostube
      @xellostube 1 year ago +2

      inside Settings, Control Net
      Multi ControlNet: Max models amount (requires restart)

    • @hugoruix_yt995
      @hugoruix_yt995 1 year ago

      @@xellostube hero, thanks!

  • @valter987
    @valter987 1 year ago

    How do I "enable" clip skip at the top?

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +1

      in settings, user interface, quicksettings put CLIP_stop_at_last_layers

    • @valter987
      @valter987 1 year ago

      @@Aitrepreneur thanks for replying

  • @ariftagunawan
    @ariftagunawan 1 year ago

    Please, teacher, I need a how-to for inpaint batch processing with inpaint batch masks in a directory...

  • @Grumpy-Fallboy
    @Grumpy-Fallboy 1 year ago

    can u make a deep instructional video about deforum-for-automatic1111-webui, ty

  • @sephia4583
    @sephia4583 1 year ago +1

    I don't have clip vision preprocessor on my install of controlnet. Should I reinstall it?

    • @Aitrepreneur
      @Aitrepreneur  1 year ago +1

      You just need to update the extension

    • @sephia4583
      @sephia4583 1 year ago

      @@Aitrepreneur you are a lifesaver

  • @novalac9910
    @novalac9910 1 year ago

    If you're getting "Warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", try changing the prompt to 75 or fewer tokens.

  • @tuhoci9017
    @tuhoci9017 1 year ago

    I want to download the model you used in this video. Please give me the download link.

  • @adapptivtech
    @adapptivtech 1 year ago

    Thanks!

  • @EpochEmerge
    @EpochEmerge 1 year ago +1

    HOLY GOD COULD WE JUST STOP FOR A MONTH OR SO I CAN'T EVEN HANDLE THIS AMOUNT OF UPDATES

  • @draggo69
    @draggo69 1 year ago

    Appreciate the jojo reference!

  • @jurandfantom
    @jurandfantom 1 year ago

    Now models take 400 MB instead of 700 MB?

  • @0rurin
    @0rurin 9 months ago

    Any better way to do this, a year later?

  • @wgxyz
    @wgxyz 1 year ago

    AttributeError: 'NoneType' object has no attribute 'convert' Any ideas?

  • @TheDropOfTheDay
    @TheDropOfTheDay 1 year ago

    Jojo one was crazy

  • @the_RCB_films
    @the_RCB_films 1 year ago

    YEAH SWEET nice!

  • @ratside9485
    @ratside9485 1 year ago +1

    Can you also achieve better photorealistic images with it? If you transfer the style?

    • @sefrautiq
      @sefrautiq 1 year ago

      Technically I think it interrogates the image with CLIP and then adds the extracted data to your prompt (completely speculating, don't punch me). I don't think you gain photorealistic quality from this; better to use Visiongen / Realistic Vision models for that. But anyway, you can experiment

    • @ratside9485
      @ratside9485 1 year ago

      @@sefrautiq Maybe you can add skin details as a style. Let's have a look later and test it.

  • @Thozi1976
    @Thozi1976 1 year ago

    you are using the "posex" extension?* *german laughter following: hehehehehehhehehhhehehe hehehehehehe*

  • @eyoo369
    @eyoo369 1 year ago +1

    I wouldn't really call that a style transfer, imo. That painting by Hokusai has a lot of impressionistic elements which don't seem to be transferred over to the new image. The female character displayed still has that very typical "artgerm, greg rutkowski" style look to it. Still a cool feature nonetheless, but a misleading title. Better to call it "transfer elements from an image"

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      This was one example among many, each image produces different results

  • @rageshantony2182
    @rageshantony2182 1 year ago

    How do I convert an anime movie frame to a realistic, photograph-like image?

  • @johnjohn5932
    @johnjohn5932 1 year ago

    Pedro is this you???

  • @justinthehedgehog3388
    @justinthehedgehog3388 1 year ago +1

    This is all way beyond me. I couldn't possibly keep up with it. I'm still using command line SD 1.5. Let alone a webui.
    I use it to turn my scribbles into something decent looking, I think that's quite enough for me.

  • @TheAiConqueror
    @TheAiConqueror 1 year ago

    I knew it... 😁 that you would upload a video about this. 💪

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA 1 year ago +1

    It doesn't work!!!

  • @robxsiq7744
    @robxsiq7744 1 year ago

    no doubt the tensor stuff is due to a bunch of mismatched wonkery from having all these weird AI programs going on. tavern, kobold, mika, SD, jupyter, etc...the question is, how to fix it without nuking the OS from orbit and starting over.

  • @blackbauer
    @blackbauer 1 year ago

    You can do this better in Photoshop right now with blending

  • @LeroyFilon-xh2wp
    @LeroyFilon-xh2wp 1 year ago

    Anyone else running this slowly? I'm on an RTX 3090 but it takes 2 minutes to render 1 image. Not what I'm used to, hehe

  • @dk_2405
    @dk_2405 1 year ago

    bruh, too fast for the explanation, but thanks for the video

  • @squiddymute
    @squiddymute 10 months ago

    this ain't working , it needs an update

  • @sefrautiq
    @sefrautiq 1 year ago

    Hmm, is he french?

  • @StrongzGame
    @StrongzGame 1 year ago

    So we have RunwayML GEN-1 but a lot, a lot better, and GEN-1 is not even fully released yet 😂😂😂😂

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      GEN-1 isn't that great from what I heard :/

  • @CaritasGothKaraoke
    @CaritasGothKaraoke 1 year ago

    why does everyone always assume we’re using stupid windows PCs?

  • @sigmondroland
    @sigmondroland 1 year ago +1

    I get this error when trying to generate, any ideas?
    RuntimeError: !(has_different_input_dtypes && !config.promote_inputs_to_common_dtype_ && (has_undefined_outputs || config.enforce_safe_casting_to_output_ || config.cast_common_dtype_to_outputs_)) INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/TensorIterator.cpp":407, please report a bug to PyTorch.

  • @KarazP
    @KarazP 1 year ago

    I got this message when I used the color adapter:
    RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8
    Does anyone know how I can fix it? I did some research last night but still no sign of any luck 😭