Show, don't tell!
  • 18 videos
  • 22,946 views

Videos

ComfyUI AI: Uncut Version, IP adapters plus weight scheduling and Animate Diff evolved, sped up 500%
485 views · 1 day ago
In this video, I show the basic workflow for using the new IP-Adapter nodes for Animate Diff Evolved. The thought behind this uncut version is that maybe some of the ideas are inspiring, some of the workflows are instructive, and some of the misclicks are at least entertaining. I sped up hours of video footage by 500%. Have fun watching!
ComfyUI AI: What if the new IP adapter weight scheduling meets Animate Diff evolved?
4.3K views · 14 days ago
This is the first part of a series. In the coming episodes I will show a workflow that integrates the upscaling and image-enhancement method Perturbed Attention Guidance, with which the animations can be generated at high resolution and long playback time, and where I try out various additional methods to control the output video, such as different ControlNets. Once again, it's incre...
ComfyUI AI: Uncut Version, IP adapter new nodes + using Perturbed Attention Guidance, sped up 1500%
619 views · 1 month ago
In this video, I show a workflow for creating cool realistic sceneries using the new IP adapter nodes and Perturbed Attention Guidance. This creative test series was amazing; Perturbed Attention Guidance is a great tool together with the new IP adapter nodes. The thought behind this uncut version is that maybe some of the ideas are inspiring, some of the workflows are instructive and some of t...
ComfyUI AI: IP adapter new nodes, create complex sceneries using Perturbed Attention Guidance
5K views · 1 month ago
In this video, I show a workflow for creating cool realistic sceneries using the new IP adapter nodes and Perturbed Attention Guidance. This creative test series was amazing; Perturbed Attention Guidance is a great tool together with the new IP adapter nodes. 0:00 - 0:24 Intro 0:25 - 6:25 Setup Workflow 6:25 - 8:09 Perturbed Attention Guidance 8:10 - 9:33 Outro
ComfyUI AI: Uncut Version, IPadapter V2, create image with Gestalt laws of perception, sped up 1500%
283 views · 1 month ago
The thought behind this uncut version is that maybe some of the ideas are inspiring, some of the workflows are instructive and some of the misclicks are at least entertaining. I sped up over 6 hours of video footage by 1500%. Have fun watching!
ComfyUI AI: IP adapter version 2, create artistic images with the Gestalt laws of perception
3.7K views · 1 month ago
In this video, I show a workflow for creating cool images using the new IP adapters and the Gestalt Laws of perception in the prompts. This creative test series was a lot of fun. It's really cool what is possible with the new IP Adapter Nodes and the JuggernautXL-Lightning Model!
ComfyUI AI: Uncut version, create background with Fibonacci, Euler, Heisenberg, Brown, sped up 1500%
247 views · 2 months ago
Hello and welcome to the almost unedited version of my last video on the SDXL Lightning model, IP adapters, and artistic backgrounds with Fibonacci, Euler, Heisenberg and Brown's formulae. The thought behind this version is that maybe some of the ideas are inspiring, some of the workflows are instructive and some of the misclicks are at least entertaining. I sped up over 4 hours of video foota...
ComfyUI AI: create artistic backgrounds with Fibonacci, Euler, Heisenberg and Brown's formulae
2.1K views · 2 months ago
In this video, I show a workflow for creating IP adapter embeds and using them as backgrounds for artistic images. This creative test series continues to be a lot of fun. It's really cool what is possible with the IP Adapter models and the SDXL Lightning models. The new JuggernautXL-Lightning model is also included!
ComfyUI AI: Uncut version, SDXL Lightning Lora and IP-Adapters workflow, sped up by 1500%
295 views · 3 months ago
Hello and welcome to the almost unedited version of my last video on the SDXL Lightning Lora Model and IP adapters. The thought behind this version is that maybe some of the ideas are inspiring, some of the workflows are instructive and some of the misclicks are at least entertaining. I sped up over five hours of video footage by 1500%. Have fun watching!
ComfyUI AI: create artistic images without using artist names; SDXL Lightning Lora + IP adapter
941 views · 3 months ago
In this video I show a way to create artistic images with Stable Diffusion and ComfyUI without using artist names in the prompts. The new SDXL Lightning Lora model has really sped up the whole process. IP adapter embeds are just wonderful for applying different art styles to your own images. Have fun watching and trying it out! If you want my SDXL-Lightning Ip adapter node setup, go to my websi...
ComfyUI AI: Uncut version, Stable Video Diffusion Model + IP-Adapter workflow, sped up by 1500%
270 views · 3 months ago
Hello and welcome to the almost unedited version of my last video on the Stable Video Diffusion Model and IP adapters. The thought behind this version is that maybe some of the ideas are inspiring, some of the workflows are instructive and some of the misclicks are at least entertaining. I sped up over five hours of video footage by 1500%. Have fun watching!
ComfyUI AI: SDXL model, IP adapter + Embeds and Stable Video Diffusion model in one workflow
1.8K views · 3 months ago
In this video, I show a workflow for creating IP adapter embeds and using them for images to create videos via Stable Video Diffusion. This creative test series was a lot of fun. It's really cool what is possible with the IP Adapter models, SDXL and SVD models. The new Stable Video Diffusion model 1.1 is also included, along with audio file integration!
ComfyUI AI: Stable Diffusion Video Model, setup and creative testing
371 views · 4 months ago
So the Stable Video Diffusion model is out. I have tested it extensively over the last few weeks with a simple SVD model setup. Even though it only produces short videos, it's a lot of fun to test the model! If you want the SVD node setup, go to my website: www.alienate.de, scroll down, and once you see the logo image of my channel, the eye or the woman, drag it to the ComfyUI interface. Then go to...
German, english subtitles: Crowdfunding project for my "Sieben Welten" science fiction book series
61 views · 6 months ago
ComfyUI AI: SDXL Canny Controlnet Model, Setup and Creative Testing
1.3K views · 8 months ago
ComfyUI AI: SDXL Base Model only, creative testing with image to image + HiRes fix.
846 views · 9 months ago

Comments

  • @AmazenWisdom
    @AmazenWisdom 1 day ago

    Another great workflow. Thank you so much for sharing! Is it possible to create a prompt tutorial? I love the way you create your images. They're the best that I have seen out there.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 1 day ago

      Thanks a lot for that! I'm currently planning a kind of celebration video for the soon-to-be thousand subscribers. In it, I'm going to put all the prompt lists I've made so far into a few Note+ nodes and then upload the workflow to my website. I want to try a few of these out along with the IP adapter weight scheduling nodes and Animate Diff. So basically a prompt tutorial, or at least a tutorial on how I use prompts :)

    • @AmazenWisdom
      @AmazenWisdom 23 hours ago

      @@Showdonttell-hq1dk That'll be so cool. I love creating storytelling animations, but my prompting skills are still very basic. By the way, are you on Discord? Maybe we can connect?

  • @Showdonttell-hq1dk
    @Showdonttell-hq1dk 1 day ago

    You can download the workflow from my website alienate.de. Have fun with it!

  • @wizards-themagicalconcert5048
    @wizards-themagicalconcert5048 3 days ago

    Fantastic content and video, keep 'em up! Subbed!

  • @wonder111
    @wonder111 3 days ago

    Great approach to teaching what only the programmers can understand. I worked on this for a few hours; it fails at the last (Video Combine) node with this error: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED. Any idea what may be in error? Thanks, and I will be following.

  • @daoshen
    @daoshen 4 days ago

    Amazing work and results! The voice is annoying to listen to and distracts from the content. This is, of course, subjective. A more neutral voice might appeal to more of us?

  • @MrXRes
    @MrXRes 5 days ago

    Thank you! What voice generator did you use?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 3 days ago

      This is the AI voice profile Charlotte from ElevenLabs. Thanks for watching.

  • @czlaczimapping
    @czlaczimapping 6 days ago

    I have an error message: 'VAE' object has no attribute 'vae_dtype'. Do you know what the problem is?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 6 days ago

      Have you tried using a different VAE? Or connecting the VAE from the checkpoint to the VAE Decode node?

  • @xxab-yg5zs
    @xxab-yg5zs 6 days ago

    it stops at "SamplerCustom" and nothing happens ... any idea how to fix it?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 6 days ago

      Thanks for watching and subscribing. There have already been three people with similar problems. All my attempts to reproduce the error have failed. Besides updating ComfyUI and all nodes (and, if necessary, Python) and installing all necessary models correctly, one possibility might be to use a normal KSampler. But don't forget to set the sampler to LCM and the scheduler to sgm_uniform. Have you also tried building the workflow as I did in the edited version of this video? By the way, I use the ComfyUI_windows_portable version; maybe that also plays a role. I'm sorry I can't be more specific. At least I downloaded the workflow again in the last few days and tried it out. It works as it should.

    • @xxab-yg5zs
      @xxab-yg5zs 6 days ago

      @@Showdonttell-hq1dk I'm also on portable, all updated, nodes and models ... The workflow is downloaded from your website. So basically, the image appears in the custom KSampler and after that everything stops. No error, no warning, nothing, it just stops, and after hitting queue prompt nothing happens. I will try the normal sampler later. Thanks!

  • @xxab-yg5zs
    @xxab-yg5zs 6 days ago

    Good stuff, subbed!

  • @mehradbayat9665
    @mehradbayat9665 6 days ago

    I understood every part except what PAG and automatic CFG do. Can anyone help me understand?

  • @SylvainSangla
    @SylvainSangla 7 days ago

    Thanks a lot for sharing these tutorials and workflows!

  • @alexhalka
    @alexhalka 7 days ago

    Amazing!!! Would love to have your workflow, but I can't access your site.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 7 days ago

      The website takes a while to load. Would you try again? It should actually work.

  • @BuzzJeux_Studio
    @BuzzJeux_Studio 9 days ago

    Fantastic tutorial and very useful, but I don't know why I get an out-of-memory (OOM) error with 16 GB of VRAM... how much VRAM do you use for that?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 9 days ago

      Thanks for watching, glad you like it. My graphics card has 12 GB of VRAM. Maybe it helps to enlarge the swap file, i.e. the virtual memory. I have set mine to 80 GB, and since then I have hardly had any problems of this kind.

    • @BuzzJeux_Studio
      @BuzzJeux_Studio 9 days ago

      @@Showdonttell-hq1dk First of all, thanks for your quick reply. I increased my virtual memory (I was at 30 GB) as you mentioned, but I still had the problem. After several hours looking for the why and wherefore, I finally found where my error was coming from: I was using input images that were far too large in terms of resolution! Problem solved by using basic 512x512 images :)

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 8 days ago

      @BuzzJeux_Studio So business as usual! :) An error occurs, a simple fix doesn't work --> many hours, countless websites read and how-to-fix-problem-xyz videos later --> the problem was basically easy to solve. However, the images are usually downscaled to a low resolution of 224x224 by the Image Batch Multiple node anyway. I have just tried it again with 5 images at a resolution of 6000x6000. I only got an error message when I tried to upload an image at 20480x12288 to the Load Image node. This means that images larger than 512x512 should also work in principle, at least with a graphics card like yours.

  • @MisterCozyMelodies
    @MisterCozyMelodies 10 days ago

    everything in this tutorial is awesome: the voice, the background music, the detail in each step, a very immersive video. Thanks a lot! You are making next-level videos here.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      Thanks a lot, I really appreciate that! It always drives me crazy when I watch tutorials and numerous in-between steps are simply skipped. I definitely didn't want to do that in my videos. That's why I always rebuild the workflow from its own instructions after the video has been completed, to see if it works.

    • @eccentricballad9039
      @eccentricballad9039 8 days ago

      @@Showdonttell-hq1dk Thanks a lot for actually creating art instead of creating content. It's so immersive, and I feel like I stepped into my own artificial intelligence work studio.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 8 days ago

      That's a wonderful compliment, thanks a lot!

    • @electronicmusicartcollective
      @electronicmusicartcollective 5 days ago

      @Showdonttell-hq1dk ...uhm, except for the room on the voice ;) a dry signal would be better, please no noticeable reverb/delay

    • @wizards-themagicalconcert5048
      @wizards-themagicalconcert5048 3 days ago

      @@Showdonttell-hq1dk It works very well! Very easy to understand and follow! Thanks!

  • @nirdeshshrestha9056
    @nirdeshshrestha9056 10 days ago

    It did not work, I get an error. Can you help?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      What is the error message?

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 10 days ago

      @@Showdonttell-hq1dk Error occurred when executing IPAdapterBatch: cannot access local variable 'face_image' where it is not associated with a value
        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
          output_data, output_ui = get_output_data(obj, input_data_all)
        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
          return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
        File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
          results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
        File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 761, in apply_ipadapter
          return (work_model, face_image, )

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      @@nirdeshshrestha9056 I have tried to reproduce the error, but without success. What you can do is first click on "Update all" in the ComfyUI Manager and then restart ComfyUI. Then you can check in the ComfyUI Manager whether all extensions (custom nodes) are updated; if not, update them manually. If relevant nodes are marked in red in the ComfyUI Manager under "import failed", try uninstalling and reinstalling them. And check whether all necessary models are installed: CLIP Vision, IP-Adapter, and the AnimateDiff motion models and motion Loras. Please also make sure that the images you are using are still in the same folder and have not been moved somewhere else in the meantime. I hope this helps. If not, please let me know. Good luck!

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 10 days ago

      @@Showdonttell-hq1dk tried but failed again

  • @skycladsquirrel
    @skycladsquirrel 10 days ago

    amazing!

  • @double-7even
    @double-7even 10 days ago

    I can't understand the weights for IPAdapter Weights. There are two values, e.g. "0.0, 1.0", in IPAdapter Weights. Is the first value (0.0) the weight for the first image batch (blue in the workflow) and the second value (1.0) for the second image batch (cyan in the workflow)? Btw, amazing work 👍

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      Thanks for watching! I spent another couple of hours today looking for a detailed explanation of the nodes involved, but it seems that there are no detailed texts available. So I can only tell you what my very long tests have shown. By the way, I'm currently working on a new video about it, and some things have become a bit clearer. My approach is empirical, so to speak: I test it and see how the nodes behave with each other. It's incredibly complex, even though it sometimes looks so simple.

      My observations: the two values (0.0, 1.0) indicate how much weight is given to the IP adapters on the one hand and the prompt on the other. 1.0 = the IP adapter, i.e. the images, receives the greater weight. 0.0 = the prompts receive the greater weight. As the outputs of the IPAdapter Weights node are called Image_1 and Image_2, I assume that the first image of the Images Batch Multiple node is processed by the first IPAdapter Batch node, at least more strongly; the tests also show this. And therefore the second image by the second IPAdapter Batch node. However, things get more complex here. I'll try to shed more light on this darkness in the next few videos. :) But the short answer to your question is: yes, something like that.
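A minimal sketch of what such a two-image crossfade amounts to. This is illustrative only: the function name and the linear easing are assumptions, and the actual IPAdapter Weights node offers several easing modes.

```python
# Illustrative sketch of per-frame weight scheduling for two reference images.
# Assumption: simple linear easing; real nodes offer more interpolation modes.

def schedule_weights(start: float, end: float, frames: int):
    """Return two per-frame weight lists that crossfade over `frames` frames."""
    if frames < 2:
        return [end], [1.0 - end]
    step = (end - start) / (frames - 1)
    weights_1 = [start + step * i for i in range(frames)]
    # The second image gets the complementary weight, so the influence
    # crossfades from image 1 to image 2 over the animation.
    weights_2 = [1.0 - w for w in weights_1]
    return weights_1, weights_2

w1, w2 = schedule_weights(0.0, 1.0, 5)
print(w1)  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(w2)  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

With weights (0.0, 1.0), the first image starts with no influence and ends fully weighted, while the second image fades out symmetrically, which matches the crossfade behavior described above.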

    • @double-7even
      @double-7even 10 days ago

      @@Showdonttell-hq1dk Thank you! I'm looking forward to the new video, and I really appreciate your hard work! Another problem I found is that changing the resolution to 2x (768x768) produces a broken video. Details are repeated vertically and the whole scene is mixed up. Do you know why, and how can I prevent this? EDIT: I think I know the answer. It's the latent size, and it's limited to the size the model was trained on (512x512). For a bigger size we need to upscale it?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      @@double-7even Yes, that's right. I had the same problem, but after a few runs with the same seed and a resolution of 768 x 512 the problem disappeared completely. Anyway, it seems advisable to use the same seed, even if the changes only occur after a few runs. My seed is 998999, so if you use a copy of my workflow, there's a good chance that it will work there too. I don't know if you have changed it. But I would be interested to know whether the seed works across all computers.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      @@double-7even And that is, as you say, a typical SD 1.5 problem. With the SDXL models, you no longer have these worries. But unfortunately Animate Diff does not yet work properly with those models.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      But I have just found out that one can integrate additional IP adapter embeds into the workflow. That's pretty cool and will definitely be included in the new video.

  • @urbanquiz
    @urbanquiz 11 days ago

    I'm following your instructions and the workflow from your website. I've got this error message. I'm a noob and trying to figure this out. Thank you for your help. BatchPromptSchedule.animate() missing 4 required positional arguments: 'pw_a', 'pw_b', 'pw_c', and 'pw_d'

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      I have tried to reproduce the error, but without success. What you can do is first click on "Update all" in the ComfyUI Manager and then restart ComfyUI. If that doesn't work, I suggest first uninstalling the "FizzNodes" via the ComfyUI Manager and then reinstalling them. If that doesn't work either, you may need to update Python. It may also help to open the "update" directory in the ComfyUI folder and run the "update_comfyui_and_python_dependencies.bat" file, at least if you are using ComfyUI_windows_portable. It also makes sense to check whether you have installed or downloaded all the models used in the workflow and whether they are in the correct directory. I hope this helps. If not, please let me know.

  • @GamingDaveUK
    @GamingDaveUK 11 days ago

    Do you have a tutorial for the SDXL version? So far, every guide I look at for animation shows 1.5 models. Given SDXL's prompt cohesion and better image quality, it's surprising so many are still using 1.5.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Unfortunately, this does not yet work with SDXL; at least not the version 2 motion Loras etc. This means that you can't really use everything that AnimateDiff Evolved provides with SDXL. It was also an adjustment for me, because I have only been using SDXL models for the last few months. In the next few days I want to try everything with HotshotXL; maybe it will work better, but I can't really say anything about that yet. You can download a basic XL workflow from the site. But as I said, there's not much you can do with it. Most of the workflows I've found mix SD 1.5 with SDXL in some way with different adapter Loras, but they're not satisfactory. Link: civitai.com/articles/2950/guide-comfyui-animatediff-xl-guide-and-workflows-an-inner-reflections-guide

  • @FlippingSigmas
    @FlippingSigmas 11 days ago

    great video!

  • @ismgroov4094
    @ismgroov4094 11 days ago

    Sir, workflow plz, sir

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      On my website alienate.de, soldier! :) Downloadable as a json file: IPA Weight Scheduling + Animate Diff Workflow, right click --> save link as ... --> drag it into the ComfyUI user interface and the mission can start!

    • @ismgroov4094
      @ismgroov4094 11 days ago

      @@Showdonttell-hq1dk thx sir!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      @@ismgroov4094 You're welcome! You are dismissed and have orders to have fun with it! :P

  • @ismgroov4094
    @ismgroov4094 11 days ago

    Thx

  • @ojciecvaader9279
    @ojciecvaader9279 11 days ago

    I love it, even if I don't understand most of what you are doing :))

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Thanks! This is the uncut version of my video about the weight scheduling nodes. Maybe it would be helpful if you watch the edited version. Everything that is not explained in it, I will show in the follow-up videos.

  • @martinkaiser5263
    @martinkaiser5263 11 days ago

    Where exactly can I download the workflow? I just don't see it

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Hey, thanks for watching. The workflow can be found as a JSON file on my website, alienate.de. Just scroll down to the Comfy images; the first image is the logo of my channel, and to its right is a list with the heading "Download Workflow Json". The last item on the list is "IPA Weight Scheduling + Animate Diff Workflow", which is the link to the workflow. Right-click to open the context menu, click "Save link as ...", then simply drag the downloaded JSON file into the ComfyUI user interface and install the nodes marked in red via the ComfyUI Manager and "install missing custom nodes". That's how it should work. I hope this was helpful. If so, have fun with it.

  • @697_
    @697_ 11 days ago

    no AI voice that says IP A(dapt:1.5)er

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Did you miss it? :)

    • @697_
      @697_ 11 days ago

      @@Showdonttell-hq1dk Yes, lol. I’m excited for your new videos! I'm tailoring my workflow to fit my 6GB VRAM to avoid memory errors.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      @@697_ Oh, 6 GB of VRAM, that's not so easy. When I started with Stable Diffusion a year ago, I only had 6 GB of VRAM available; you quickly reach the limits. I sold my graphics card back then and got myself an RTX 3060, plus 32 GB of RAM, which helps. That was quite an investment. But I already had the idea of making YouTube videos to focus my learning process, and because it's fun. In general, I also think it's a good approach to get the best out of the available equipment. I don't know if you're already using them, but with the ComfyUI command line arguments alone you can already set a few things as far as low VRAM is concerned. Here is a list of useful commands; you just need to add them to run_nvidia_gpu.bat, at least if you are using the ComfyUI Windows portable version: www.reddit.com/r/comfyui/comments/15jxydu/comfyui_command_line_arguments_informational/
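For example, such flags are appended to the launch line in run_nvidia_gpu.bat. This is a sketch: `--lowvram` is one of ComfyUI's real memory-management flags, but check `python main.py --help` for the options your version supports.

```shell
REM run_nvidia_gpu.bat (ComfyUI Windows portable) -- example launch line
REM with a memory-saving flag appended; verify flags against --help.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```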

    • @697_
      @697_ 10 days ago

      @@Showdonttell-hq1dk I have a laptop with a 3060 graphics card that performs well for my work. However, the task manager shows 14 GB of VRAM, which is actually divided into 6 GB of dedicated VRAM and 8 GB of shared memory. This is odd because the shared 8 GB is rarely utilized, and the system crashes whenever an application requires more than the 6 GB of dedicated VRAM.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 10 days ago

      @@697_ Have you tried increasing the virtual memory? More precisely, the swap file? I have set mine to 80 gigabytes. Since then I've had no more problems with it.

  • @697_
    @697_ 11 days ago

    The way your AI says Hugging Face is quite cute tbh 1:36

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Tbh, one of the reasons I chose Charlotte was because her voice keeps me motivated when making the videos. And, if that works for me, then there's a good chance that viewers will like her AI voice too. ;)

  • @697_
    @697_ 11 days ago

    ip adAPTer

  • @AmazenWisdom
    @AmazenWisdom 11 days ago

    Wow. Another great tutorial! Thank you so much for sharing!

  • @MilitantHitchhiker
    @MilitantHitchhiker 11 days ago

    Why do people insist on using bad models just because they're popular?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Thanks for watching. I don't actually insist on using specific models. There are now so many models that you can't keep up with trying them out. What's more, not all models work equally well with the IP adapters. I therefore usually use the models where the results look good. But thanks for the hint, I will keep an eye out for interesting models in the future.

  • @victor20886
    @victor20886 12 days ago

    Wow! Impressive quality! Good job. What specs does your computer have?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      Thank you, I appreciate that. My computer has the following specs: AMD Ryzen 5 2600 six-core processor at 3.40 GHz, 32 GB RAM, Nvidia GeForce RTX 3060 with 12 GB VRAM, Windows 10.

  • @pixelcounter506
    @pixelcounter506 12 days ago

    Impressive work, thanks for sharing. Always the same... fighting the challenges with noodles and new topics, while old concepts and workflows are still waiting to be improved!

  • @697_
    @697_ 12 days ago

    your AI voice really puts a lot of emphasis on the word "adapter"

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      I think she 'senses' my appreciation for the IP adapters! :)

  • @DerekShenk
    @DerekShenk 14 days ago

    Since viewers will want to learn what you teach them, it would be far more beneficial if you included links to your workflow. Additionally, if you really want to stand out from other tutorials, including links to the actual images you use in your workflow, enabling viewers to fully reproduce what you show them, would be fantastic!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 days ago

      Thanks for watching! You can find and download the workflow on my website alienate.de. As for the images, my idea is to show in the tutorials how you can set up and use the workflow yourself. Without exception, all the images I use in the videos are created or photographed by myself. I also work as a photographer, which means that some of the images used are also linked to image rights. Apart from all the fun of learning how to use ComfyUI and create videos from it, it's also a financial matter. Thanks for your remarks and interest anyway.

    • @clangsison
      @clangsison 11 days ago

      sometimes people are lazy, that's why they want the workflow. Others view these types of videos (and Matteo's) as very insightful if one truly wants to understand how things work.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 11 days ago

      @@clangsison I didn't want to say it out loud. But yes, it's probably true. Although I can understand it somewhat. When you come into contact with it for the first time, a fully functional workflow like this is really helpful. You can take it apart and understand step by step how it works. Thanks for watching. :)

    • @amorgan5844
      @amorgan5844 7 days ago

      @Showdonttell-hq1dk It's always appreciated; your work and workflows are some of the best I've ever seen.

  • @abaj006
    @abaj006 14 days ago

    Amazing work! Thanks for sharing, much appreciated!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 days ago

      I'm glad you like it. Thanks for watching and subscribing.

  • @CosmicFoundry
    @CosmicFoundry 14 days ago

    awesome, thanks for this! Do you have workflow somewhere?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 days ago

      Thanks for watching. I'm glad you like it. I'll definitely upload the workflow to my website later today, www.alienate.de.

    • @WhySoBroke
      @WhySoBroke 14 days ago

      @@Showdonttell-hq1dk Great method, and thanks in advance for the workflow!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 days ago

      You can now download the workflow as a json file from my website if you like. Have fun trying it out. The link is usually in the video description.

    • @CosmicFoundry
      @CosmicFoundry 11 days ago

      @@Showdonttell-hq1dk got it thanks! keep up the great work!

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 10 дней назад

      @@Showdonttell-hq1dk I got an error, please help.

  • @planethanz9762
    @planethanz9762 18 дней назад

    Great, thank you! (Subscribed immediately.) I've just discovered making AI music videos via SD, and more control over the scenery (alongside more control over the lighting, which IC has already solved) was still a missing puzzle piece.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 дней назад

      I'm glad I could help, and thanks for subscribing. I uploaded a new video today about a workflow that combines the IP adapters with Animate Diff Evolved. At the end you can export the result as a video file, so it's easy to include in your own videos. Certainly useful for music videos too.

  • @decoryder
    @decoryder 18 дней назад

    Nice, really interesting concepts! Thank you for your ambitious and engaging work, much appreciated!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 дней назад

      Thanks! The cool thing about Stable Diffusion and ComfyUI is that you can use them to realise the wildest concepts, at least most of the time.

  • @drucshlook
    @drucshlook 19 дней назад

    Nice! Do you have a Discord for your channel? I'd love to exchange ideas about that kind of stuff.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 дней назад

      At the moment I don't have a Discord for my channel. But that's a good idea, I'll see if I can get it set up soon.

  • @philippeheritier9364
    @philippeheritier9364 20 дней назад

    Very good result thanks a lot

  • @drucshlook
    @drucshlook 20 дней назад

    very good tutorial, got yourself a new subscriber

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 19 дней назад

      Thank you very much, I'm delighted. The next video is about the new IP adapter weight nodes and AnimateDiff. It's crazy what you can do with them.

  • @hamidmohamadzade1920
    @hamidmohamadzade1920 22 дня назад

    wow amazing technique

  • @AmazenWisdom
    @AmazenWisdom 25 дней назад

    Great tutorial and workflow as always. Thank you so much! Is the voice from Elevenlabs? That's a good voice for story telling.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 19 дней назад

      Thank you for the kind words and for watching the videos. I'm glad you like them. And yes, the voice profile is "Charlotte", I just kind of like it. :)

    • @AmazenWisdom
      @AmazenWisdom 19 дней назад

      @@Showdonttell-hq1dk Thank you so much for sharing her name. I love making motivational animation videos. I hope you don't mind me using it for my future videos. :)

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 дней назад

      No, of course I don't mind. As I said, I've been looking for a suitable AI voice for my videos for a long time and Charlotte is simply the best! :)

  • @MilesBellas
    @MilesBellas 25 дней назад

    Great content. Superb.

  • @MilesBellas
    @MilesBellas 25 дней назад

    These videos are great ! Thanks for posting!

  • @MilesBellas
    @MilesBellas 25 дней назад

    Great ! Thanks!

  • @Oscar-ie9jo
    @Oscar-ie9jo 27 дней назад

    The ai voice sounds like Greta 😂

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 26 дней назад

      In fact, it is based on a Swedish-British-Irish accent. :)

  • @bryangalisa4337
    @bryangalisa4337 28 дней назад

    Hey man, that's an amazing lesson on how to use ComfyUI and the infinite possibilities for making AI images. How did you make the usage of your CPU, RAM, GPU and VRAM appear? That was very cool; where and how can I get this to show up in my ComfyUI? Thanks for the video!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 27 дней назад

      Thanks for watching, glad you like it! This is a node that you can install directly via the ComfyUI Manager. Just go to the menu "Install Custom Nodes", enter "monitor" in the search field and install "Crystools", or use the link: github.com/crystian/ComfyUI-Crystools. Have fun trying out the endless possibilities in ComfyUI. :)

    • @bryangalisa4337
      @bryangalisa4337 27 дней назад

      @@Showdonttell-hq1dk Hey man, this is amazing, thanks for sharing this important information. Now I can see it in my ComfyUI!

  • @saberkz
    @saberkz 29 дней назад

    Please send me your Discord or contact info, I need your help building a custom workflow 😊

  • @EternalKernel
    @EternalKernel Месяц назад

    Wonderful videos and work! Any way you can improve the recording quality, though? It seems maybe over-compressed.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 19 дней назад

      Thanks a lot! I'll look out for the recording thing next time. Thanks for the hint.

  • @aliyilmaz852
    @aliyilmaz852 Месяц назад

    Thanks for showing us. I sometimes can't tell which output has higher quality, because I see some blur in the generated images and need to sharpen them manually. Is it common to get blurry images from the first pass, or am I doing something wrong? P.S.: I use Turbo and Lightning models when getting these blurry images.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk Месяц назад

      Yes, with the Turbo and Lightning models I often encounter the problem of blurred images, especially on the first pass. Although these models are impressively fast, they also have their weaknesses, especially in very complex workflows. That's why I was so pleased with the Perturbed Attention Guidance method: you have to experiment a lot, but after a few attempts the results are simply impressive. For images meant for professional use, though, I'm already used to post-processing them with image editing software; only sometimes do perfect pictures come straight out of the camera. In my videos, however, I try to create everything using ComfyUI's internal tools wherever possible. Additional pixel upscaler nodes (4xUltraSharp, etc.) are usually sufficient for resharpening blurred images. As shown in the video, if the NNLatentUpscale node is integrated into the workflow and several KSamplers are in use, upscaling is often enough for sharp images. But overall the problem is apparently "normal", as you can also see in my uncut versions of the various videos. Thanks for watching! :)

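      A minimal Python sketch of the kind of manual resharpening mentioned above, using Pillow's unsharp mask. This is a generic illustration, not part of the ComfyUI workflow; the generated gray image is just a placeholder standing in for a first-pass render, and the filename is made up.

```python
from PIL import Image, ImageFilter

# Placeholder image standing in for a slightly blurry first-pass render.
img = Image.new("RGB", (64, 64), (128, 128, 128))

# Unsharp mask: radius = blur size in pixels, percent = sharpening
# strength, threshold = minimum contrast change that gets sharpened.
sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

# Save the resharpened result (hypothetical output filename).
sharp.save("render_sharp.png")
```

In practice you would tune `radius` and `percent` per image; heavy settings reintroduce halo artifacts, which is why light upscaler-based sharpening inside the workflow is often preferable.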
    • @aliyilmaz852
      @aliyilmaz852 Месяц назад

      @@Showdonttell-hq1dk Thanks a lot, I'm just a noob and you helped me a lot. I hope someday you'll show us how you do post-processing; it's still another mystery to me. Thanks again, and until next time I'll watch the uncut versions.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk 14 дней назад

      The great thing about the whole Stable Diffusion AI thing is that it's so new that nobody really knows how it works in every aspect yet. It's something like pioneering work. We're probably all just noobs at different starting levels. But of course I'm glad if I can help.