Stable Diffusion ComfyUI Married With Ollama LLM - A Streamline Prompting Workflow

  • Published: 24 Sep 2024
  • In this video, we're excited to introduce you to the revolutionary AI Image and Video creation process using Ollama and ComfyUI. With the power of large language models and stable diffusion, you can now bring your ideas to life like never before.
    Ollama Windows Setup Guide : • How To Install Ollama ...
    More Detail About This Tutorial : thefuturethink...
    Workflow For Supporters: / 100582862
    If You Like tutorial like this, You Can Support Our Work In Patreon:
    / aifuturetech
    Discord : / discord
    Say goodbye to complicated steps and multiple tools. Ollama streamlines the workflow, allowing you to generate stunning visuals, immersive animations, and engaging stories all in one place.
    In this video, we'll walk you through the process of setting up Ollama on your local machine, downloading large language models, and using the custom node "IF Prompt To Prompt" from "ComfyUI IF AI Tools" to generate prompts for Stable Diffusion image generation.
    We'll show you how to connect the custom node to Ollama, create workflow templates, and even generate different styles of images using Stable Diffusion and CLIP Vision. Plus, we'll compare the IPAdapter approach with the plain Stable Diffusion method to help you understand the differences and choose the right approach for your projects.
    Whether you're a beginner or an experienced creator, this video is packed with valuable insights and practical tips to enhance your AI image and video creation process.
    Don't miss out on this opportunity to unlock your creativity and take your projects to the next level with Ollama and ComfyUI. Watch the video now and start creating breathtaking visuals today!
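The core idea in the video is letting a local Ollama server expand a short idea into a full Stable Diffusion prompt. As a rough sketch of what such a node does under the hood (the model name and system prompt here are illustrative assumptions; the endpoint and port are Ollama's documented defaults):

```python
import json

# Ollama's default local REST endpoint.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_prompt_request(idea: str, model: str = "llama3") -> dict:
    """Build the JSON body for one non-streaming Ollama generation call.

    The system prompt below is a hypothetical example of how a node
    might steer the LLM toward Stable Diffusion-style prompt output.
    """
    return {
        "model": model,
        "system": ("You are a prompt writer for Stable Diffusion. "
                   "Expand the user's idea into a single comma-separated "
                   "prompt describing subject, style, and lighting."),
        "prompt": idea,
        "stream": False,
    }

payload = build_prompt_request("a cyberpunk city at dusk, flying cars")
print(json.dumps(payload, indent=2))
# Sending it would look like:
#   requests.post(OLLAMA_URL, json=payload).json()["response"]
```

With Ollama running, posting this payload returns the expanded prompt in the `response` field, which the workflow then feeds into the sampler's text conditioning.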

Comments • 48

  • @impactframes
    @impactframes 5 months ago +4

    😀Hi, thank you so much for making this video and letting more people know about the IF_AI_tools custom node. It is really well explained; I put a link to this video on the repo. Please stay tuned, I will be adding more features soon.

    • @TheFutureThinker
      @TheFutureThinker  5 months ago +1

      Oh 😄hey! Love your work!!!
      Please keep it updated and add more LLM support. Can you add an option to connect LM Studio too?

    • @TheFutureThinker
      @TheFutureThinker  5 months ago +1

      Guys! Please give some love to the author!❤ And support this great work for the Comfy community!

    • @impactframes
      @impactframes 5 months ago

      @@TheFutureThinker Yes, I will connect LM Studio too in the next update. Thank you.

    • @impactframes
      @impactframes 5 months ago

      @@TheFutureThinker Thank you so much for your kindness and support

    • @TheFutureThinker
      @TheFutureThinker  5 months ago

      😉

  • @igor_timofeev
    @igor_timofeev 5 months ago +1

    This is super cool! Just tested it out and it's working. I'm amazed!

  • @kalakala4803
    @kalakala4803 6 months ago +1

    Great tutorial! Thanks! I'd love to see more new ideas like this for crossing images over with LLMs.

  • @shareeftaylor3680
    @shareeftaylor3680 1 month ago

    How do you install the models? This is the third video I've watched, and people skip how to install the model.

  • @unknownuser3000
    @unknownuser3000 6 months ago +1

    Great video! I haven't used ComfyUI, but this might get me to try it. That looks amazing!

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 5 months ago +1

    Can you do a video about using LLaVA nodes to batch-output image captions for training, while nudging the model to look at specific aesthetic elements in the dataset images, like lighting or photography style?

  • @MilesBellas
    @MilesBellas 2 months ago

    Blender/Maya with Ollama and ComfyUI = pipeline?

  • @DavidCHO-i4s
    @DavidCHO-i4s 5 months ago

    Hello, I followed your lecture closely, but I received this error message when running the backend with the ollama serve command:
    Error: listen tcp 127.0.0.1:xxxxx: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
    Can you tell me what the problem is?
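That bind error on Windows usually means the port `ollama serve` wants is already taken, most often because Ollama is already running in the background (the Windows installer starts it as a tray service). A quick way to check, as a sketch, whether something is already listening on Ollama's default port 11434:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when a server is already bound to that port.
        return s.connect_ex((host, port)) == 0

if port_in_use("127.0.0.1", 11434):
    print("Something (likely Ollama) is already listening on 11434; "
          "no need to run 'ollama serve' again.")
else:
    print("Port 11434 is free; 'ollama serve' should start cleanly.")
```

If the port is already in use, the existing instance can be used directly, or stopped from the tray before starting a new one.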

  • @xcom9648
    @xcom9648 6 months ago +1

    Interesting. How does it compare to using BLIP and WD14 captioning? Those have been working pretty well for me.

    • @TheFutureThinker
      @TheFutureThinker  6 months ago +1

      I have used those too, back in the old days with Automatic1111. But when it comes to developing an app, an LLM works better than WD14, because BLIP and WD14 only do image-to-text, while an LLM can turn a story into a text prompt. With an LLM integrated, we are also able to do other things in the app.
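For the image-to-text side being compared here, Ollama's generate endpoint also accepts base64-encoded images alongside the prompt when a vision model such as LLaVA is loaded. A minimal sketch of building such a request (the model name and instruction text are illustrative assumptions):

```python
import base64
import json

def build_caption_request(image_bytes: bytes, model: str = "llava") -> dict:
    """Build a JSON body asking a vision model to describe one image.

    Ollama expects images as base64-encoded strings in the "images" list.
    """
    return {
        "model": model,
        "prompt": "Describe this image as a detailed Stable Diffusion prompt.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# A tiny stand-in for real image bytes, just to show the payload shape.
payload = build_caption_request(b"\x89PNG fake bytes")
print(json.dumps(payload)[:120])
```

Unlike BLIP or WD14, the same call can carry extra instructions, which is why an LLM-based captioner can be steered toward story-style or prompt-style output.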

  • @MilesBellas
    @MilesBellas 2 months ago

    SD with NeMo by Mistral?

  • @laoAA-es6kg
    @laoAA-es6kg 4 months ago

    How can I save the generated prompt for reuse, and avoid Ollama reloading the model every time an image is generated?

  • @EvgenyCh-th8dc
    @EvgenyCh-th8dc 6 months ago

    There is a problem: the Load Images (Path) node does not connect to the IF Image to Prompt node. Is there any way to solve this?

  • @Mayssus-qp6jy
    @Mayssus-qp6jy 5 months ago

    Hi, thank you for sharing this video. I'm intrigued by the idea of customizing an Ollama model to use my own vocabulary. I wonder if it's feasible to refine the model to generate images based solely on the words I provide. Do you think this is achievable?

    • @TheFutureThinker
      @TheFutureThinker  5 months ago

      Fine-tune the LLM first, then load it with Ollama and connect it in this workflow. It's possible.
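The "load it with Ollama" step above is done with a Modelfile; a minimal sketch, assuming the fine-tuned weights have been exported to GGUF (the file name and system prompt below are placeholders):

```
# Modelfile (point FROM at your own exported GGUF weights)
FROM ./my-finetuned-model.gguf
PARAMETER temperature 0.7
SYSTEM You turn short ideas into Stable Diffusion prompts using only plain vocabulary.
```

It is registered with `ollama create my-sd-prompter -f Modelfile`, after which the workflow can select `my-sd-prompter` like any other local model.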

  • @Ekopop
    @Ekopop 6 months ago

    Great stuff as usual!

  • @luckypenguin5
    @luckypenguin5 16 days ago

    I use ComfyUI on Lightning AI. Is it possible to connect it to a local Ollama?

  • @TheColonelJJ
    @TheColonelJJ 4 months ago

    Is this the same as IF_prompt_MKR for Automatic1111, Forge, etc.? If so, will you do a tutorial on it?

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      I will check whether IF AI Tools is available in Automatic1111 with IF_prompt_MKR or not.

  • @GrantLylick
    @GrantLylick 6 months ago

    I wonder if you can incorporate LM Studio instead... I imagine it's just a path and port?

    • @TheFutureThinker
      @TheFutureThinker  6 months ago +1

      I think it can, but that would require modifying the code to connect to LM Studio's HTTP IP and port number.
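LM Studio's local server speaks the OpenAI-compatible chat API (on port 1234 by default), so swapping it in would mostly mean changing the URL and payload shape. A hedged sketch of the request that would replace the Ollama call (the model name is a placeholder; LM Studio serves whichever model is currently loaded):

```python
import json

# LM Studio's local server default; adjust if you changed the port in its UI.
LMSTUDIO_URL = "http://127.0.0.1:1234/v1/chat/completions"

def build_lmstudio_request(idea: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat completion body for LM Studio."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Rewrite the idea as a Stable Diffusion prompt."},
            {"role": "user", "content": idea},
        ],
        "temperature": 0.7,
    }

print(json.dumps(build_lmstudio_request("neon samurai portrait"), indent=2))
```

The response text would come back under `choices[0].message.content` rather than Ollama's `response` field, which is the main code change the reply is referring to.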

  • @teealso
    @teealso 6 months ago

    Just curious, how was the video with the flying cars achieved? I've been unable to make anything like that.

    • @TheFutureThinker
      @TheFutureThinker  6 months ago

      Oh, that one was created long ago. I used AnimateDiff, starting from a stock video and running AnimateDiff on it.

    • @teealso
      @teealso 6 months ago

      @@TheFutureThinker Got it. Thanks.

  • @teealso
    @teealso 6 months ago

    Question:
    I got Ollama installed and running per the script I grabbed off your Patreon page, and it all seems to work (thank you for that). I've downloaded the models, but I'm not getting the right results when it analyzes the image. I put in a picture of a plane parked on a runway with a stormy sky in the background, and the analysis of the image consistently isn't even in the zone; it keeps thinking it's a skyscraper, or a lake setting, or a circuit board, etc. Pretty much everything other than what's in the image. Any ideas why?

    • @TheFutureThinker
      @TheFutureThinker  6 months ago

      What is your prompt in the LLM? I want to try it too.

    • @teealso
      @teealso 6 months ago

      @@TheFutureThinker Here's the prompt:
      a futuristic fighter plane in a stormy sky, dark, moody, ominous, medium wide angle
      As an addendum, I believe I've downloaded the wrong models. I grabbed Llama rather than LLaVA. Trying this now.

  • @toothpastesushi5664
    @toothpastesushi5664 6 months ago

    We went from "you can still do art, just type the prompt" to "it writes the prompt for you" real fucking quick

  • @LahiruBandara-iq8xd
    @LahiruBandara-iq8xd 2 months ago

    Can I do this on an RTX 3060 GPU?

  • @MilesBellas
    @MilesBellas 2 months ago

    Have you seen Griptape nodes?

  • @datrighttv
    @datrighttv 6 months ago

    How long did those videos take to make in one process?

  • @rsunghun
    @rsunghun 6 months ago

    🎉