Stable Diffusion ComfyUI Married With Ollama LLM - A Streamlined Prompting Workflow
- Published: 24 Sep 2024
- In this video, we're excited to introduce you to the revolutionary AI Image and Video creation process using Ollama and ComfyUI. With the power of large language models and stable diffusion, you can now bring your ideas to life like never before.
Ollama Windows Setup Guide : • How To Install Ollama ...
More Detail About This Tutorial : thefuturethink...
Workflow For Supporters: / 100582862
If you like tutorials like this, you can support our work on Patreon:
/ aifuturetech
Discord : / discord
Say goodbye to complicated steps and multiple tools. Ollama streamlines the workflow, allowing you to generate stunning visuals, immersive animations, and engaging stories all in one place.
In this video, we'll walk you through the process of setting up Ollama on your local machine, downloading large language models, and using the custom node "IF Prompt To Prompt" or "ComfyUI IF AI Tools" to generate prompts for Stable Diffusion image generation.
We'll show you how to connect the custom node to Ollama, create workflow templates, and even generate different styles of images using Stable Diffusion and CLIP Vision. Plus, we'll compare the IP Adapter approach with the Stable Diffusion method to help you understand the differences and choose the right one for your projects.
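Under the hood, the prompt-generation step boils down to one HTTP call to Ollama's local REST API (the custom node wires this up for you). A minimal sketch, assuming the default Ollama endpoint and a hypothetical `llama3` model pulled locally:

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust if you changed the port.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_payload(idea: str, model: str = "llama3") -> dict:
    """Build a non-streaming /api/generate request asking the LLM to
    expand a short idea into a Stable Diffusion prompt."""
    return {
        "model": model,
        "prompt": (
            "Rewrite the following idea as a detailed Stable Diffusion "
            "prompt, comma-separated keywords only: " + idea
        ),
        "stream": False,
    }

def generate_prompt(idea: str, model: str = "llama3") -> str:
    """POST the payload to a locally running `ollama serve` instance."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(idea, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled first):
# print(generate_prompt("a lonely lighthouse in a storm"))
```

The returned text can then be fed straight into a CLIP Text Encode node as the positive prompt.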
Whether you're a beginner or an experienced creator, this video is packed with valuable insights and practical tips to enhance your AI image and video creation process.
Don't miss out on this opportunity to unlock your creativity and take your projects to the next level with Ollama and ComfyUI. Watch the video now and start creating breathtaking visuals today!
😀 Hi, thank you so much for making this video and letting more people know about the IF_AI_tools custom node. It is really well explained; I put a link to this video on the repo. Thank you, and please stay tuned, I will be adding more features soon.
Oh 😄hey! Love your work!!!
Please keep it updated and add support for more LLMs. Can you add an option to connect LM Studio too?
Guys! Please give some love to the author! ❤ And support this great work for the ComfyUI community!
@@TheFutureThinker Yes, I will connect LM Studio too in the next update. Thank you.
@@TheFutureThinker Thank you so much for your kindness and support
😉
This is super cool! Just tested it out and it's working, I'm amazed
Great to hear!
Great tutorial! Thanks! I'd love to see more new ideas like this for crossing image generation over with LLMs.
Glad you liked it!
How do you install the models? This is the third video I've watched, and people skip how to install the models.
Great video! I haven't used ComfyUI, but this might get me to try it. That looks amazing!
Go for it! Just try 😊
Can you do a video about using LLaVA nodes to batch-output image captions for training, nudging the model to look at specific aesthetic elements in the dataset images, like lighting or photography style?
Ok, I will try.
Blender/Maya with Ollama and ComfyUI = pipeline?
Hello, I followed your tutorial closely, but I received this error message when running the backend with the `ollama serve` command.
Error: listen tcp 127.0.0.1:xxxxx: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Can you tell me what the problem is?
It is interesting. How does it compare to using BLIP and wd14 captioning? Using those has been working pretty well for me.
I have used those too, back in the old days with Automatic1111. But when it comes to developing an app, an LLM works better than wd14, because BLIP and wd14 are only for image-to-text, while an LLM can turn a whole story into a text prompt. Also, with an LLM integrated, we are able to do other things in the app as well.
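For comparison, image-to-text through Ollama is the same `/api/generate` call with a vision model (such as LLaVA) and a base64-encoded image attached. A small sketch of building that request; the prompt wording is an illustrative assumption, not the custom node's exact text:

```python
import base64

def build_caption_request(image_bytes: bytes, model: str = "llava") -> dict:
    """Build an Ollama /api/generate payload that attaches a
    base64-encoded image for a vision model such as LLaVA."""
    return {
        "model": model,
        "prompt": "Describe this image as a Stable Diffusion prompt.",
        # Ollama expects images as a list of base64 strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# POST this as JSON to http://127.0.0.1:11434/api/generate
# (Ollama's default endpoint), e.g. with urllib or requests.
```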
SD with NeMo by Mixtral?
How can I save the prompt words, and avoid Ollama being reloaded on every generation, to save loading time?
There is a problem: the Load Images (Path) node does not connect to the IF Image to Prompt node. Is there any way to solve this?
Hi, thank you for sharing this video. I'm intrigued by the idea of customizing an Ollama model to use my own vocabulary. I wonder if it's feasible to refine the model to generate images based solely on the words I provide. Do you think this is achievable?
Fine-tune the LLM first, then load it with Ollama and connect it in this workflow. It's possible.
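Loading a fine-tuned model into Ollama is typically done with a Modelfile; a sketch, where the weights path, model name, and system prompt are all hypothetical placeholders:

```
# Modelfile for a custom fine-tuned model (paths/names are examples)
FROM ./my-finetuned-model.gguf

# Bake the custom vocabulary/instructions into the system prompt
SYSTEM """You write Stable Diffusion prompts using only the project's approved vocabulary."""

PARAMETER temperature 0.7
```

Register it with `ollama create my-model -f Modelfile`, then select `my-model` in the custom node.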
Great stuff, as usual!
Appreciate your support, thanks.🖐️
I use ComfyUI on Lightning AI. Is it possible to connect it with a local Ollama?
What is that? Haven't heard of it.
Is this the same as IF_prompt_MKR for Automatic1111, Forge, etc? If so, will you do a tutorial on it?
I will check whether IF_prompt_MKR makes IF AI Tools available in Automatic1111 or not.
I wonder if you can incorporate LM Studio instead... I imagine it's just a path and a port?
I think it can, but that would require modifying the code to connect to LM Studio's HTTP IP and port number.
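LM Studio's local server exposes an OpenAI-compatible chat endpoint, so pointing the workflow at it is mostly a matter of swapping the URL and payload shape. A minimal sketch, assuming LM Studio's default port 1234:

```python
import json
import urllib.request

# LM Studio's local server speaks an OpenAI-compatible chat API.
# 1234 is its default port; change it if you configured another.
LMSTUDIO_URL = "http://127.0.0.1:1234/v1/chat/completions"

def build_chat_request(user_prompt: str) -> dict:
    """OpenAI-style chat payload that LM Studio's server accepts."""
    return {
        "messages": [
            {"role": "system",
             "content": "You write Stable Diffusion prompts."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

def ask_lm_studio(user_prompt: str) -> str:
    """POST the payload to the local LM Studio server."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_chat_request(user_prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The response format differs from Ollama's (`choices[0].message.content` instead of `response`), which is the main code change the node would need.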
Just curious, how was the video with the flying cars achieved? I've been unable to make anything like that.
Oh, that one was created long ago. It started from a stock video, and I ran AnimateDiff on it.
@@TheFutureThinker Got it. Thanks.
Question:
I got Ollama installed and running per your script that I grabbed off your Patreon page, and it all seems to work (thank you for that). I've downloaded the models, but I'm not getting the same results when it analyzes the image. I put in a picture of a plane parked on a runway with a stormy sky in the background, and the analysis of the image consistently isn't even in the zone; it keeps thinking it's a skyscraper, or a lake setting, or a circuit board, etc. Pretty much everything other than what's in the image. Any ideas why?
What is your prompt in the LLM? I wanna try it too.
@@TheFutureThinker Here's the prompt:
a futuristic fighter plane in a stormy sky, dark, moody, ominous, medium wide angle
As an addendum, I believe I've downloaded the wrong models. I grabbed LLaMA rather than LLaVA. Trying this now.
we went from "u can still do art just type the prompt" to "it writes the prompt for you" real fucking quick
😅true
Can I do this on an RTX 3060 GPU?
Yup 👍
Have you seen the Griptape nodes?
How long did those videos take to generate in one run?
For me, about 2-3 mins.
@TheFutureThinker What GPU are you using?
🎉