How To Create ComfyUI API Endpoint For AI Image Generation (Tutorial Guide)

  • Published: 11 Oct 2024
  • In this tutorial, we dive into how to create a ComfyUI API endpoint. This will enable you to communicate with other applications or AI models to generate Stable Diffusion images or videos.
    It allows you to automate the workflow of image or video creation and build your own automation system.
    It may sound a bit complex, but don't worry, I'll guide you through the process step by step.
    Materials For This Tutorial : / 99239386
    If you like tutorials like this, you can support our work on Patreon:
    / aifuturetech
    Discord : / discord
    First, we need to install the ComfyUI Manager and enable the developer mode option. This will allow us to save our workflow diagrams in API format. We'll also need to set up ComfyUI as a web server to access the interface through our web browsers.
    Once we have everything set up, we'll dive into the websocket API examples provided in the ComfyUI project on GitHub. We'll make some necessary edits to fit our needs, such as setting unique titles for custom nodes and updating the text prompts.
    I'll demonstrate a basic text-to-image prompt using the Juggernaut SD 1.5 model. We'll explore how to load the JSON files for our workflow and make changes accordingly. I'll also show you how to set seed numbers for different sampler nodes.
    Throughout the video, I'll provide helpful tips and tricks to make the process smoother. We'll save our workflow as an API call and then move on to editing the Python code in Visual Studio Code (VS Code). Don't worry if you prefer a different text editor, you can use whichever suits you best.
    I'll guide you through checking the server address, creating client IDs, and sending requests to the server. We'll modify the code to load the JSON workflow data and make the necessary changes to our text prompts and seed numbers.
    By the end of this video, you'll have a clear understanding of how to create an API server with ComfyUI and customize it to your specific needs. So let's dive in and get started!
    #stablediffusion #comfyui #apiserver #AImultimodels
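
The steps described above (load the API-format JSON, patch the prompts and seeds of titled nodes, then POST the workflow with a client ID) can be sketched in Python. This is a minimal sketch, not the exact script from the video: the file name `workflow_api.json`, the node title "Positive Prompt", and the default server address `127.0.0.1:8188` are assumptions you would adapt to your own workflow.

```python
# Minimal sketch of driving ComfyUI through its HTTP API, assuming a
# local server on the default port and a workflow exported with
# "Save (API Format)". File name, node title, and prompt text are
# placeholders to adapt.
import json
import random
import urllib.request
import uuid

SERVER = "127.0.0.1:8188"  # assumption: default ComfyUI address

def load_workflow(path):
    """Load an API-format workflow: a dict mapping node id -> node."""
    with open(path) as f:
        return json.load(f)

def set_text_and_seed(workflow, prompt_text, node_titles=("Positive Prompt",)):
    """Patch the text of titled CLIPTextEncode nodes and randomize the
    seed of every KSampler node, as done in the video."""
    for node in workflow.values():
        title = node.get("_meta", {}).get("title")
        if node.get("class_type") == "CLIPTextEncode" and title in node_titles:
            node["inputs"]["text"] = prompt_text
        if node.get("class_type") == "KSampler":
            node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    return workflow

def queue_prompt(workflow, client_id):
    """POST the workflow to /prompt; returns the queued prompt info."""
    payload = json.dumps({"prompt": workflow, "client_id": client_id}).encode()
    req = urllib.request.Request(f"http://{SERVER}/prompt", data=payload)
    return json.loads(urllib.request.urlopen(req).read())

# Example usage (needs a running server and an exported workflow file):
#   wf = set_text_and_seed(load_workflow("workflow_api.json"),
#                          "a photo of a castle at sunset")
#   queue_prompt(wf, str(uuid.uuid4()))
```

The unique node titles set in the video are what make the patching step reliable: matching on a title survives workflow edits better than hard-coding node IDs.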

Comments • 42

  • @Antone0218
    @Antone0218 6 months ago +2

    Great job on the tutorial. Would love to see a playlist for this project. I kinda want something like that myself.

    • @TheFutureThinker
      @TheFutureThinker  6 months ago +1

      Coming up: it will be connected with Ollama.

  • @TheLoreLabs
    @TheLoreLabs 5 months ago +1

    Legit got a brand new custom computer just because of the epic stuff I've seen done on your channel! Can't wait to get confused every day for the next couple of months learning this stuff!

    • @TheFutureThinker
      @TheFutureThinker  5 months ago

      Ok, I will have to ask Nvidia to pay me a commission for recommending that people use their hardware for AI. LOL😂

    • @TheFutureThinker
      @TheFutureThinker  5 months ago +1

      Anyway, don't be confused, just enjoy and have fun.
      This thing is learning and entertaining at the same time.

    • @TheLoreLabs
      @TheLoreLabs 5 months ago

      @@TheFutureThinker Loving it. Already canceled RunwayML because I no longer need depth maps for video lmao

  • @crazyleafdesignweb
    @crazyleafdesignweb 7 months ago

    Great tutorial, you are getting more in depth on this.

    • @TheFutureThinker
      @TheFutureThinker  7 months ago +1

      More to come!

    • @blackfoxai
      @blackfoxai 3 months ago

      @@TheFutureThinker Hi, I'm very interested in this, please continue.

  • @muhammadardhian7667
    @muhammadardhian7667 A month ago

    Amazing tutorial! 😍😍 Can it be integrated with FastAPI to create multiple workflows based on different endpoints?

  • @esuvari
    @esuvari 7 months ago +1

    Just when I need it! Thank you. Any ideas on how to run consecutive flows, back to back, or some kind of queuing mechanisms etc?

    • @TheFutureThinker
      @TheFutureThinker  7 months ago +1

      Yup, I am doing a flow step by step, and I will record the progress of each. Thanks :)

    • @Ekopop
      @Ekopop 7 months ago

      As a total coding muggle, why would you need that? Could you give me a clear application example for this tutorial? I usually get the gist, but today... Oo?

    • @TheFutureThinker
      @TheFutureThinker  7 months ago

      @AxelBrault Good question, you will see in the progress. Stay tuned. :)

    • @esuvari
      @esuvari 7 months ago

      @@Ekopop For me it's about dynamic automation. I have a user-generated content project where I ask the user for a head shot pic and a couple of questions (e.g. gender, body type, hair colour etc.) and generate a custom movie-trailer-like video in which they are the star. So it's important for me to be able to tweak my prompts according to the answers to those questions. Also, this API scripting option opens up the possibility to run consecutive flows, organize the output and batch render the final video.
      In short, it's essential for a web app.

    • @Ekopop
      @Ekopop 7 months ago

      @@esuvari Gotchaaa, the money maker. Thanks for the explanation, and link your app when it's ready, I'm curious ahah

  • @yql-dn1ob
    @yql-dn1ob 7 months ago

    Great tutorial, this has really helped me

  • @chrisder1814
    @chrisder1814 2 months ago

    Hello, I'd like to ask you a question about AI image-editing software with an API.

  • @chrisgreenwell3404
    @chrisgreenwell3404 6 months ago

    Great tutorial. Is there a way to query the existing models and ControlNets and select them from the returned query results?

    • @TheFutureThinker
      @TheFutureThinker  6 months ago

      Yes, absolutely: in the client app, set up a drop-down menu or list and select those model names as values. Then return them in the API parameters.
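
The reply's idea of filling a client-side drop-down from the server can be sketched against ComfyUI's `/object_info` endpoint, which describes every node class's inputs, including the list of installed model files for combo inputs. The stock loader class names below (`CheckpointLoaderSimple`, `ControlNetLoader`) and the response layout are assumptions to verify against your ComfyUI version.

```python
# Sketch of querying installed checkpoints and ControlNet models via
# ComfyUI's /object_info endpoint, so a client app can fill a
# drop-down menu with valid model names.
import json
import urllib.request

SERVER = "127.0.0.1:8188"  # assumption: default ComfyUI address

def extract_choices(info, node_class, input_name):
    """Pull the choice list for one combo input out of an /object_info
    response; a combo input looks like [[choice, choice, ...], ...]."""
    return info[node_class]["input"]["required"][input_name][0]

def list_choices(node_class, input_name):
    """Fetch /object_info for one node class and return its choices."""
    url = f"http://{SERVER}/object_info/{node_class}"
    info = json.loads(urllib.request.urlopen(url).read())
    return extract_choices(info, node_class, input_name)

# Example usage (needs a running server):
#   list_choices("CheckpointLoaderSimple", "ckpt_name")
#   list_choices("ControlNetLoader", "control_net_name")
```

Whatever name the user picks from the drop-down can then be patched into the workflow JSON (e.g. the `ckpt_name` input of the checkpoint loader node) before queuing the prompt.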

  • @sadikrizvi6468
    @sadikrizvi6468 6 months ago

    Sir, really great and helpful tutorial ❤.
    Sir, can you also make a video tutorial on how we can remotely access it from other devices? That would be really helpful 🙏.

    • @TheFutureThinker
      @TheFutureThinker  6 months ago

      If you want to remotely access ComfyUI from another device, the basic method is to type the host IP address and port number into your device's web browser.
      For the API, all you do is put that Python script onto the other device as one of the features of a program; that gets into further software development.

  • @jiexu-j9w
    @jiexu-j9w 4 months ago

    Thanks. Do you know how the ComfyUI API can work with Unreal Engine? For example, an Unreal Engine UI button that triggers a ComfyUI workflow. I can't find a related tutorial about it.

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      Think of it this way: just make an API web request once ComfyUI is hosted on a server.
      In Unreal, I remember you can attach code to an action, then do a POST web request in that function.

    • @jiexu-j9w
      @jiexu-j9w 4 months ago

      Thanks. I know it needs some code, such as websockets, but in practice I don't know exactly how to do it. Anyway, thanks for your idea.

  • @nitingoyal1495
    @nitingoyal1495 7 months ago

    Would it work if we also use IPAdapter and feed an image input to it? Also, what would be the format of the image input?

    • @TheFutureThinker
      @TheFutureThinker  7 months ago

      It could work, in theory: on the app's client side, use an image upload and pass the image byte data from the file upload to the server-side API.
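
A hedged sketch of the byte-data upload described in the reply, targeting ComfyUI's built-in `/upload/image` route: the file goes up as multipart form data, and the filename in the response can then be referenced by a LoadImage node feeding an IPAdapter. The endpoint path and the `image` form field match the stock upload handler, but treat them as assumptions to verify against your build; the helper names are illustrative.

```python
# Sketch of uploading client-side image bytes to ComfyUI's
# /upload/image endpoint as multipart/form-data, using only the
# standard library.
import json
import mimetypes
import urllib.request
import uuid

SERVER = "127.0.0.1:8188"  # assumption: default ComfyUI address

def build_multipart(filename, data):
    """Encode one file as a multipart body; returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    head = (f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="image"; '
            f'filename="{filename}"\r\n'
            f"Content-Type: {ctype}\r\n\r\n").encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, f"multipart/form-data; boundary={boundary}"

def upload_image(path):
    """POST an image file; the response includes the stored filename,
    which a LoadImage node in the workflow can then reference."""
    with open(path, "rb") as f:
        body, content_type = build_multipart(path, f.read())
    req = urllib.request.Request(f"http://{SERVER}/upload/image", data=body,
                                 headers={"Content-Type": content_type})
    return json.loads(urllib.request.urlopen(req).read())
```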

    • @stevietee3878
      @stevietee3878 5 months ago

      @@TheFutureThinker I have a similar project: I need to connect my custom ComfyUI workflow to a mobile app (created using React by another developer). The user of the mobile app is presented with a choice of base stylized images from a library (images of couples, 2 people in each image, totally safe for work). The user then uploads an image of themselves and an image of their partner or friend, and via IPAdapter, ControlNet (Canny) and ReActor faceswap, the uploaded faces are swapped onto the original base stylized image, and the final image with the swapped faces is sent to the user via Google Drive, Dropbox or similar. Can you help me with this please?

  • @GrantLylick
    @GrantLylick 6 months ago

    The only thing you forgot to mention is to import random. Good thing VS Code's Codeium helps out. Now I just have to figure out how to get an AI through LM Studio to generate the process. Are you looking at anything like that?

    • @TheFutureThinker
      @TheFutureThinker  6 months ago +1

      I did use Ollama connected with SD ruclips.net/video/EQZWyn9eCFE/видео.htmlsi=1l3kSfZ4AbysuLs4
      I think it's the same concept for LM Studio. Maybe you can use this node and change the code to connect with the LM Studio URL and port number.
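
The suggestion above (point the code at LM Studio's URL and port) can be sketched against LM Studio's OpenAI-compatible chat endpoint: have the local model write an image prompt, then patch that text into the ComfyUI workflow before queuing it. The default port 1234 and the helper names are assumptions to adapt to your local setup.

```python
# Sketch of asking a local LM Studio server (OpenAI-compatible API)
# to generate a Stable Diffusion prompt for the ComfyUI workflow.
import json
import urllib.request

LM_STUDIO = "http://127.0.0.1:1234/v1/chat/completions"  # assumed default

def extract_reply(response_json):
    """Pull the assistant text out of an OpenAI-style chat response."""
    return response_json["choices"][0]["message"]["content"]

def generate_image_prompt(subject):
    """Ask the loaded local model for one Stable Diffusion prompt line."""
    payload = json.dumps({
        "messages": [
            {"role": "system",
             "content": "Write one Stable Diffusion prompt, nothing else."},
            {"role": "user", "content": subject},
        ],
    }).encode()
    req = urllib.request.Request(
        LM_STUDIO, data=payload, headers={"Content-Type": "application/json"})
    return extract_reply(json.loads(urllib.request.urlopen(req).read()))

# Example usage (needs LM Studio running with a model loaded):
#   generate_image_prompt("a castle at sunset")
```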

    • @GrantLylick
      @GrantLylick 6 months ago

      @@TheFutureThinker Thanks, much appreciated, and keep up the great work!

  • @UnsolvedMystery51
    @UnsolvedMystery51 7 months ago

    Great content! Thanks

  • @jakubsiekiera8098
    @jakubsiekiera8098 6 months ago

    The python_embedded folder is not there when I clone the repo on my M1 Mac. I guess it's only for the Windows portable version ://

    • @TheFutureThinker
      @TheFutureThinker  6 months ago +1

      As you mentioned, that's the Windows portable version. On Mac, do what you need to do.

    • @jakubsiekiera8098
      @jakubsiekiera8098 6 months ago

      Found a workaround, guys - just pip install websocket_client using your default Python version, and use python instead of python.exe in the run command.

  • @kalakala4803
    @kalakala4803 7 months ago

    😮 You are a monster, how can you know how to code, create graphics, and run a business 😂 crazy guy

    • @TheFutureThinker
      @TheFutureThinker  7 months ago

      😂 Na... I just like computer stuff, that's my hobby. That's all.

  • @ebben23
    @ebben23 3 months ago

    BISAYA😄