ComfyUI - Learn how to generate better images with Ollama | JarvisLabs

  • Published: 12 Apr 2024
  • In this video we will learn how to use the power of LLMs, via Ollama and the Comfy_IF_AI nodes, to generate the best images. Vishnu will also take us through how to set up Ollama on JarvisLabs instances. (A short illustrative sketch of this prompt-refinement idea follows the description below.)
    Workflow : github.com/jarvislabsai/comfy...
    Check out Ollama: ollama.com/
    Check out our ComfyUI basics playlist: • ComfyUI - Getting star...
    Check out our socials:
    Website: jarvislabs.ai/
    Discord: / discord
    X: / jarvislabsai
    LinkedIn: / jarvislabsai
    Instagram: / jarvislabs.ai
    Medium: / jarvislabs
    Connect with Vishnu:
    X: / vishnuvig
    Linkedin: / vishnusubramanian
  • Science
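
A minimal illustrative sketch of the core idea (our addition, not the workflow file linked above): ask a locally running Ollama model to expand a short idea into a detailed image prompt before it reaches the sampler, which is roughly what the Comfy_IF_AI nodes automate inside ComfyUI. The model name "llama3" and the instruction wording are assumptions; the only requirements are that Ollama is serving on its default port and that the model has been pulled.

```python
# Illustrative sketch: use a local Ollama model to refine a short user prompt
# into a detailed text-to-image prompt, mirroring what the IF_AI prompt node does.
# Assumes Ollama is serving on its default port (11434) and that the "llama3"
# model has already been pulled with `ollama pull llama3`.
import requests

def refine_prompt(idea: str, model: str = "llama3") -> str:
    instruction = (
        "Rewrite the following idea as one richly detailed prompt for a "
        "text-to-image model. Mention subject, style, lighting and composition.\n\n"
        f"Idea: {idea}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": instruction, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    # The refined text would then go into the CLIP Text Encode node in ComfyUI.
    print(refine_prompt("a castle at sunset"))
```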

Comments • 28

  • @chorton53
    @chorton53 13 days ago

    This is really amazing!! Great work, guys!

  • @evolv_85
    @evolv_85 15 days ago

    Awesome. Thanks.

  • @rishabh063
    @rishabh063 1 month ago

    Hi Vishnu, great video.

  • @aimademerich
    @aimademerich 1 month ago

    Phenomenal

  • @RickySupriyadi
    @RickySupriyadi 10 days ago

    Is this what they call a magic prompt, where the Ollama model refines the user prompt?

  • @hempsack
    @hempsack 1 month ago

    I cannot get this to run on my laptop; it fails to load into ComfyUI with both the Manager and a manual install. I have fully updated Comfy, and I also ran the requirements txt file to get all the needed files, and it still fails. Any idea why? I am running an Asus ROG Strix 2024 with 64 GB of RAM and a 4090 with 16 GB of VRAM. I have all the requirements needed for AI generation.

    • @JarvislabsAI
      @JarvislabsAI  1 month ago

      Did you try checking the error log to narrow down the issue?

  • @Ai-dl2ut
    @Ai-dl2ut 1 month ago

    Hello sir... does this IF_AI node take a lot of time? For me it's taking like 15 minutes to load for every queue... using an RTX 3060.

    • @JarvislabsAI
      @JarvislabsAI  1 month ago +1

      It depends on which model you choose. Also, try running Ollama directly and see how fast it is (see the timing sketch after this thread).

    • @Ai-dl2ut
      @Ai-dl2ut 1 month ago

      @@JarvislabsAI Thanks, let me try that
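
As an illustration of the suggestion above (not from the video): timing one request straight against the local Ollama server, bypassing ComfyUI, shows whether the delay comes from the model itself or from the IF_AI node. The model name "phi3" is an assumption, chosen as a small, fast model; any pulled model works.

```python
# Illustrative sketch: time one generation directly against the Ollama server
# to compare with the latency seen inside the IF_AI node. Assumes Ollama is
# running locally and that the "phi3" model has been pulled.
import time
import requests

start = time.perf_counter()
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "phi3",
          "prompt": "Describe a foggy forest in one sentence.",
          "stream": False},
    timeout=300,
)
resp.raise_for_status()
elapsed = time.perf_counter() - start
print(f"Ollama answered in {elapsed:.1f}s: {resp.json()['response'].strip()}")
```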

  • @Necro-wr2tn
    @Necro-wr2tn 1 month ago +1

    Hey, this looks great but I have a question. How much does it cost to generate these images?

    • @JarvislabsAI
      @JarvislabsAI  1 month ago

      These are all open-source tools, so there is not much cost associated with them. If you need a GPU and the base software set up, then you would be paying for the compute. Pricing starts at $0.49 an hour, and the actual billing happens per minute. jarvislabs.ai/pricing

    • @goodchoice4410
      @goodchoice4410 1 month ago +1

      lol

    • @Necro-wr2tn
      @Necro-wr2tn 1 month ago

      @@goodchoice4410 why lol?

  • @mufasa.alakhras
    @mufasa.alakhras 1 month ago

    How do I get the Load Checkpoint?

    • @JarvislabsAI
      @JarvislabsAI  1 month ago +1

      You can double-click and search for the Load Checkpoint node. If you want the checkpoint model, you can download it from this link: huggingface.co/RunDiffusion/Juggernaut-XL-v8/tree/main (a download sketch follows this thread).

    • @mufasa.alakhras
      @mufasa.alakhras 1 month ago

      Thank you! @@JarvislabsAI
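
For reference, a hedged sketch of downloading that checkpoint into ComfyUI's checkpoints folder with huggingface_hub (not shown in the video). The exact filename inside the repo is an assumption; check the file list at the link above if it differs.

```python
# Illustrative sketch: fetch the Juggernaut XL v8 checkpoint into ComfyUI's
# checkpoints folder so the Load Checkpoint node can find it.
# The filename below is an assumption; verify it against the repo's file list.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="RunDiffusion/Juggernaut-XL-v8",
    filename="juggernautXL_v8Rundiffusion.safetensors",  # assumed filename
    local_dir="ComfyUI/models/checkpoints",
)
```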

  • @israeldelamoblas5043
    @israeldelamoblas5043 1 month ago +2

    Ollama is super slow; I would like a faster version using LM Studio or similar. Thanks.

    • @JarvislabsAI
      @JarvislabsAI  1 month ago

      Noted!

    • @RickySupriyadi
      @RickySupriyadi 10 days ago

      Slow or fast, doesn't that depend on which model you are using? phi3 in Ollama is blazing fast.

  • @impactframes
    @impactframes 1 month ago +1

    Hi, thank you very much, great tutorial ❤

    • @JarvislabsAI
      @JarvislabsAI  1 month ago

      Thanks for creating the node, looking forward to your future work 😊

    • @impactframes
      @impactframes 1 month ago +2

      @@JarvislabsAI I made a big update, please check it out, along with my other nodes for talking avatars 😉 And thanks again for the tutorial ❤️

    • @JarvislabsAI
      @JarvislabsAI  1 month ago

      @@impactframes Sure, we will look into it 🙌

    • @impactframes
      @impactframes 1 month ago +1

      @@JarvislabsAI thank you :)

  • @spiffingbooks2903
    @spiffingbooks2903 15 days ago

    The topic is interesting, but (in common with most YouTube Comfy experts) the whole presentation is confusing for the 90% of the audience that has just stumbled upon this. I think to be more successful you need to be clearer about what you want to achieve and why it's a good idea. Explain how this JarvisAI fits into it, and make it clear which resources need to be downloaded and exactly how, in the least problematic way.
    I don't want to appear too negative, as of course you are wanting to be helpful; I'm just trying to give some tips on how to improve your presentation and hopefully, as a consequence, increase your subscriber numbers.