ComfyUI Tutorial: Save the Strawberry People! Optimizing AI Image Generation with TensorRT Nodes

  • Published: 11 Jul 2024
  • Help us save the Strawberry People!
    Join me on a journey as I explore the world of TensorRT Nodes for NVIDIA CUDA RTX cards! In this video, I'll demonstrate how these nodes can convert Stable Diffusion models (SD 1.5 as well as the newer SDXL and SD3) into CUDA-optimized engines for faster, more efficient performance. But that's not all - our mission is to save the legendary Strawberry People by reducing the energy footprint of AI image generation.
    What to Expect:
    I'll show you the TensorRT Nodes and their benefits for NVIDIA CUDA RTX cards.
    You'll get an overview of how to use TensorRT Nodes in ComfyUI and how to install them!
    I'll run real-world tests to see if NVIDIA's promises hold true.
    You'll learn about a mysterious tribe, the so-called Strawberry People, and how optimizing your GPU setup can save gigawatt-hours of energy each year, letting them live their lives as they did for millions of years before us.
    Why This Matters:
    AI is consuming immense amounts of energy, and it's up to us to make a difference. By optimizing our AI image generation, we can reduce our carbon footprint and help save our planet - and the Strawberry People!
    Tools & Resources:
    Get ComfyUI: github.com/comfyanonymous/Com...
    You need an NVIDIA CUDA RTX card
    You need to install the TensorRT Nodes (use the ComfyUI Manager) - see the sketch after this list for what the build step actually does
    Stable Diffusion models (SD1.5, SD2.1, SDXL and SD3)
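    For the curious: the TensorRT Nodes drive the conversion for you inside ComfyUI, but the build step conceptually boils down to compiling the model into a TensorRT .engine file. The Python sketch below illustrates that idea with the plain tensorrt API on a hypothetical ONNX export of the diffusion UNet; the file names and the FP16 flag are illustrative assumptions, not the nodes' actual internals.

      import tensorrt as trt

      # Hypothetical paths for illustration; the ComfyUI TensorRT nodes manage these for you.
      ONNX_PATH = "unet.onnx"
      ENGINE_PATH = "unet_static.engine"

      logger = trt.Logger(trt.Logger.INFO)
      builder = trt.Builder(logger)
      # ONNX models require an explicit-batch network definition.
      network = builder.create_network(
          1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
      parser = trt.OnnxParser(network, logger)

      # Parse the exported ONNX graph into the TensorRT network.
      with open(ONNX_PATH, "rb") as f:
          if not parser.parse(f.read()):
              for i in range(parser.num_errors):
                  print(parser.get_error(i))
              raise SystemExit("ONNX parse failed")

      # Enable FP16, which is where RTX cards get much of their speed-up.
      config = builder.create_builder_config()
      config.set_flag(trt.BuilderFlag.FP16)

      # A "static" engine keeps the fixed input shapes baked into the ONNX export;
      # a dynamic build would add an optimization profile with min/opt/max shapes instead.
      serialized_engine = builder.build_serialized_network(network, config)
      if serialized_engine is None:
          raise SystemExit("Engine build failed")
      with open(ENGINE_PATH, "wb") as f:
          f.write(serialized_engine)

    In practice you would use the node pack's own build workflows rather than a script like this; the point is only that a one-time build trades compile time for faster, lower-energy inference afterwards.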
    Join the Community:
    Let's work together to create a more energy-efficient future for AI. Share your thoughts, questions, and results in the comments below. Don't forget to like, subscribe, and hit the notification bell to stay updated with our latest tutorials and research videos!
    Watch Now and Be a Part of the Change!
    #AI #TensorRT #NVIDIA #ComfyUI #EnergyEfficiency #StrawberryPeople #Tutorial #StableDiffusion #RTX

Comments • 10

  • @trisvin7068 • 6 days ago

    Hi, how did you do it for SD3? I can't find the .json to build the static version.

  • @nkofr • 8 days ago

    Nice! Is it compatible with LoRAs, ControlNets, etc.?

    • @EdwinBraun • 7 days ago +1

      It should, as it just replaces the checkpoint, so everything after it should work just fine.

  • @trisvin7068 • 6 days ago

    How do you make an SD3 .engine? Please help.

    • @EdwinBraun • 5 days ago

      What do you mean? I show the process; so far, those are all the options you have.

    • @trisvin7068 • 5 days ago

      @EdwinBraun I don't have the TensorRT SD3. I can build SDXL Turbo fine, but I can't build SD3.

    • @trisvin7068 • 5 days ago

      @EdwinBraun How did you build the SD3 TensorRT static engine? I can't figure it out. Did you use the SDXL base TRT engine build, or did you find a TRT SD3 build workflow? Help me, I beg you lol

  • @openroomxyz • 25 days ago

    This is all nice, but it just adds complexity, which is not optimal. NVIDIA could host all models for all their GPUs in an optimal format.

    • @cebasVT • 24 days ago

      Yes, that would be nice. However, the mix of drivers, hardware, and CUDA models seems way too complex. It is pure chaos out there right now. Just look at PyTorch and CUDA versions and how they sometimes just don't work together. Maybe AI can save us :)

    • @trisvin7068 • 5 days ago

      @cebasVT How did you build the SD3 TensorRT static engine? I can't figure it out. Did you use the SDXL base TRT engine build, or did you find a TRT SD3 build workflow? Help me, I beg you lol