
Practical Introduction for TyDiffusion

  • Published: 17 Aug 2024
  • TyDiffusion is an implementation of Stable Diffusion in 3ds Max. In this video I'll show you the theory and help you understand how Stable Diffusion works in a practical, everyday sense, with real-world examples.
    Links from the Video
    docs.tyflow.co...
    Contact Links
    Website: hallettvisual.com/
    Website AI for Architecture: www.hallett-ai...
    Instagram: / hallettvisual
    Facebook: / hallettvisual
    Linkedin: / matthew-hallett-041a3881

Comments • 17

  • @LudvikKoutnyArt · a month ago · +5

    I believe the technical term for an AI enthusiast is a "proompter" :)

    • @AB-wf8ek · a month ago

      Not true. Although language is an integral part, with complex node-based processes it's only a fraction of it.

  • @ramdpshah · a month ago · +1

    Thanks for the tutorial 🎉🎉

  • @YansRiegel · a month ago · +1

    Thanks! Great one

  • @USEFization · a month ago

    You are just a genius. Amazing job you are doing!

    • @matthallettai · a month ago · +1

      Ah, thanks man! That's so kind of you.

  • @ivanibanez1273 · a month ago

    Finally!!

  • @omer133 · a month ago

    Thank you for the video. What Stable Diffusion models can you recommend, specifically for interior design and for architecture separately?

    • @matthallettai · 19 days ago

      Don't bother with any model that claims it's good for interiors or architecture, unless it's a LoRA add-on to experiment with for adding certain looks. My favored checkpoints right now are AlbedoXL 2.1 for exteriors, NightVision, EpicPhotogasm, Real Vision XL, and some others (spelling may be off... I'm away from my PC). Best to download popular XL models that are made for photorealism; portrait examples are OK. Then compare them with the X/Y/Z plot script at the bottom of A1111 or Forge. It makes a handy grid for you to compare.
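
      If you'd rather script that comparison than click through the UI, here is a minimal sketch that renders the same prompt and seed through several checkpoints via the A1111/Forge web API (the server must be launched with the --api flag). The endpoint and payload fields are standard A1111 API; the checkpoint titles, prompt, port, and file names are placeholder assumptions, so query GET /sdapi/v1/sd-models for your real checkpoint titles first.

      import base64
      import requests

      API = "http://127.0.0.1:7860"  # default local A1111/Forge address (assumption)

      # Hypothetical checkpoint titles; replace with entries from /sdapi/v1/sd-models.
      checkpoints = ["albedobaseXL_v21", "nightvisionXL", "epicphotogasm"]

      for name in checkpoints:
          payload = {
              "prompt": "photorealistic exterior of a modern house, golden hour",
              "seed": 12345,  # fixed seed so only the checkpoint varies
              "steps": 30,
              "width": 1024,
              "height": 1024,
              "override_settings": {"sd_model_checkpoint": name},
          }
          r = requests.post(f"{API}/sdapi/v1/txt2img", json=payload, timeout=600)
          r.raise_for_status()
          with open(f"compare_{name}.png", "wb") as f:
              f.write(base64.b64decode(r.json()["images"][0]))

      Opening the saved images side by side gives roughly the same comparison the X/Y/Z plot grid does.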

  • @R1PPA-C · 20 days ago

    Have you worked with the animation side of things yet? I'm struggling to get the animations to come out like the single images do... the results aren't wildly different, but it's almost as if it's using a different model...
    Also, how do you have it set up so that you can see the image as it's generating? Mine just goes through the whole process and then outputs the final image. I mainly want to see what's happening as the animation is processing, as currently I have to wait for the whole sequence to be finalised before I see what the result will look like. Thanks :)

    • @matthallettai · 19 days ago · +1

      You're always going to have that weird morphing effect with frame-by-frame SD animation. No matter what tricks you try, no frame is 100% the same as the last, at least currently; I'm sure someone out there is working on it. The AI video you see now is made with video-trained models. What we need is a hybrid, or a ControlNet designed for frame-by-frame img2img denoising. The current tech is AnimateDiff and Deforum (see the example on this channel). Personally I like SVD, but that has little control.
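
      To make the frame-by-frame img2img approach concrete, here is a minimal sketch that re-diffuses a rendered frame sequence through the A1111/Forge web API (server launched with --api). Fixing the seed and keeping the denoising strength low removes two sources of flicker, though, as noted above, fine detail will still morph between frames. The address, prompt, folder names, and parameter values are placeholder assumptions.

      import base64
      import glob
      import os

      import requests

      API = "http://127.0.0.1:7860"  # default local A1111/Forge address (assumption)
      os.makedirs("out", exist_ok=True)

      for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
          with open(path, "rb") as f:
              init_b64 = base64.b64encode(f.read()).decode()
          payload = {
              "init_images": [init_b64],   # the rendered frame to re-diffuse
              "prompt": "photorealistic architectural exterior",
              "seed": 12345,               # fixed seed removes one source of flicker
              "denoising_strength": 0.35,  # low strength stays close to the render
              "steps": 25,
          }
          r = requests.post(f"{API}/sdapi/v1/img2img", json=payload, timeout=600)
          r.raise_for_status()
          with open(f"out/frame_{i:04d}.png", "wb") as f:
              f.write(base64.b64decode(r.json()["images"][0]))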

    • @R1PPA-C · 19 days ago

      @matthallettai Well, the issue I'm having is not the difference between frames, but that the initial outcome is completely different when doing a single frame with the same settings as when I hit animate.
      I said not wildly different, but sometimes they are... I train a model to produce what I want for each frame, but when I go to animate it's as if I've used completely different prompts... I'm lost.

    • @matthallett4126 · 19 days ago

      @R1PPA-C The more complex your scene, the more interpolation the AI does with what it "sees". The examples of other animations you've seen look smooth because of their simplicity in size and materials. Leaves and grass, for example, will change dramatically between frames no matter what you do. Small details change so much it's not worth it. Trust me, it's not you.

  • @jhgil2204 · a month ago

    I want to know about sequence rendering!

    • @AB-wf8ek · a month ago

      In order to get animation with temporal consistency, you'll need to use something like ComfyUI, which is a browser-based node editor.
      Just diffusing over individual frames with a plugin like this will look very flickery.