TouchDesigner Walkthru - 360° Projection of AI-Diffused Content

  • Published: 15 Sep 2024
  • This demonstration shows the approach I used when first attempting to map AI-generated images onto a spherical projection. Many different methods have emerged since this tutorial was uploaded; however, I hope to inspire and share my first techniques with those attempting their own designs. The process includes using a commercial license of TouchDesigner in order to enable the high-resolution output needed for projection. I encourage enthusiasts to improve upon this concept and to find their own, more efficient methods of achieving the same result. (A short sketch of the underlying equirectangular mapping follows the resource list below.)
    Tools & Resources used:
    derivative.ca/
    polyhaven.com/...
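
    For context, the core of the technique is the standard equirectangular mapping between directions on a sphere and pixel positions in a 2:1 image. The short Python sketch below only illustrates that mapping; the function name and axis convention are assumptions of mine, not taken from the video. TouchDesigner performs the equivalent step when an equirectangular texture is wrapped onto sphere geometry and rendered from a camera placed inside it.

      import math

      def equirect_uv(x, y, z):
          # Map a unit direction (x, y, z) to (u, v) in [0, 1] x [0, 1]
          # for a 2:1 equirectangular image. +Y is treated as "up" here;
          # axis conventions differ between tools, so this is illustrative.
          lon = math.atan2(x, -z)                   # longitude, -pi..pi
          lat = math.asin(max(-1.0, min(1.0, y)))   # latitude, -pi/2..pi/2
          u = 0.5 + lon / (2.0 * math.pi)
          v = 0.5 - lat / math.pi
          return u, v

      # Example: the forward direction (-Z) lands at the centre of the image.
      print(equirect_uv(0.0, 0.0, -1.0))            # (0.5, 0.5)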

Comments • 29

  • @markus_knoedel
    @markus_knoedel 1 month ago

    Thank you. This is a very good starting point to go into AI VR video.

  • @PhilippLenssen
    @PhilippLenssen 1 year ago +1

    I'm amazed by AI, but I'm also amazed by humanity. There's so much brainpower showing in this.

  • @delayedchaos
    @delayedchaos 1 year ago +2

    I legit share your videos almost daily! Super inspiring. What you're doing is so fascinating. Between you and the Riffusion devs, my mind is blown.

  • @vividoo
    @vividoo 1 year ago +2

    Very cool! Thank you so much for this tutorial!

  • @dreamingtulpa
    @dreamingtulpa 1 year ago

    Thanks for sharing Scottie!

  • @Bobberg90
    @Bobberg90 1 year ago

    Awesome. I'm going to do this to my office.

  • @Aryadei
    @Aryadei 1 year ago

    Amazing tutorial. Thank you so much for sharing!

  • @x4nder207
    @x4nder207 1 year ago

    Nice video, Thanks.

  • @dylanhanson5650
    @dylanhanson5650 1 year ago

    Beast mode. Thanks.

  • @SanderBos_art
    @SanderBos_art 1 year ago

    Owww man, can't wait to try this!

  • @diegocaumont
    @diegocaumont 1 year ago

    Great stuff ❤️

  • @SpirusFilms
    @SpirusFilms 7 months ago

    Great work! How would you create a click-and-drag implementation like YouTube's 360 viewer?

  • @FraztheWizard
    @FraztheWizard 1 year ago

    Amazing! Thanks for sharing :)

  • @gianniskaragiannis4300
    @gianniskaragiannis4300 1 year ago +1

    Amazing work. I was really curious about what you mention at the end of the video, that it would be possible to automate the diffusion. I've been experimenting but with no luck. I tried using the API (from ChatGPT) of a text-to-image AI and then constantly sending the final image back to recalculate some other parts of the image, so I would have a constantly changing image. I couldn't find a way to diffuse smaller parts by a percentage or something, so I added a noise map with a changing seed for the parts that would be recalculated. Could you please explain a bit further how it is done?
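
    One possible way to read the idea described in this comment — regenerating only the parts of the image selected by a fresh-seeded noise map on each pass — is sketched below in plain NumPy. The regenerate_region function is a hypothetical stand-in for whatever img2img or inpainting call is actually used; none of this is from the video.

      import numpy as np

      def make_patch_mask(height, width, seed, keep_fraction=0.2, cell=32):
          # Threshold a coarse random grid into a blocky mask: True where the
          # image should be re-diffused this pass, False where it is kept.
          rng = np.random.default_rng(seed)
          coarse = rng.random((height // cell + 1, width // cell + 1))
          cells = (coarse < keep_fraction).astype(np.uint8)
          mask = np.kron(cells, np.ones((cell, cell), dtype=np.uint8))
          return mask[:height, :width].astype(bool)

      def regenerate_region(image, mask):
          # HYPOTHETICAL stand-in for an img2img / inpainting call that only
          # re-diffuses the masked pixels; replace with the model or API you use.
          noise = np.random.randint(0, 256, image.shape, dtype=np.uint8)
          return np.where(mask[..., None], noise, image)

      frame = np.zeros((512, 1024, 3), dtype=np.uint8)   # stand-in equirect frame
      for step in range(10):
          mask = make_patch_mask(*frame.shape[:2], seed=step)
          frame = regenerate_region(frame, mask)          # image drifts patch by patch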

  • @Middlecurvewar
    @Middlecurvewar 1 year ago +2

    Sorry if I missed something, but what step creates the "morphing" between the original and diffused image?

  • @TheHorrySheetShow
    @TheHorrySheetShow 1 year ago +1

    omg do we have access to any of this to play with yet!?

  • @AlexCaptain
    @AlexCaptain 1 year ago

    Howdy! Any plans to make more in depth tuts regarding that topic? Great work! Cheers!

  • @DSJOfficial94
    @DSJOfficial94 1 year ago +1

    cool

  • @jnwarnotte2479
    @jnwarnotte2479 1 year ago

    Hello, sometimes I don't understand some parts because English is not my first language, particularly the one with multiple images. Anyway, this seems to work with the base image only too. Great tutorial, thanks.

  • @pedronan5938
    @pedronan5938 1 year ago

    Have you tried connecting TD with a VR viewer such as an Oculus to view the content?

  • @ParkinsonMax
    @ParkinsonMax 1 year ago

    This is amazing! I've been following your work for a while and was very curious to see your approach in TD. I have a question - how do you diffuse the image in MJ or another SD model without too many changes? I've been trying for a couple of hours with different prompts, no prompts, image weights etc. Everything I try I get a completely different image from my original HDRI / equi. Is there a command to ensure that the original image stays (mostly) intact while adding diffusion on top? Thanks for your time!

    • @scottiefox2525
      @scottiefox2525  1 year ago +4

      If your selected AI image generator has a % option or a "strength" option, try values of 50% or 0.5, then adjust if needed. It also depends in some cases on which sampler you're using, as well as the number of steps. (See the sketch after this thread.)

    • @maxburstyn
      @maxburstyn 1 year ago

      @@scottiefox2525 Thanks for getting back to me so quickly! I’ll try that out
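
      To make the strength suggestion above concrete, here is a minimal image-to-image sketch using the Hugging Face diffusers library, where strength=0.5 re-noises the source only about halfway so the result stays close to the original equirectangular image. The model name, prompt, resolution and parameter values are illustrative assumptions, not taken from the video.

        import torch
        from PIL import Image
        from diffusers import StableDiffusionImg2ImgPipeline

        # Lower `strength` keeps more of the source; ~0.5 is the middle ground
        # suggested above. Requires diffusers, transformers, torch and a CUDA GPU.
        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        init = Image.open("equirect_source.png").convert("RGB").resize((1024, 512))
        result = pipe(
            prompt="alien coral cathedral, volumetric light",  # illustrative prompt
            image=init,
            strength=0.5,            # re-noise ~halfway: keep structure, add diffusion
            num_inference_steps=40,
            guidance_scale=7.0,
        ).images[0]
        result.save("equirect_diffused.png")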

  • @RusticRaver
    @RusticRaver 1 year ago

    Amazing, though the damn resolution limit makes it pointless. Still very interesting! Shame they do not recognise autodidacts as students!

  • @tripadvisor5240
    @tripadvisor5240 1 year ago

    How do you export the render from TouchDesigner to get a 360-degree file that I can upload to YouTube?

    • @scottiefox2525
      @scottiefox2525  1 year ago +1

      That's a completely different workflow: it involves exporting an equirectangular video with a 2:1 aspect ratio using Premiere Pro set to VR mode during export. YouTube interprets that footage as 360° and processes the upload accordingly. You can search for tutorials on that subject for more info.

  • @wedgeewoo
    @wedgeewoo 1 year ago

    is this rendered in real time or just a rendered video?

    • @scottiefox2525
      @scottiefox2525  1 year ago

      This is the actual output of using the software. TouchDesigner is a platform designed around real-time visual content.

  • @taitedubard670
    @taitedubard670 1 year ago

    You are awesome. You need 'promo sm'!!!