AI Projection Mapping Tutorial

  • Published: Jan 3, 2025

Comments • 33

  • @TheClumsyAcrobat
    @TheClumsyAcrobat 8 months ago +1

    This is an awesome concept and good for quick projects
    Please continue sharing projection mapping tutorials!

  • @evadermusic
    @evadermusic 6 months ago

    This is an amazing tutorial man, thank you so much.

  • @joshuabrost9431
    @joshuabrost9431 10 months ago

    This is exactly what I was looking for! Thank you so much!!

  • @КсенияМартынова-д3м

    Thank you for this super video! Wanna try to do something like this.

  • @StackTrace_404
    @StackTrace_404 1 year ago

    super underrated channel!!

    • @reflekkt_net
      @reflekkt_net  1 year ago

      Thank you, I appreciate your words! It's the little things like this that push my motivation to work on more videos 🙂

  • @pizzakarton468
    @pizzakarton468 10 months ago +1

    hey, i am a long-term videomapper and it's cool to see how you implement AI into mapping in a very hands-on approach. in my opinion the 'creative fusion' of PromeAI is the key, because it respects the given proportions of the image, which you need for using it in mapping. so far I haven't figured out a way to get Stable Diffusion (or similar) to work in similar ways. have you?

    • @reflekkt_net
      @reflekkt_net  10 months ago +3

      Thank you! I tried some workflows with Stable Diffusion and ControlNets, but couldn't get high-quality results like with PromeAI yet.
      Right now I'm doing a lot of research, though, for a video-to-video workflow in SD and am getting closer to what I want. Might do a tutorial in the future if I figure it out properly! 👍

  • @freivonstil
    @freivonstil 1 year ago +1

    clean and simple - thanks for sharing!

  • @Nono-cd7wg
    @Nono-cd7wg 11 months ago

    wooow! Really great tutorial

  • @davidalanmedia
    @davidalanmedia 1 month ago

    That is so cool!

  • @arnonymous7211
    @arnonymous7211 11 months ago +1

    This is awesome! Many thanks for making this video. Really appreciated and provides me with tons of creative input. Kudos!

  • @hablalabiblia
    @hablalabiblia 1 year ago

    Nicely done. Clean.

  • @markus_knoedel
    @markus_knoedel 1 year ago

    so relaxed and so cool. thanks.

  • @lenas6192
    @lenas6192 1 month ago

    Hey! Quick question: Do I need two monitors for this? I somehow can't output the window and work in the editor at the same time :(

    • @reflekkt_net
      @reflekkt_net  1 month ago

      @lenas6192 No need for 2 monitors. Maybe you clicked open as perform window instead of open as second window? Perform mode will only have the output running to save resources, but if you just open as second window you can keep working in the editor (see the sketch below).
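
      A minimal sketch of these two output options in TouchDesigner's Python, run from the textport or an Execute DAT. The Window COMP path and name ('/project1/window1') are hypothetical placeholders, assuming the default Window COMP parameter names:

          # 'window1' is a hypothetical Window COMP that points at the final output TOP.
          w = op('/project1/window1')

          # Open the output as a separate (second) window:
          # the network editor stays usable while the projector shows the output.
          w.par.winopen.pulse()

          # Perform mode instead dedicates TouchDesigner to the output to save resources,
          # so only switch to it once editing is done:
          # ui.performMode = True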

  • @victorcerda4862
    @victorcerda4862 11 months ago

    please more videos!!! thank you very much!

    • @reflekkt_net
      @reflekkt_net  11 months ago

      Thank you! Next tutorial is already in progress 🙂

  • @ithamardimarco9082
    @ithamardimarco9082 1 year ago

    Great tutorial! Thanks!!!

  • @osamajamil4804
    @osamajamil4804 10 months ago

    that's great. How can we connect this to a depth camera so the projection changes with human motion?

    • @reflekkt_net
      @reflekkt_net  10 months ago +1

      Thank you! You could for example use data from a depth camera to drive the switch, so the projected image changes depending on the distance of people. Or composite the projection with a Noise TOP that is connected to depth data driving the noise's transform parameter, or something like that. You can use external data on almost every operator by referencing the value of your incoming data to a parameter in the network, so there are almost no limits there :-) (see the sketch below)
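
      A minimal sketch of that parameter-referencing idea in TouchDesigner's Python, written as CHOP Execute DAT callbacks. The operator names ('depth_in', 'switch1', 'noise1'), the channel name 'dist', and the 0-5 m range are assumptions for illustration, not taken from the video:

          # Callbacks for a CHOP Execute DAT watching a depth CHOP named 'depth_in'.
          def onValueChange(channel, sampleIndex, val, prev):
              # Map the incoming distance (assumed 0..5 m) onto the Switch TOP inputs,
              # so a different generated image is projected per distance band.
              n = len(op('switch1').inputs)
              op('switch1').par.index = int(min(max(val / 5.0, 0.0), 0.999) * n)

              # Let the same depth value drive the Noise TOP transform,
              # so the composited texture reacts to people moving closer or further away.
              op('noise1').par.tz = val
              return

      The same value can also be referenced without a script, e.g. by putting an expression like op('depth_in')['dist'] directly into a parameter field.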

  • @usahome
    @usahome 3 months ago

    Wow! Subscribed. 👍

  • @neokortexproductions3311
    @neokortexproductions3311 7 months ago

    Thank you!

  • @נחמןגבאי-ו2י
    @נחמןגבאי-ו2י 1 year ago

    Amazing and accurate thank you

  • @brandonjoffe
    @brandonjoffe 7 months ago

    What projector are you using?

    • @reflekkt_net
      @reflekkt_net  7 months ago

      It's a NEC ME372W with an NP47LP lamp. I got it used from eBay for ~$300. It is not Full HD, but for hobby stuff or smaller parties it is really good, as it has a brightness of 3700 lumens.

  • @julianhaidbaner1634
    @julianhaidbaner1634 1 year ago

    Thank you so much for sharing

  • @helloteko
    @helloteko 1 year ago

    Thanks!

  • @Septin1983
    @Septin1983 1 year ago

    I bet you could do this with less effort in ComfyUI. My approach would be, for example, to load the Pikachu image into ComfyUI, where ControlNet creates the mask (which you can also have saved automatically), and based on that mask ComfyUI generates the images in the same pass, or you can even have it turn them straight into a video. That way you only have to load the video in TouchDesigner instead of the individual images.

    • @reflekkt_net
      @reflekkt_net  1 year ago +1

      That's an option I definitely want to look into more, so that external tools like PromeAI might not be needed anymore.
      It would be nice to be able to create everything entirely in TD.

    • @Septin1983
      @Septin1983 1 year ago

      External tools aren't a bad thing in some cases. For example, you can connect ComfyUI to Krita or Photoshop, and when you paint an image, ComfyUI generates an image from your painting in real time. Or you connect a webcam to ComfyUI, where the lighter and darker areas of the webcam image then influence the images in ComfyUI, e.g. poses of people or something like that. @reflekkt_net

  • @prinnycupcakes4992
    @prinnycupcakes4992 11 months ago

    awesome \m/

  • @SuminHan-f4k
    @SuminHan-f4k 11 months ago

    👍👍👍