From sketch to 2D to 3D in real time! - Stable Diffusion Experimental

  • Published: 29 Nov 2024

Comments • 66

  • @fredericchauveau9889
    @fredericchauveau9889 5 hours ago

    Very good video, and above all, finally someone who explains very clearly what they're doing, without skipping steps. Well done!

  • @prodmas
    @prodmas 7 months ago +4

    You don't need the Lightning LoRA with a Lightning model. It's only used for non-Lightning models.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +2

      Uh, I was sure I needed it! Well, the more you know - I guess it’s a remnant of the SAI workflows with non-fine-tuned models, then. I’m pretty sure one of the first workflow tutorials I saw used it, but they were using a base model, IIRC. I’ll try without and see if there are any differences.

  • @JavierCamacho
    @JavierCamacho 7 months ago +4

    Dude!! You are awesome!!! Thanks for the amount of time and effort you put into this.
    Is it possible to use SD to add more details to the 3D texture after you have a "good" 3D model? For example, rotate the model and then generate some extra texture?

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      Thank you for the kind words! Yeah, there’s a couple of workflows that go from mesh to texture, I’ll reply to this comment in a couple of hours with some links.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      You might want to take a look at these links:
      - www.reddit.com/r/StableDiffusion/s/n603cJOsgC
      - www.reddit.com/r/comfyui/s/AHKvo5UkXD
      - www.reddit.com/r/comfyui/s/YEAPX125Db
      - www.reddit.com/r/StableDiffusion/s/iZin0p4Fv9
      - www.reddit.com/r/StableDiffusion/s/T5sfUsckAs
      - and lastly www.reddit.com/r/StableDiffusion/s/gUP5d5pgFF
      The last one and the Unity one are what I’d like to cover if this video is well enough received. I originally wanted to release this video as a full pipeline implementing retexturing, but just generating the 2D and 3D assets was already a 25-minute tutorial, so I thought it better to split it into two different videos and check if there’s interest in the subject.

    • @MotuDaaduBhai
      @MotuDaaduBhai 7 months ago

      @@risunobushi_ai I don't know how to thank you, but thank you anyway! :D

  • @shshsh-zy5qq
    @shshsh-zy5qq 5 months ago +2

    You are the best. The workflow is so unique and your explanation is so good. I love how you explain each time why a node needs to be connected. By learning that, I can improvise on the workflow, since I know which node is for what. Thank you so much!

    • @risunobushi_ai
      @risunobushi_ai  5 months ago

      Thank you! I try to explain a lot of stuff so no one gets left behind.

  • @poly_base3d
    @poly_base3d 9 days ago

    Hello,
    I did everything exactly like the tutorial, but when I open ComfyUI it says:
    Cannot import C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Flowty-TripoSR module for custom nodes: No module named 'trimesh'
    and the import fails for TripoSR.
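
    A likely fix sketch, assuming the Windows portable build: the TripoSR nodes need their Python dependencies installed into ComfyUI's embedded interpreter, not the system one. Run something like this with python_embeded\python.exe from the ComfyUI_windows_portable folder (that path is the standard portable layout, an assumption here):

      # Hedged sketch: installs the missing module(s) into whichever Python
      # interpreter runs this script (ComfyUI's embedded one, if launched with it).
      import subprocess, sys

      for pkg in ("trimesh",):  # add any other modules the import errors name
          subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])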

  • @ЕкатеринаЧудинова-э1ш

    Hi! Thank you for a great tutorial! Unfortunately, my TripoSR Viewer gets stuck instantly on "Loading scene" mode. Do you know how to fix that, please?

  • @jaoltr
    @jaoltr 7 months ago +2

    🔥Great content and video. Thanks for the detailed information!

  • @Crovea
    @Crovea 7 months ago +1

    Very exciting stuff, thanks for sharing!

  • @Gounesh
    @Gounesh 2 months ago

    Is there a way to retexture with higher res images?

  • @bkdjart
    @bkdjart 5 months ago

    Nice tutorial. Would you get faster inference using an LCM model? Also, the official Tripo website service has a refine feature and it looks great! Do you know if that feature is available in ComfyUI?

    • @risunobushi_ai
      @risunobushi_ai  5 months ago +1

      Off the top of my head I can't remember the speed difference between LCM and Lightning models, but I guess you could speed things up even more with a Hyper model.
      It's also been a hot minute since I looked at new TripoSR features, but I'll take a look!

  • @LarryJamesWulfDesign
    @LarryJamesWulfDesign 6 months ago +1

    Awesome stuff!
    Thanks for the detailed walkthrough.

  • @shape6093
    @shape6093 7 months ago +2

    Great content! Love the way you explain everything.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      Thank you!

    • @KartikayMathur-y8e
      @KartikayMathur-y8e 7 months ago

      @@risunobushi_ai too slow my guy... Too slow...

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +2

      I make a point of not jumping ahead and leaving people behind, while providing timestamps for people who already know how it all works so they can just skip ahead. Up till now I’ve seen a positive reception to this approach, but if enough people would rather have me skip parts to get to the bottom of the workflows, I’ll do that.

    • @KartikayMathur-y8e
      @KartikayMathur-y8e 7 months ago

      @@risunobushi_ai No no, ignore my silly comment, you are doing great! I had to remember that not everyone is at a higher level 😅.

  • @placebo_yue
    @placebo_yue 2 months ago

    Sadly, TripoSR is broken now. Do you have any solution for that, bro? Nobody has been able to give me answers or help so far :(

  • @remaztered
    @remaztered 6 months ago

    Such a great video! But I have a problem with the RemBG node - how can I install it?

    • @risunobushi_ai
      @risunobushi_ai  6 months ago

      You can either find the repo in the Manager, on GitHub, or just drag and drop the workflow JSON file into a ComfyUI instance and install the missing nodes from the Manager. Let me know if that works for you.
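
      For the GitHub route, a minimal sketch of the manual install; the repo URL below is a placeholder, not the actual one, so check the link in the video description:

        # Hedged sketch: clone a custom-node repo into custom_nodes, then
        # restart ComfyUI so it picks the new nodes up. The URL is hypothetical.
        import subprocess

        subprocess.check_call([
            "git", "clone",
            "https://github.com/<author>/<rembg-node-repo>.git",  # placeholder
            "ComfyUI/custom_nodes/rembg-node",
        ])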

    • @remaztered
      @remaztered 6 months ago

      ​@@risunobushi_ai Oh yes, of course, working like a charm, thanks!

  • @kalyanwired
    @kalyanwired 7 months ago

    This is awesome. How do you save the model as FBX/OBJ to use in Blender?

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      The model is automatically saved in the default output folder inside your ComfyUI folder; unfortunately, that's not made clear on the GitHub page.

  • @---Nikita--
    @---Nikita-- 2 months ago

    more tuts for generating 3d models pls

  • @mechanicalmonk2020
    @mechanicalmonk2020 7 months ago

    Have another neural net generate the starting sketch from the 2D image

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      Error: stuck in a loop. Please send help.

  • @voxyloids8723
    @voxyloids8723 7 months ago

    Always confused by the sketch node, because I use SketchBook Pro and then Ctrl+V...

  • @Darkfredor
    @Darkfredor 7 months ago

    Hello, for me, on Linux Mint, the node imports failed... :(

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      I’m not well versed in Linux, but did the imports fail via the ComfyUI Manager or via git pull?

  • @ValleStutz
    @ValleStutz 7 months ago

    Is it possible to set the output directory somehow?
    And what about the image textures in the OBJ?

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      The output directory is the default one, comfyui -> output. I realized afterwards that it's not clear in the GitHub docs; I should have specified it in the video.
      As for the image textures, I'm not sure about other programs, but in Blender they're baked into the OBJ's material. So inside the Shading window, you'd need to add a new material and then hook up a "Color Attribute" node to the Base Color input.
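
      If you'd rather script that Blender step, here is a minimal bpy sketch; the node identifiers match Blender 3.x, where the "Color Attribute" node's Python type is still ShaderNodeVertexColor, so treat the names as assumptions for other versions:

        # Hedged bpy sketch: build a material that feeds the mesh's vertex
        # colors into Base Color, then assign it to the imported object.
        import bpy

        obj = bpy.context.active_object            # the imported TripoSR mesh
        mat = bpy.data.materials.new("TripoSR_VertexColor")
        mat.use_nodes = True
        nodes, links = mat.node_tree.nodes, mat.node_tree.links

        bsdf = nodes["Principled BSDF"]            # created by use_nodes
        attr = nodes.new("ShaderNodeVertexColor")  # "Color Attribute" in the UI
        links.new(attr.outputs["Color"], bsdf.inputs["Base Color"])

        obj.data.materials.append(mat)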

    • @ValleStutz
      @ValleStutz 7 months ago

      @@risunobushi_ai Thanks, saw that. But I'm wondering if it's possible to give it my own filename prefix, since I like my data to be structured.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      I'm not sure you can; I've tried inspecting the viewer and the sampler node's properties and there's no field for the filename. I guess you could change the line of code that gives the file its filename, but I wouldn't risk breaking it when it's far easier to rename the files after they're generated.
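
      If anyone wants structured names anyway, a hedged after-the-fact sketch; the output path and the .obj extension are assumptions based on the default setup:

        # Hedged sketch: prefix freshly generated meshes instead of patching
        # the node's source code.
        from pathlib import Path

        output_dir = Path("ComfyUI/output")  # assumed default location
        prefix = "sketch2mesh_"              # whatever structure you like

        for mesh in sorted(output_dir.glob("*.obj")):
            if not mesh.name.startswith(prefix):
                mesh.rename(mesh.with_name(prefix + mesh.name))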

  • @miayoung1343
    @miayoung1343 7 months ago

    When I select Auto Queue and hit Queue Prompt once, my ComfyUI starts prompting constantly even if I didn't change anything. Why is that?

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      That’s because auto queue does precisely that: it keeps generating automatically, as soon as the previous generation ends. We keep the seed fixed in the KSampler to “stop” new generations from continuing if nothing changes, but the generations are queued regardless of whether anything changed. If no changes have happened, the generation stops as soon as it starts, and a new one is queued (which will in turn stop as soon as it starts if nothing changed).
      This is the logic scheme:
      Auto queue > generation starts > the KSampler has the “fixed” seed attribute > has something changed?
      If yes: finish generating > next generation starts > the KSampler has the “fixed” seed attribute > has something changed?
      If no: stop the generation > next generation starts > etc.
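
      Modeled in illustrative Python (none of these names are ComfyUI internals, it's just the scheme above as code):

        # Hedged model of the auto-queue behaviour described above; every name
        # here is an illustrative stand-in, not a real ComfyUI internal.
        def hash_inputs(graph: dict) -> int:
            # With a fixed seed, this only changes when the user edits the graph.
            return hash(frozenset(graph.items()))

        def auto_queue_loop(graph: dict, queued_runs: int = 5) -> None:
            last_state = None
            for _ in range(queued_runs):    # stands in for "auto queue is on"
                state = hash_inputs(graph)
                if state != last_state:     # something changed: generate fully
                    print("generating...")
                    last_state = state
                else:                       # nothing changed: run ends instantly
                    print("skipped (no changes)")

        auto_queue_loop({"seed": 42, "prompt": "a shiny robot"})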

    • @miayoung1343
      @miayoung1343 7 months ago

      @@risunobushi_ai I see~ I changed the seed to "fixed" and it worked just like yours. THX!

    • @miayoung1343
      @miayoung1343 7 months ago

      @@risunobushi_ai But I couldn't install the dependencies. It says "The 'pip' item cannot be recognized as the name of a cmdlet, function, script file, or executable program."

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      Sounds like you don’t have pip installed! You can look up how to install pip, it’s pretty easy. Nowadays it should come with Python, but if you installed an older version of Python it might not have been included.
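
      A hedged way to check, and to bootstrap pip when it's genuinely missing (ensurepip ships with standard CPython, so no download is needed):

        # Hedged sketch: verify pip is importable; bootstrap it if missing.
        import importlib.util, subprocess, sys

        if importlib.util.find_spec("pip") is None:
            # ensurepip is part of the CPython standard library
            subprocess.check_call([sys.executable, "-m", "ensurepip", "--upgrade"])
        print("pip available for:", sys.executable)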

    • @miayoung1343
      @miayoung1343 7 months ago

      @@risunobushi_ai You are right! And now I have successfully made it work. Thanks~~

  • @SilvioEngelhardt-i7p
    @SilvioEngelhardt-i7p 7 months ago

    Is this also possible in A1111?

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      I haven’t followed the latest developments in Auto1111 closely since I’ve been focusing on ComfyUI, but I’ll check if it’s possible.

  • @samu7015
    @samu7015 1 month ago +1

    TripoSR is broken atm

  • @bitreign
    @bitreign 7 months ago +2

    Core of the whole video: 19:53 (as expected)

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      I build my tutorials so that, no matter the viewer’s knowledge, they can follow along and understand why every node is used. I know it takes a long time to go through the whole workflow-building process, but by giving a disclaimer at the start of the video that one can jump ahead to see how it all works, I hope everyone can jump to the starting point that’s best for them.

  • @justanothernobody7142
    @justanothernobody7142 7 months ago

    It's an interesting workflow, but unfortunately the 3D side of things just isn't good enough yet. These assets would need to be completely reworked, at which point you might as well be creating them from scratch anyway. I think at the moment you're far better off just modeling the asset and using SD to help with textures.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      That's a future part 2 of this video. Originally I wanted to go through SD texturing as well, but once I saw that the video would be 25 minutes just for the 2D and 3D generations, I thought it would be better to record a texturing tutorial separately.

    • @justanothernobody7142
      @justanothernobody7142 7 months ago

      @@risunobushi_ai Ok, I'll keep an eye out for that. It will be interesting to see another workflow. The problem with texturing is that so far most workflows I've seen involve generating diffuse textures, which isn't good because you're getting all the light and shadow information baked into the texture. What you really want is albedo textures. My workflow for that is long-winded, but it's the only way I've found to try and avoid the problem. I usually generate a basic albedo texture in Blender and then use that with img2img. I then also use a combination of 3D renders for adding in more detail and for generating ControlNet guides.
      For texturing, what we really need is a model trained only on albedo textures, so it can generate images without shadow and lighting, but nothing like that has been trained as far as I know. There are a few LoRAs for certain types of textures, but they don't work that well.

    • @PSA04
      @PSA04 7 months ago

      @@risunobushi_ai Can't wait! This is a really powerful workflow and puts the control back into human hands. 100% for this type of AI creation.

  • @leandrogoethals6599
    @leandrogoethals6599 1 month ago

    The requirements for Tripo are an absolute wall for me; I cannot continue, it's always the damn torch-related things:
    Building wheels for collected packages: antlr4-python3-runtime, torchmcubes
    Building wheel for antlr4-python3-runtime (setup.py) ... done
    Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.9.3-py3-none-any.whl size=144577 sha256=94db4768f9c65c129ffcf3ca5ace44270622aa6fe8001270ac5a05d8106aea22
    Stored in directory: c:\users\l\appdata\local\pip\cache\wheels\23\cf\80\f3efa822e6ab23277902ee9165fe772eeb1dfb8014f359020a
    Building wheel for torchmcubes (pyproject.toml) ... error
    error: subprocess-exited-with-error
    × Building wheel for torchmcubes (pyproject.toml) did not run successfully.
    │ exit code: 1
    ╰─> [45 lines of output]
    2024-10-11 19:56:30,599 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -E capabilities
    2024-10-11 19:56:30,616 - scikit_build_core - INFO - CMake version: 3.30.4
    *** scikit-build-core 0.10.7 using CMake 3.30.4 (wheel)
    2024-10-11 19:56:30,632 - scikit_build_core - INFO - Build directory: build
    *** Configuring CMake...
    2024-10-11 19:56:30,679 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
    2024-10-11 19:56:30,687 - scikit_build_core - INFO - RUN: C:\Users\L\AppData\Local\Temp\pip-build-env-_noi4nnw\normal\Lib\site-packages\cmake\data\bin\cmake -S. -Bbuild -Cbuild\CMakeInit.txt -DCMAKE_INSTALL_PREFIX=C:\Users\L\AppData\Local\Temp\tmpx1ax6fn6\wheel\platlib
    loading initial cache file build\CMakeInit.txt
    -- Building for: Visual Studio 17 2022
    -- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.
    -- The CXX compiler identification is MSVC 19.39.33523.0
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.39.33519/bin/Hostx64/x64/cl.exe - skipped
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Looking for a CUDA compiler
    -- Looking for a CUDA compiler - NOTFOUND
    -- NO CUDA INSTALLATION FOUND, INSTALLING CPU VERSION ONLY!
    -- Found Python: C:\Users\L\Documents\Analconda\envs\tripo2\python.exe (found version "3.9.20") found components: Interpreter Development Development.Module Development.Embed
    -- Performing Test HAS_MSVC_GL_LTCG
    -- Performing Test HAS_MSVC_GL_LTCG - Success
    -- Found pybind11: C:/Users/L/AppData/Local/Temp/pip-build-env-_noi4nnw/overlay/Lib/site-packages/pybind11/include (found version "2.13.6")
    -- Found OpenMP_CXX: -openmp (found version "2.0")
    -- Found OpenMP: TRUE (found version "2.0")
    CMake Error at CMakeLists.txt:51 (find_package):
    By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has
    asked CMake to find a package configuration file provided by "Torch", but
    CMake did not find one.
    Could not find a package configuration file provided by "Torch" with any of
    the following names:
    TorchConfig.cmake
    torch-config.cmake
    Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set
    "Torch_DIR" to a directory containing one of the above files. If "Torch"
    provides a separate development package or SDK, be sure it has been
    installed.
    -- Configuring incomplete, errors occurred!
    *** CMake configuration failed
    [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for torchmcubes
    Successfully built antlr4-python3-runtime
    Failed to build torchmcubes
    ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (torchmcubes
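
    A hedged reading of that log: pip builds torchmcubes in an isolated environment where torch isn't installed, so CMake can't find TorchConfig.cmake (and no CUDA compiler is found either, so only the CPU path would build). One workaround sketch, assuming torch imports fine in your active environment and that torchmcubes installs under that name:

      # Hedged workaround sketch: let the torchmcubes build see the already
      # installed torch by disabling pip's build isolation.
      import subprocess, sys

      subprocess.check_call([sys.executable, "-m", "pip", "install", "torch"])
      subprocess.check_call([sys.executable, "-m", "pip", "install",
                             "--no-build-isolation", "torchmcubes"])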

  • @manolomaru
    @manolomaru 7 months ago +1

    ✨👌😎😵😎👍✨

  • @sdafsdf9628
    @sdafsdf9628 5 months ago

    It's nice that the technology works, but the result in 3D is so bad... 2D can still keep up and is nice to look at.

  • @ManwithNoName-t1o
    @ManwithNoName-t1o 7 months ago +1

    the final result looks like poop.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago

      It does a bit, but it also looks like a promising first step that can be polished through a dedicated pipeline.

  • @PietKargaard
    @PietKargaard 7 months ago

    Way too complex for me.

    • @risunobushi_ai
      @risunobushi_ai  7 months ago +1

      You can try downloading the JSON file for the workflow and scribble away; you just need to install the missing nodes! No need to follow me through all the steps. I spend time explaining how to build it for people who want to replicate it and understand what every node does, but it’s not required.

  • @dispholidus
    @dispholidus 6 months ago

    I don't think I will ever understand the appeal of the spaghetti-graph stuff over clear, reusable code. ComfyUI really is pure crap.
    Thanks anyway, the process itself is quite interesting.

    • @risunobushi_ai
      @risunobushi_ai  6 months ago

      The appeal lies in it being nothing more than an interface for node-based coding. I don’t know how to code well, but node coding is a lot easier to understand, at least in my experience.
      Also, in my latest video I go over an implementation of frequency separation based on nodes, and that has both a real-life use case and is pretty invaluable for creating an all-in-one, 1-click solution that would not be possible with other UIs (well, except Swarm, which is basically a WebUI frontend for ComfyUI).

    • @kenhew4641
      @kenhew4641 5 months ago

      I'm the exact opposite: the moment I see nodes and lines, I immediately understand everything; it's just so clear and instinctive. Not to mention a node-based system is, by its very design, an open and potentially unlimited system, which makes it exponentially more powerful than any other UI system, so much so that I find myself increasingly unable to use any software if it's not node-based.

    • @dispholidus
      @dispholidus 5 months ago

      @@kenhew4641 I'd agree it's a nice UI as long as it remains simple and you don't need too much automation or dynamic behavior.
      In a way, it is similar to spreadsheet systems. Yes, it is powerful, but as complexity increases, it turns into pure hell compared to a real database paired with dedicated code.
      The grudge I hold against ComfyUI is the same one I have against Excel in corporate environments. It becomes a standard, cannibalizing and eclipsing other, more flexible and powerful solutions.
      Just the idea of a 2D representation can, by itself, be a serious limitation if you have to abstract relatively complex data and processes.