From sketch to 2D to 3D in real time! - Stable Diffusion Experimental

  • Published: 31 Jul 2024
  • Game ready assets, both 2D and 3D? In real time? From simple sketches?
    Well, it depends on your definition of "game ready" I guess. In this episode of Stable Diffusion Experimental, we'll set up a workflow to turn your sketches into 2D illustrations and 3D meshes for your games.
    If you like my contributions, you can buy me a coffee here: ko-fi.com/risunobushi
    In every Stable Diffusion Experimental video, we will look at new tools that are not quite production ready yet, but are still interesting and exciting enough to warrant further inspection.
    Resources needed:
    - Workflow: pastebin.com/7X6eNUJ2
    - Painter Node github: github.com/AlekPet/ComfyUI_Cu...
    - TripoSR github: github.com/flowtyone/ComfyUI-...
    - TripoSR checkpoint: huggingface.co/stabilityai/Tr...
    - (optional, windows only) Photoshop to ComfyUI github: github.com/NimaNzrii/comfyui-...
    Models:
    - Juggernaut SDXL Lightning (SDXL Lightning model used in this video): civitai.com/models/133005/jug...
    - SDXL Lightning LoRAs: huggingface.co/ByteDance/SDXL...
    Timestamps:
    00:00 - Intro
    00:31 - Workflow Overview
    02:12 - Setting up the 2D workflow
    11:05 - Testing the 2D workflow
    14:30 - Setting up the 3D workflow
    19:03 - Testing the 3D workflow
    20:23 - Testing everything in real time
    22:57 - Expanding the workflow
    24:26 - Outro
    #stablediffusion #stablediffusiontutorial #ai #generativeai #generativeart #comfyui #comfyuitutorial #sdxllightning #triposr #3d #2d #illustration #render #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni

Comments • 60

  • @LarryJamesWulfDesign · 2 months ago

    Awesome stuff!
    Thanks for the detailed walk through.

  • @Crovea · 3 months ago

    Very exciting stuff, thanks for sharing!

  • @shshsh-zy5qq · 1 month ago

    You are the best. The workflow is so unique and your explanation is so good. I love that you explain each time why a node needs to be connected. By learning that, I could improvise on the workflow, since I know which node is for what. Thank you so much!

    • @risunobushi_ai · 1 month ago

      Thank you! I try to explain a lot of stuff so no one gets left behind

  • @jaoltr · 3 months ago

    🔥Great content and video. Thanks for the detailed information!

  • @shape6093 · 3 months ago

    Great content! Love the way you explain everything.

    • @risunobushi_ai · 3 months ago

      Thank you!

    • @user-wx9ms5zc4j · 3 months ago

      @risunobushi_ai too slow my guy... Too slow...

    • @risunobushi_ai · 3 months ago

      I make a point to not jump ahead and leave people behind, while providing time stamps for people who already know how it all works so they can just jump ahead. Up till now I’ve seen a positive reception to this approach, but if enough people would rather have me skipping parts to get to the bottom of the workflows I’ll do that.

    • @user-wx9ms5zc4j · 2 months ago

      @risunobushi_ai No no, ignore my silly comment, you are doing great! I had to understand that not everyone is at a higher level 😅.

  • @JavierCamacho · 3 months ago

    Dude!! You are awesome!!! Thanks for the amount of time and effort you put into this.
    Is it possible to use SD to add more details to the 3D texture after you have a "good" 3D model? For example, rotate the model and then generate some extra texture?

    • @risunobushi_ai · 3 months ago

      Thank you for the kind words! Yeah, there’s a couple of workflows that go from mesh to texture, I’ll reply to this comment in a couple of hours with some links.

    • @risunobushi_ai · 3 months ago

      You might want to take a look at these links:
      - www.reddit.com/r/StableDiffusion/s/n603cJOsgC
      - www.reddit.com/r/comfyui/s/AHKvo5UkXD
      - www.reddit.com/r/comfyui/s/YEAPX125Db
      - www.reddit.com/r/StableDiffusion/s/iZin0p4Fv9
      - www.reddit.com/r/StableDiffusion/s/T5sfUsckAs
      - and lastly www.reddit.com/r/StableDiffusion/s/gUP5d5pgFF
      The last one and the Unity one are what I'd like to cover if this video is well received enough. I originally wanted to release this video as a full pipeline implementing retexturing, but just generating the 2D and 3D assets was already a 25-minute tutorial, so I thought it better to split it into two different videos and check if there's interest in the subject.

    • @MotuDaaduBhai · 3 months ago

      @risunobushi_ai I don't know how to thank you, but thank you anyway! :D

  • @prodmas · 3 months ago

    You don't need the Lightning LoRA with a Lightning model. It's only used with non-Lightning models.

    • @risunobushi_ai · 3 months ago

      Uh, I was sure I needed it! Well, the more you know: I guess it's a remnant of the SAI workflows with non-fine-tuned models, then. I'm pretty sure one of the first workflow tutorials I had seen used it, but they were using a base model, IIRC. I'll try without and see if there's any difference.

    • @quercus3290 · 3 months ago

      @risunobushi_ai You could also try one of the LCM or Hyper accelerators with any other checkpoint; 8-12 steps is usually fine.
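
For readers following along: the Lightning LoRA is the distillation adapter that turns a plain SDXL base into a few-step model, so it is only needed when the checkpoint is not already a merged Lightning model such as Juggernaut Lightning. A hedged sketch of that "LoRA needed" case, using diffusers rather than the video's ComfyUI workflow (repo and file names follow the ByteDance/SDXL-Lightning model card):

```python
# Minimal diffusers sketch: apply the SDXL Lightning 4-step LoRA to a plain,
# non-Lightning SDXL base. With an already-merged Lightning checkpoint, the
# LoRA-loading step would simply be skipped.
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"      # non-Lightning base model
lora = hf_hub_download("ByteDance/SDXL-Lightning",
                       "sdxl_lightning_4step_lora.safetensors")

pipe = StableDiffusionXLPipeline.from_pretrained(
    base, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.load_lora_weights(lora)        # only needed because the base isn't Lightning
pipe.fuse_lora()

# Lightning models expect "trailing" timestep spacing, very few steps, and low CFG.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

image = pipe("a fantasy sword, game asset, white background",
             num_inference_steps=4, guidance_scale=0).images[0]
image.save("sword.png")
```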

  • @voxyloids8723 · 3 months ago

    Always confused by the sketch node, because I use Sketchbook Pro and then Ctrl+V...

  • @kalyanwired · 3 months ago

    This is awesome. How do you save the model as FBX/OBJ to use in Blender?

    • @risunobushi_ai · 3 months ago

      The model is automatically saved in the default output folder inside your ComfyUI folder; unfortunately, that's not made clear on the GitHub page.

  • @bkdjart · 1 month ago

    Nice tutorial. Would you get faster inference using an LCM model? Also, the official Tripo website service has a refine feature and it looks great! Do you know if that feature is available in ComfyUI?

    • @risunobushi_ai · 1 month ago

      Off the top of my head I can't remember the speed difference between LCM and Lightning models, but I guess you could speed things up even more with a Hyper model.
      It's also been a hot minute since I looked at new TripoSR features, but I'll take a look!

  • @Luca-fb6sq · 2 months ago

    How do I export the 3D mesh generated by TripoSR and attach textures? Is this possible?

    • @risunobushi_ai · 2 months ago

      The meshes are automatically exported to the output folder inside your ComfyUI directory, and the textures are already baked in. If you don't see them when you import the mesh, you might need to edit the material and append a Color Attribute node in the shader window.

  • @remaztered · 2 months ago

    Such a great video! But I have a problem with the RemBG node: how can I install it?

    • @risunobushi_ai · 2 months ago

      You can either find the repo in the Manager or on GitHub, or just drag and drop the workflow JSON file into a ComfyUI instance and install the missing nodes from the Manager. Let me know if that works for you.

    • @remaztered · 2 months ago

      @risunobushi_ai Oh yes, of course, working like a charm, thanks!

  • @ValleStutz · 3 months ago

    Is it possible to set the output directory somehow?
    And what about the image textures in the OBJ?

    • @risunobushi_ai · 3 months ago

      The output directory is the default one, ComfyUI -> output. I realized afterwards that it's not clear in the GitHub docs; I should have specified it in the video.
      As for the image textures, I'm not sure about other programs, but in Blender they're baked into the OBJ's material. So inside the Shading window, you'd need to add a new material and then hook up a "Color Attribute" node to the Base Color input.
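
To make that Blender step concrete, a minimal bpy sketch, assuming a TripoSR mesh with baked-in vertex colors is already imported and active (node identifiers follow recent Blender releases and may differ slightly in older versions):

```python
# Give the imported TripoSR mesh a material whose Base Color reads the
# baked-in vertex colors through a "Color Attribute" node.
import bpy

obj = bpy.context.active_object                    # the imported TripoSR mesh

mat = bpy.data.materials.new(name="TripoSR_Baked")
mat.use_nodes = True
obj.data.materials.append(mat)

nodes = mat.node_tree.nodes
links = mat.node_tree.links

bsdf = nodes["Principled BSDF"]                    # created automatically by use_nodes
color_attr = nodes.new("ShaderNodeVertexColor")    # shown in the UI as "Color Attribute"
# color_attr.layer_name = "Color"                  # set explicitly if the default layer isn't picked up

links.new(color_attr.outputs["Color"], bsdf.inputs["Base Color"])
```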

    • @ValleStutz · 3 months ago

      @risunobushi_ai Thanks, saw that. But I'm wondering if it's possible to give the files my own prefix, since I like my data to be structured.

    • @risunobushi_ai · 3 months ago

      I'm not sure you can; I've tried inspecting the viewer's and the sampler node's properties and there's no field for the filename. I guess you could change the line of code that gives the file its filename, but I wouldn't risk breaking it when it's far easier to rename the files after they're generated.
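
Following that advice, a small stand-alone Python sketch for adding your own prefix after generation instead of patching the node's save code (the output path and the .obj extension are assumptions; adjust them to your install):

```python
# Rename freshly generated meshes in the ComfyUI output folder with a custom prefix.
from pathlib import Path

output_dir = Path("ComfyUI/output")      # default ComfyUI output folder (assumed)
prefix = "orc_sketch"                    # your own structured prefix

for i, mesh in enumerate(sorted(output_dir.glob("*.obj"))):
    mesh.rename(mesh.with_name(f"{prefix}_{i:03d}{mesh.suffix}"))
```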

  • @Darkfredor · 3 months ago

    Hello, for me, on Linux Mint, the node import failed... :(

    • @risunobushi_ai · 3 months ago

      I'm not well versed in Linux, but did the import fail via the ComfyUI Manager or via git pull?

  • @miayoung1343 · 3 months ago

    When I select Auto Queue and hit prompt once, my ComfyUI starts prompting constantly even if I didn't change anything. Why is that?

    • @risunobushi_ai · 3 months ago

      That’s because auto queue does precisely that: it keeps generating automatically, as soon as the previous generation ends. We keep the seed fixed in the KSampler to “stop” new generations from continuing if nothing changes, but the generations are queued regardless of any change happening. If no changes have happened, the generation stops as soon as it starts, and a new generation starts (and will stop as soon as it starts if nothing changed).
      This is the logic scheme:
      Auto queue > generation starts > see that ksampler has “fixed” seed attribute > has something changed?
      If yes: finish generating > generation starts > see that ksampler has “fixed” seed attribute > has something changed?
      If no: stop the generation > generation starts > etc.
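
The same behaviour as a rough Python sketch; this is an illustration of the logic only, not ComfyUI's actual implementation:

```python
# With Auto Queue on, runs are queued back to back; a fixed KSampler seed means
# an unchanged graph produces nothing new, so each queued run ends as soon as it starts.
import time

def auto_queue_loop(get_graph_state, generate):
    last_state = None
    while True:                       # Auto Queue: a new run is queued immediately
        state = get_graph_state()     # prompt, sketch strokes, fixed seed, ...
        if state != last_state:       # something changed -> actually generate
            generate(state)
            last_state = state
        else:                         # nothing changed -> the run ends right away
            time.sleep(0.1)           # small pause so this toy loop doesn't spin
```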

    • @miayoung1343 · 3 months ago

      @risunobushi_ai I see~ I changed the seed to "fixed", and it worked just like yours. Thanks!

    • @miayoung1343 · 3 months ago

      @risunobushi_ai But I couldn't install the dependencies. It says: "The 'pip' item cannot be recognized as the name of a cmdlet, function, script file, or executable program."

    • @risunobushi_ai · 3 months ago

      Sounds like you don't have pip installed! You can look up how to install pip; it's pretty easy. Nowadays it should come with Python, but if you installed an older version of Python it might not have been included.

    • @miayoung1343 · 3 months ago

      @risunobushi_ai You are right! And now I have successfully made it. Thanks~~

  • @user-is4pu6tw6r · 3 months ago

    Is this also possible in A1111?

    • @risunobushi_ai · 3 months ago

      I haven't followed the latest developments in Auto1111 closely since I've been focusing on ComfyUI, but I'll check if it's possible to do it there.

  • @mechanicalmonk2020 · 3 months ago

    Have another neural net generate the starting sketch from the 2D image

    • @risunobushi_ai · 3 months ago

      Error: stuck in a loop. Please send help.

  • @manolomaru · 3 months ago

    ✨👌😎😵😎👍✨

  • @justanothernobody7142 · 3 months ago

    It's an interesting workflow, but unfortunately the 3D side of things just isn't good enough yet. These assets would need to be completely reworked, at which point you might as well create them from scratch anyway. I think at the moment you're far better off just modeling the asset and using SD to help with textures.

    • @risunobushi_ai · 3 months ago

      That's a future part 2 of this video. Originally I wanted to go through SD texturing as well, but once I saw that the video would be 25 minutes just for the 2D and 3D generations, I thought it would be better to record a texturing tutorial separately.

    • @justanothernobody7142 · 3 months ago

      @risunobushi_ai Ok, I'll keep an eye out for that. It will be interesting to see another workflow. The problem with texturing is that so far most workflows I've seen involve generating diffuse textures, which isn't good because you're getting all the light and shadow information baked into the texture. What you really want is albedo textures. My workflow for that is long-winded, but it's the only way I've found to try and avoid the problem. I usually generate a basic albedo texture in Blender and then use that with img2img. I then also use a combination of 3D renders for adding in more detail and for generating ControlNet guides.
      For texturing, what we really need is a model trained only on albedo textures, so it can generate images without shadow and lighting, but nothing like that has been trained as far as I know. There are a few LoRAs for certain types of textures, but they don't work that well.
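
To make the img2img step of that approach concrete, a hedged diffusers sketch (model choice, strength value, and file names are assumptions, not the commenter's exact setup):

```python
# Rough sketch of the albedo-refinement idea: start from a flat-lit albedo render
# made in Blender and let SDXL img2img add detail at moderate strength, so as
# little lighting and shadow as possible is reintroduced.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

albedo = Image.open("basic_albedo_from_blender.png").convert("RGB")

detailed = pipe(
    prompt="stone wall albedo texture, flat even lighting, no shadows, seamless",
    image=albedo,
    strength=0.45,           # low enough to preserve the flat-lit base
    guidance_scale=6.0,
).images[0]
detailed.save("albedo_detailed.png")
```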

    • @PSA04 · 3 months ago

      @risunobushi_ai Can't wait! This is a really powerful workflow and puts the control back into the human's hands. 100% for this type of AI creation.

  • @bitreign · 3 months ago

    Core of the whole video: 19:53 (as expected)

    • @risunobushi_ai · 3 months ago

      I build my tutorials so that, no matter the viewer's level of knowledge, they can follow along and understand why every node is used. I know it takes a long time to go through the whole workflow-building process, but I hope that by giving a disclaimer at the start of the video that one can jump ahead to see how it all works, everyone can jump to the starting point that's best for them.

  • @sdafsdf9628 · 1 month ago

    It's nice that the technology works, but the result in 3D is so bad... The 2D can still keep up and is nice to look at.

  • @user-zq4pb1lm5k · 3 months ago

    Way too complex for me.

    • @risunobushi_ai · 3 months ago

      You can try downloading the JSON file for the workflow and scribble away; you just need to install the missing nodes! No need to follow me through all the steps: I spend time explaining how to build it for people who want to replicate it and understand what every node does, but it's not required.

  • @user-pl4pz2xn2c · 3 months ago

    The final result looks like poop.

    • @risunobushi_ai · 3 months ago

      It does a bit, but it also looks like a promising first step that can be polished through a dedicated pipeline.

  • @dispholidus · 2 months ago

    I don't think I will ever understand the appeal of the spaghetti-graph stuff over clear, reusable code. ComfyUI really is pure crap.
    Thanks anyway, the process itself is quite interesting.

    • @risunobushi_ai · 2 months ago

      The appeal lies in it being nothing more than an interface for node-based coding. I don't know how to code well, but node coding is a lot easier to understand, at least in my experience.
      Also, in my latest video I go over a node-based implementation of frequency separation, which has both a real-life use case and is pretty invaluable for creating an all-in-one, one-click solution that would not be possible with other UIs (well, except Swarm, which is basically a WebUI frontend for ComfyUI).

    • @kenhew4641 · 1 month ago

      I'm the exact opposite: the moment I see nodes and lines, I immediately understand everything; it's just so clear and instinctive. Not to mention that a node-based system is, by its very design, open and potentially unlimited, which makes it exponentially more powerful than any other UI system, so much so that I find myself increasingly unable to use any software that isn't node-based.

    • @dispholidus · 1 month ago

      @kenhew4641 I'd agree it's a nice UI as long as it remains simple and you don't need too much automation or dynamic stuff.
      In a way, it is similar to spreadsheet systems. Yes, it is powerful, but as complexity increases, it turns into pure hell compared to a real database paired with dedicated code.
      The grudge I hold against ComfyUI is the same one I have against Excel in corporate environments. It becomes a standard, cannibalizing and eclipsing other, more flexible and powerful solutions.
      Just the idea of a 2D representation, by itself, can be a serious limitation if you have to abstract relatively complex data and processes.