Perfect upscales with SUPIR v2 + full comfyUI workflow

  • Published: 2 Dec 2024

Comments • 130

  • @stephantual
    @stephantual  8 months ago +11

    Update: 3/29/24 and 4/17/24 (yes, it moves fast) - it's been updated again and again. Because VRAM is still a concern, I've made it available as a one-click app at tinyurl.com/supirv2, and updated the downloadable workflow to reflect the addition of better Lightning support as well as LoRAs. Cheers! 👽 Also made another video at ruclips.net/video/EMAz8KktB5U/видео.html

    • @Thecroods923
      @Thecroods923 8 months ago +1

      Will it work with 8GB VRAM and 16GB RAM? Can you make a workflow for those settings? And what's the max image size I can use with that configuration?

  • @97BuckeyeGuy
    @97BuckeyeGuy 8 months ago +4

    5:00 I think you were the person commenting on my GitHub issue regarding resolutions requiring a division by 64. If you check the dev's replies in that post, he actually reduced it to 32. And then after that, he made additional code changes to make it not an issue at all. The last several images I upscaled were not divisible by 32 and they turned out fine. 👍🏼
    This version of Comfy SUPIR is SO much better than the first. The DEV really did some miracles with this version.

    • @stephantual
      @stephantual  8 months ago

      Wasn't me! 👽I'm @stephantual on github. And yes, Kijai fixed it while I was rendering the video, didn't have the heart to re-record, especially given it will likely change again (there's some cool stuff people have dug up from the original repo). Cheers! What's your repo?

  • @user-hx1wz1lv4r
    @user-hx1wz1lv4r 8 months ago +1

    TY bro, you covered everything and I learned a lot watching you walk through building the workflow

  • @stepahinigor
    @stepahinigor 8 months ago

    Hey, thanks for the update! That's amazing! I've only been actively using SD (ComfyUI) for a month, and honestly SUPIR is not only the best upscaler but also the easiest (the v1 workflow) - the fastest to get working correctly as shown in the tutorial, while with the others there are always problems.

    • @stephantual
      @stephantual  8 months ago

      Yeah the CN in Supir is absolutely sick. 👽 I still clean up videos with a quick low denoise Animate Diff LCM pass, but it's pretty much replaced all my upscalers by now!

  • @AkshayAradhya
    @AkshayAradhya 3 months ago +1

    6:31
    Why are you using an Encoder? You already have the denoised_latents from the Denoiser.
    It is redundant to encode the denoised image again.

  • @runebinder
    @runebinder 8 months ago

    Tried this earlier. It took 3 hours on my 2080 Ti on an image where the subject's face takes up a large portion of the frame, and I hadn't even been able to get it past 1536 x 1024 before, as face detailers weren't doing a good job past that resolution. Used SUPIR to get it to 4608 x 3072 with hardly any loss of detail; the face, and in particular the eyes, look amazing. Didn't change any of the settings, so I'll have to try and tweak to see if I can get the time down, but I'm very impressed. Think when Nvidia release the 50 series it may be upgrade time...

  • @giusparsifal
    @giusparsifal 6 months ago

    Hello, this is the first workflow that really works (at least for me), thank you! I already wrote to you that I was looking for an img2img workflow to get realistic skin rather than just upscaling - this works perfectly! I'm now trying different LoRAs to achieve what I want, thank you again!
    Just one thing: the SUPIR Conditioner node doesn't have a positive prompt, just the negative, in the workflow I downloaded.
    EDIT: I realized now I downloaded the next version you made :)

  • @aleassimarro
    @aleassimarro 8 months ago +1

    Thank you. Can't say enough

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 8 months ago

    great video Steph!

  • @xyzxyz324
    @xyzxyz324 8 months ago

    Great explanation, got it working like a charm on the first try and the results were amazing! Keep it up, thank you!

    • @stephantual
      @stephantual  8 months ago

      Glad it helped! 👽👽👽

  • @WhySoBroke
    @WhySoBroke 8 months ago

    This is exactly what I was looking for... many thanks amigo!! ❤️🇲🇽❤️

    • @stephantual
      @stephantual  8 months ago

      Glad I could help! 👽

  • @OptimBro
    @OptimBro 8 months ago

    The boss is finally here 👌👌

  • @xphix9900
    @xphix9900 8 months ago +1

    Love your videos, so thanks - and you're my 2nd favorite French/English-speaking person... 1st is Charles Leclerc lol, but that's saying a lot because I'm from Mtl, Qc ;)

  • @UAknight
    @UAknight 1 month ago

    Amazing! Do you have a workflow for video Upscale with SUPIR? I really need it! Thanks

  • @clearstoryimaging
    @clearstoryimaging 8 months ago

    Thank you again for another great video!

    • @stephantual
      @stephantual  8 months ago

      Glad you like them!👽

  • @Vigilence
    @Vigilence 8 months ago +1

    The image resize node used here doesn't allow resizes past 8000 pixels. Is there a way to override this, or another node I can use?

    • @stephantual
      @stephantual  8 months ago +1

      mmm, beyond the usual 7680 × 4320 of 8K - I like that attitude! 👽 Image Resize (stock Comfy) lets me go into 6 digits, so use that :) You could also just chain models (4x, 4x, 4x, etc). That's... intense - let me know your results on Discord if you pull it off, I'm genuinely curious!

    • @Vigilence
      @Vigilence 8 months ago +1

      @stephantual Can you name the node you're referencing for me? I don't use Comfy very much, and I tested another image resize that only shows the latent type and scale number, and it seems to slow the process down for some reason compared to the Image Resize+ node you use here (where I manually input the output resolution).

  • @gcardinal
    @gcardinal 8 months ago +1

    Thank you for the video and the enthusiasm with all the details and explanations. I would kindly ask you to consider making smaller workflows. Currently there is a lot of bloat like the watermark, comments, switches, etc. I get that this is your workflow and you like to show it off - but for most people, especially those just getting started, it is too much unnecessary stuff. My suggestion is to limit the workflow to the task at hand and, if needed, post several workflows instead.
    Just my 2 cents. Thanks for the great content anyhow.

  • @SamBeera
    @SamBeera 2 months ago

    Hey Stephan, Thank you for the details. Is there a downloadable workflow json file for this? BTW the tinyurl link gives a 404 error

  • @autonomousreviews2521
    @autonomousreviews2521 8 months ago

    Fantastic :) Thank you for sharing!

  • @loubakalouba
    @loubakalouba 8 months ago

    Thank you, you are a Hero!

  • @ExacoMvm
    @ExacoMvm 5 months ago

    Doesn't work.
    Image Resize node has red outline almost immediately, no idea why:
    Error occurred when executing ImageResize+:
    'bool' object has no attribute 'startswith'

  • @Vigilence
    @Vigilence 8 months ago +1

    I noticed you link to the TCD LoRA, but the workflow doesn't reference it - should we ignore it?

    • @stephantual
      @stephantual  8 months ago

      Yeah, it's just the one at huggingface.co/h1t/TCD-SDXL-LoRA/tree/main - Kijai has now updated the GitHub to support an easy import if you really want to use it. However, that's entirely up to you of course! 👽

  • @SjonSjine
    @SjonSjine 8 months ago

    Yes! Liked! Follower! Thanks!

    • @stephantual
      @stephantual  8 months ago +1

      Welcome aboard the mothership! 👽👽👽

  • @internetperson2
    @internetperson2 8 months ago

    Another banger brother

  • @BedTimeQuest
    @BedTimeQuest 8 months ago

    Thanks, this explanation is amazing - got it working instantly! I just kept getting errors on the final image rescale node; taking the rescale_factor there down from 4 to 3 somehow fixed it, but I don't really understand what it affects. Also, in the final image I get weird added faces on objects and 'ghosts' bleeding through. Not sure if it's a problem with my settings, the original (AI-generated photo), or the checkpoint I'm using... Will have to do some more fiddling around.

    • @stephantual
      @stephantual  8 months ago

      Thanks for the feedback and your message on discord, it was very useful, new video on how to apply settings based on the input messages coming up soon! 👽

  • @cfcrow
    @cfcrow 8 months ago +1

    Not sure why I keep getting this error: "The size of tensor a (60) must match the size of tensor b (500) at non-singleton dimension 3." Without the resize node it works fine.

    • @소금-v8z
      @소금-v8z 8 months ago +1

      He has the answer at 15:35 - just set an image size that is divisible by 64, such as 1280, and redo your workflow.

  • @RobertJene
    @RobertJene 7 months ago

    1:55 - building the workflow

    • @stephantual
      @stephantual  7 months ago

      Hey, it's nice to have you here Robert, love your videos! 👍👍👍

    • @RobertJene
      @RobertJene 7 months ago

      @@stephantual oh sorry. I was making a doc with timestamps

  • @BackStab1988
    @BackStab1988 8 months ago

    After running the command at 1:35 I got this: ERROR: Exception:
    Traceback (most recent call last):
    File "C:\Program Files\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
    ^^^^^^^^^^^^^^^
    File "C:\Program Files\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
    return func(self, options, args)
    You didn't receive such a message

  • @mr.entezaee
    @mr.entezaee 8 months ago +1

    no module 'xformers'. Processing without...
    What should I do to fix it?

    • @stephantual
      @stephantual  8 months ago +1

      Don't worry about xFormers - PyTorch has come a long way and SUPIR will work without it. If you want to install it anyway, check out my other SUPIR video, it's in the description.

    • @mr.entezaee
      @mr.entezaee 8 months ago

      @@stephantual Yes, finally - I'm glad it worked for me. But I could not find the workflow for that old man image. I have some very old photos that I want to test. Can you give me a workflow for this case?

    • @stephantual
      @stephantual  8 months ago +1

      @@mr.entezaee There is no 'workflow' per image (as per 16:40) - just download the image from the magnific website, then update the settings to be in line with that source (HAT type model upscale, etc etc).

    • @mr.entezaee
      @mr.entezaee 8 months ago

      Oh, I just realized now. Thank you for the good training you provided us@@stephantual

  • @idontcare9041
    @idontcare9041 8 months ago

    Great video. Unfortunately I pretty much only get those error messages you talked about regarding resolution with SUPIR, and I have absolutely no clue why. A few times it started sampling, and most of those times I ran out of VRAM. 832*1216 should work if it needs to be divisible by 64; I tried both with F and Q and all kinds of things. It's a really great upscale technique though.

    • @stephantual
      @stephantual  8 months ago

      Hey! This was fixed by Kijai now, I believe - did you update the repo in the last 48h? The error regarding the resolution multiplier is unrelated to running out of VRAM btw, they are two distinct things. Yes, 832x1216 should fit in an 8GB envelope, especially if you set the unet to FP8 and don't use big tiles (try dropping to smaller tile sizes progressively until it fits in VRAM, using the task manager or equivalent to track usage - you'll get the hang of it!).

  • @michaelbayes802
    @michaelbayes802 8 months ago

    Hi, for some reason when executing your workflow I get an error at the step "supir conditioner" - TypeError: 'NoneType' object is not callable. Not sure how to resolve this

    • @stephantual
      @stephantual  8 months ago

      That error in Comfy is typical of one of the noodles not passing any data to an input. "Follow Execution" > find the node that breaks, stick "beautify" nodes from Trung's 0246 (or similar) in every position and find the one that says 'null' - that's the culprit. Then trace it back to where it came from and figure out why it's not passing data - solved! I might make a tutorial on how to debug Comfy errors because it's such a common question. Cheers! 👽

  • @cosmingurau
    @cosmingurau 6 months ago

    I am looking for a Windows executable for a SUPIR upscaler with just a handful of options. Has anyone found anything like that yet? I found some for REALESRGAN, so I figured there might be at least one for SUPIR, seeing how awesome it is.

  • @SkN097
    @SkN097 8 months ago

    Awesome videos! Keep it up, bro.
    There's something I'm not getting, though: how do the CFG scale start and end work? And the same question about the Control Start and End.

    • @stephantual
      @stephantual  8 months ago +1

      (oversimplification due to YT comments): it controls the application of the controlnet over time in the diffusion process. The best way to explain it is to think of it as the same as 'start_at' and 'end_at' in IPAdapter. 👽
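The start/end idea described above can be sketched in code. This is a conceptual illustration of how percentage-based start/end values gate a controlnet over the sampling schedule - the function name is hypothetical, not the actual SUPIR or ComfyUI node code:

```python
def controlnet_active_steps(start_percent: float, end_percent: float, total_steps: int) -> range:
    """Map start/end fractions (0.0-1.0) to the sampler steps where the controlnet is applied."""
    first = int(round(start_percent * total_steps))
    last = int(round(end_percent * total_steps))
    return range(first, last)

# Example: control_start=0.0, control_end=0.5 over 20 steps means the
# controlnet only influences the first half of the denoising schedule,
# where the rough composition is decided.
steps = controlnet_active_steps(0.0, 0.5, 20)
```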

  • @headscout
    @headscout 7 months ago

    Can I change the 'Interrogator' to Gemini Pro by Google?

  • @BackStab1988
    @BackStab1988 8 months ago

    also got this message after opening a workflow: When loading the graph, the following node types were not found:
    GetNode
    CR LoRA Stack
    CR Apply LoRA Stack
    SetNode
    Bookmark (rgthree)
    SUPIR_decode
    ColorMatch
    CR Simple Text Watermark
    SUPIR_first_stage
    SUPIR_encode
    PlaySound|pysssss
    ImageResize+
    GetImageSize+
    SimpleMath+
    SUPIR_model_loader_v2
    Image Comparer (rgthree)
    SUPIR_conditioner
    Fast Groups Bypasser (rgthree)
    Image Resize
    Integer
    SUPIR_sample
    Nodes that have failed to load will show as red on the graph.
    Nothing works. That's a pity.

    • @donzitrone
      @donzitrone 8 months ago

      You have to install the missing custom nodes.

  • @TheGladScientist
      @TheGladScientist 8 months ago

    awesome video! if you wanted to use this for a video, what would be the recommended approach?

    • @stephantual
      @stephantual  8 months ago +1

      I do a lot of video upscaling with SUPIR these days. I found the best approach is:
      a) upscale, then SUPIR (the standard way)
      b) then pass every frame to AD + LCM (or Lightning or whatever you prefer) directly into Ultimate SD Upscale to maintain temporal consistency.
      This is because otherwise the output will be 'grainy' and 'flicker'. I have a demo of this at flowt.ai/community/universal-video-generator-and-upscale-v4-lgdkt-f

  • @fabiotgarcia2
    @fabiotgarcia2 8 months ago

    Does it work for Mac M2 Pro Max?

  • @iFilipis
    @iFilipis 8 months ago

    I spent a full hour trying to figure out whether what was published on flowt.ai actually works. For me it just shows the custom nodes in red, and there's no documentation anywhere on how to make it work. Or is it not meant to run there, only locally?

    • @stephantual
      @stephantual  8 months ago

      Good question. At the moment, it's my understanding that SUPIR does not operate on any SaaS service (unless you use a VPS and put it there yourself) due to a) a non-commercial license, b) no reply from the original devs (of the algo). Hopefully this changes soon. And yes it's purely for download purposes. 👽

  • @mr.entezaee
    @mr.entezaee 8 months ago

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fastai 2.7.11 requires torch=1.7, but you have torch 2.0.1 which is incompatible.
    simple-lama-inpainting 0.1.2 requires numpy=1.24.3, but you have numpy 1.22.4 which is incompatible.
    simple-lama-inpainting 0.1.2 requires torch!=2.0.1,>=1.13.1, but you have torch 2.0.1 which is incompatible.
    torchtext 0.14.1 requires torch==1.13.1, but you have torch 2.0.1 which is incompatible.
    xformers 0.0.25 requires torch==2.2.1, but you have torch 2.0.1 which is incompatible.

    • @mr.entezaee
      @mr.entezaee 8 months ago

      I am an amateur. Can someone tell me step by step what I should do? Please.
      Oh yes, it was finally fixed with this:
      ......\python_embedded\python.exe -m pip install -r ./requirements.txt

  • @RayDusso
    @RayDusso 8 months ago

    I don't get it - you say "if you haven't installed SUPIR yet", then right after you give the command to install the requirements in the SUPIR folder. I don't have a SUPIR folder, since I haven't installed it yet.

    • @stephantual
      @stephantual  8 months ago

      Yeah it's tricky to cover 'most scenarios' especially when there are so many platforms, good point. But basically all they need to do now is hit 'install' in manager, run requirements (possibly, this varies), install xformers (if they want to AND don't already have it)... you see how all this conditional logic gets pretty hard to make a meaningful video after just a few branches :) Cheers!

  • @timothykrell
    @timothykrell 8 months ago

    I get an error "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!" when the SUPIR sampler runs. Anyone know what that problem might be?

    • @stephantual
      @stephantual  8 months ago

      Answer at 16:28 👽

    • @timothykrell
      @timothykrell 8 months ago

      @@stephantual That was it! Thank you! I thought the denoised_latent could be passed directly to the sampler's latent input. Should have watched that end bit!

  • @jocg9168
    @jocg9168 8 months ago

    Very interesting. I need to figure out how to make this work with less RAM - I have 32GB, but my GPU VRAM is 24GB and my poor 32GB of system memory usually runs out.

    • @stephantual
      @stephantual  8 months ago

      Ah, you mean regular, old boring DDIM? That seems to be mostly linked to the encode/decode process. Drop the precision on those and limit the tile size. There's no shame in using tiny tiles 😹👽

  • @pabloruizgarcia5628
    @pabloruizgarcia5628 8 months ago

    Will this work with a 1080Ti with 11GB vRAM?

    • @stephantual
      @stephantual  7 months ago

      Yes - SUPIR is not VRAM-bound, in the sense that it scales linearly with the size of the image you pass in. Pass it a meme that's 320p and it will run on 6GB of VRAM. Pass it a 4K frame and I don't think there's any local card that could give it the VRAM it would require. The best upscale I managed was 2K with 24GB of VRAM, but the good news is SUPIR is NOT an upscaler, it's a controlnet. So do a 1.2x upscale, loosen the controlnet, raise the CFG and you're golden. I have a video coming up on just that :) 👽

  • @ALEXEINAV
    @ALEXEINAV 8 months ago

    ERROR: Invalid requirement: '/requirements.txt' ?

    • @stephantual
      @stephantual  8 months ago

      The syntax depends on your platform (win/mac/osx) - just use what's appropriate - (hint: use 'tab' to cycle through the various files, and remember, it's in the SUPIR custom node folder, not the one above :))

  • @97BuckeyeGuy
    @97BuckeyeGuy 8 months ago

    Is there any chance you could release a video and workflow using SUPIR to upscale an AnimateDiff video? 😊

    • @stephantual
      @stephantual  8 months ago

      I'm currently updating my general video-generation workflow to support SUPIR v2, I'll push it to flowt.ai when it's all done :)

    • @97BuckeyeGuy
      @97BuckeyeGuy 8 months ago

      @@stephantual You're a good man. Thank you.

  • @freefryz462
    @freefryz462 7 months ago

    This does not work with the latest versions of ComfyUI and the SUPIR nodes. No matter which checkpoint/sampler/steps/cfg I use, the image just comes out much noisier than the original - it even adds some details, so functionally it's attempting to do the same thing the standalone does, but fails horribly. Tested with multiple pictures, different lighting, checkpoints, etc. I even tried your older workflows and it's the same story there, at least on the latest versions.

    • @stephantual
      @stephantual  7 months ago +1

      I have it updated again at app.flowt.ai/flow/6605cc9703edf98c9e73567f-v. Currently recording video 3 - things move VERY fast. The principles do work though; I've spent 5 days recording this new 'complete guide' that should solve every issue. The trick is to understand what each parameter does, as SUPIR runs on a totally different pipeline. 👽

  • @pawansharma-lw9ny
    @pawansharma-lw9ny 8 months ago

    Can I run SUPIR on a Mac Studio M2 Max with 32GB?

    • @stephantual
      @stephantual  8 months ago

      I heard it runs on Mac M2s, yes - however I don't have a Mac so I can't 100% verify this information. 👽

    • @pawansharma-lw9ny
      @pawansharma-lw9ny 8 months ago

      @@stephantual Thanks for the response, but I don't think it works on Mac. I have 32GB of RAM but I'm still getting an LLVM error.

    • @stephantual
      @stephantual  7 months ago

      The LLVM error is going to be linked to a vision model, likely moondream in this case - just ditch it and try SUPIR from there with a regular text prompt :)

  • @goodie2shoes
    @goodie2shoes 8 months ago +3

    Luckily I can play this at 0.75 or 0.5 speed. You are going so fast (and I'm an old f(*k )

    • @stephantual
      @stephantual  8 months ago +1

      Sorry about that :) 👽

    • @sbfisher
      @sbfisher 4 months ago +2

      Sometimes it's very fast if you're trying to see a detail of something you did, but overall it's fine since I would rather have a shorter video so it's easier to get past parts I'm not as interested in. It works to just slow down and replay relevant sections where someone needs to see the details. I'm just glad someone gave details and is documenting things.

    • @livinagoodlife
      @livinagoodlife 2 months ago

      Learn the keyboard shortcuts: Space to pause, arrow keys to go back and forward 5 seconds, etc.

  • @huytuenguyen6917
    @huytuenguyen6917 3 months ago

    Can it be done on a video?

  • @franlp32
    @franlp32 8 months ago

    Seems like your 3D usage is spiking a lot when running generations. I had the same issue when I had crystools monitor enabled.

    • @stephantual
      @stephantual  8 months ago

      Thing is, I use ShareX to record at 4K lossless, and it's violent - so it's very hard for me to run benchmarks. I try to take screenshots of the task manager between renders 👽👽

    • @franlp32
      @franlp32 8 months ago

      @@stephantual If you have 3D usage spiking when not recording, try disabling the Crystools monitor.

    • @weirdscix
      @weirdscix 8 months ago

      @@stephantual OBS would be better, it can take full advantage of NVENC encoding.

  • @kolkutta
    @kolkutta 8 months ago

    Is it still impossible to use with 8GB of RAM?

    • @stephantual
      @stephantual  8 months ago +1

      Works with an FP8 unet, but evidently it correlates directly with the original size of the image (and the upscale needed) - I'm working on a cloud-based workflow so everyone can try it 👽

  • @SyamsQbattar
    @SyamsQbattar 6 months ago

    Please link the Civitai page.

  • @kattamaran
    @kattamaran 8 months ago

    Put 64 into the resize node's "multiple of" field.

    • @stephantual
      @stephantual  8 months ago

      Hahah yeah, you're right - but I got a lot of negative comments for putting in 'too many nodes'. Evidently you can just use comfyMath (at least that's what I do) and find an equivalent multiplier with a quick division of the original by 64, round up to an integer, then multiply again. I'm sure someone will come up with an all-in-one, but since this will likely get fixed, I didn't want to complicate things 👽👽
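The divide / round-up / multiply trick described above can be written out as a few lines. This is a minimal Python sketch of the same arithmetic the comfyMath nodes would perform (the function name is mine, not a ComfyUI node):

```python
import math

def round_up_to_multiple(value: int, multiple: int = 64) -> int:
    """Round a dimension up to the nearest multiple: divide, ceil to an integer, multiply back."""
    return math.ceil(value / multiple) * multiple

# Example: a 1000x1499 source becomes 1024x1536 - both divisible by 64,
# which avoids the tensor-size mismatch errors mentioned in this thread.
width, height = round_up_to_multiple(1000), round_up_to_multiple(1499)
```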

  • @nkofr
    @nkofr 8 months ago

    Brilliant!

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 8 months ago

    I kind of got lost in the middle. It's probably because I haven't seen your first video about SUPIR.

    • @stephantual
      @stephantual  8 months ago +1

      I'd recommend downloading the workflow itself and dissecting it. I try not to 'bunch the nodes together' so it's easier to follow when learning. That's pretty much how we're all learning - one node at a time! 👽 Good luck!

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 8 months ago

      @@stephantual thanks a lot

  • @firasfadhl
    @firasfadhl 8 months ago

    APISR 👀👀?

    • @stephantual
      @stephantual  8 months ago

      I know right! Moves so fast! 👽

  • @Razunter
    @Razunter 8 months ago

    Video title has a typo

    • @stephantual
      @stephantual  8 months ago +1

      A typo? No... no, I don't see it... 😅 Jokes aside, thank you so much, I need to stop pulling all-nighters! Appreciated! 👽👽👽

  • @RobertJene
    @RobertJene 7 months ago

    Could you please cut the high end on your vocal track? Thanks

    • @stephantual
      @stephantual  7 months ago

      Yup! It's a huge pain to get good sound. I'd love some help actually - this was recorded on a Rode NT-USB, and I just got a Shure MV7 where I pass the sound through Resolve instead of Audacity. If you know how to make it sound good and professional, please let me know on Discord :) 👽

  • @epelfeld
    @epelfeld 8 months ago

    Very interesting, but too fast - it's hard to follow what's going on. Thank you.

    • @stephantual
      @stephantual  8 months ago

      I know - it's difficult to maintain a pace that works for everyone. I reckon some will play it at 2x, others at 0.75x. :) 👽

  • @kalicromatico
    @kalicromatico 8 months ago

    777!

  • @iamarto
    @iamarto 2 months ago

    Not for beginners.

  • @octopuss3893
    @octopuss3893 17 days ago

    #404

  • @ThoughtFission
    @ThoughtFission 8 months ago +1

    Amazing video. But OMG, slow down! Some of us are trying to learn, and you go sooooo fast and skip over things that are really important that a newbie won't know or understand.

    • @stephantual
      @stephantual  8 months ago +1

      Heheh, sorry - I suppose I am SO worried about making a boring video with just a screen recording and me talking over it that sometimes I get a *tiny* bit too excited. I'll try to organize myself to still be concise but keep a better pace. Thanks for the feedback! 👽

    • @ThoughtFission
      @ThoughtFission 8 months ago

      @@stephantual🙂

    • @BadgerDogCat27
      @BadgerDogCat27 7 months ago

      @stephantual Your content is definitely not boring. Could you elaborate more on installing the Moondream interrogator? Do you have instructions on how to install it? Do I just run the Python script?

    • @KOSMIKFEADRECORDS
      @KOSMIKFEADRECORDS 2 months ago

      No, it's refreshing - just rewatch parts. I got it up and running in no time, just by pausing between steps and using a bit of logic for the really brief parts. DON'T SLOW DOWN please.

  • @twilightfilms9436
    @twilightfilms9436 8 months ago +1

    Comfy died before its inception. Nodes are not wanted by artists, period. Steve Jobs said it back in 1999...

    • @b4ngo540
      @b4ngo540 8 months ago +5

      lol

    • @ChrisGunningham-t1x
      @ChrisGunningham-t1x 8 months ago

      Apple bought Shake in 2002... just saying

    • @stephantual
      @stephantual  8 months ago +1

      Tempted to pin this :) 🛸

    • @albert93911
      @albert93911 8 months ago +3

      True. Nobody wants Houdini. Or Blender. Or Nuke. Or Davinci Resolve. Nobodeh!

  • @dan_VFX
    @dan_VFX 8 months ago

    Hi, I'm on a Mac M2 and I'm getting this error:
    Error occurred when executing SUPIR_first_stage:
    No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 3136, 1, 512) (torch.float32)
    key : shape=(1, 3136, 1, 512) (torch.float32)
    value : shape=(1, 3136, 1, 512) (torch.float32)
    attn_bias :
    p : 0.0
    `ck_decoderF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    device=mps (supported: {'cuda'})
    operator wasn't built - see `python -m xformers.info` for more info
    `ckF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    device=mps (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    operator wasn't built - see `python -m xformers.info` for more info
    Any idea on how to fix it? Thanks