Your explanation is so detailed. I'd been studying this for days, and you're finally someone who explains the underlying principles. Thank you! Looking forward to an episode on image-to-video, thanks!
Came over from Bilibili, and I'm grabbing the first comment here~😁😁😁
Impressive, you even know how to get over the Great Firewall.
I really like your course!! Please keep the updates coming~
Thanks for the support, updates will keep coming!😋
You're a great teacher, the best!
This is the most thorough explanation I've seen in any video.
Time for an update, my friend~! Thanks!
Supporting you, hope the videos keep getting better. Waiting for your next update.
Thanks for the support!😄
Working through the original papers is really tough, thanks for the clear, accessible explanation🥰
Thanks for the comment and the support!😀
I've got the basics down, looking forward to your advanced tutorials.
Just waiting for your next update.
Sitting here waiting for the update.
Thanks for all your hard work.
I'd been researching this on my own for days, hitting all kinds of environment and connection errors... Thanks for sharing, the logic is clear. Looking forward to more updates.
Many thanks!🎉🎉🎉
Where can I find the Efficient Loader? I've searched for ages and can't find where to download it, and even after downloading I don't see your nodes.
A question for you: why does connecting the Efficient Loader straight to the sampler generate images just fine, but as soon as I add the AnimateDiff Loader nothing usable comes out and the result is all noise?
I set my random seed to fixed, but after adding animatediff it still seems to generate all sorts of different images instead of a continuous sequence. Why is that?
The effect strength may be set too low.
If you were cast in American Gods, you'd surely be the god of comfyui.
Haha, that comparison is far too generous!!
Thank you🙏
Thanks for the comment and support!🙏
A great video tutorial... detailed and powerful.
Thanks for the support😁
A question: why doesn't my Video Combine node show a preview? And if I'm using an SDXL model, is there an animatediff that works with it?
Teacher Ouyang, while generating images I get: Requested to load AutoencoderKL
Loading 1 new model
ERROR:asyncio:Exception in callback _ProactorBasePipeTransport, followed by "an existing connection was forcibly closed by the remote host". How can I fix this error?
Really awesome.
Thanks for the support😄
22:39 Where do I get this animatediff controlnet checkpoint model?
Actually it should be the controlnet checkpoint.
Hello, what is the keyboard shortcut for increasing a prompt term's weight?
Ctrl + Up/Down arrow keys.
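For readers following along, here is a small illustration (the prompt text is a made-up example, not from the video) of what that shortcut edits: ComfyUI wraps the selected words in its (text:weight) syntax and steps the weight up or down, with the step size depending on your version and settings.

```python
# Hypothetical prompt strings showing ComfyUI's (text:weight) syntax.
# Selecting "blue hair" and pressing Ctrl+Up turns the first string into
# something like the second; Ctrl+Down lowers the weight again.
prompt_before = "1girl, blue hair, city street, sunset"
prompt_after = "1girl, (blue hair:1.1), city street, sunset"
```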
@@OUYCC Thank you! One more question: if I switch the model to an SDXL one, does the workflow need to change? After latent upscaling, the sampler errors out saying the matrix blocks don't match and can't be multiplied.
@@杨一帆-l8h Once the base model is swapped, the AnimateDiff motion model and the ControlNet models both need to be replaced with SDXL-compatible ones, so there are quite a few changes.
@@OUYCC Could you make an advanced video on this later? Sometimes I really do need SDXL models for a job, but I don't know where to find the matching AD and CN models, and latent upscaling keeps causing problems. Thanks for your patient answers.
Teacher, on a Mac Studio with the M2 chip the frame interpolation plugin throws an error. How do I fix it:
!!! Exception during processing !!!
Traceback (most recent call last):
/*
file path
*/
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
Uh, sorry. Since that's not the setup I use, any advice I give probably won't be much of a reference. For this kind of issue you'll need to do some googling😂
The M2 GPU has issues with this plugin. You need to render in two steps: first render the frames on the GPU, then do the frame interpolation on the CPU.
Did you get it solved?
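A side note on the RuntimeError above: "failed finding central directory" usually means the checkpoint file PyTorch is opening is not a readable zip archive, which is typical of an incomplete or corrupted download. Before digging into the GPU/CPU workaround, it can be worth ruling that out; here is a minimal sketch, with a hypothetical path.

```python
# Quick integrity check: checkpoints saved with torch.save use the zip format,
# so a file that zipfile cannot open is almost certainly truncated or corrupted
# and should simply be re-downloaded.
import os
import zipfile

ckpt_path = "ComfyUI/models/checkpoints/some_model.ckpt"  # hypothetical path, adjust to the file that fails

print("size (bytes):", os.path.getsize(ckpt_path))
print("valid zip archive:", zipfile.is_zipfile(ckpt_path))
```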
I'm one of the "many friends out there" you mention.
Which video comes after this one?
Hello Teacher Ouyang, why is it that when I right-click the Prompt Schedule (Batch) node in comfyui I don't get the usual list of convert options underneath? "Convert batch size to input" isn't there either. Is it because my nodes aren't updated to the latest version?
It's the opposite: Fizz was updated. The newer Prompt Schedule node exposes several of those values as separate inputs, so you need to feed the parameters in with an int-type node or similar.
@@OUYCC Thanks for answering. Could you walk me through how to do it? Should I install the int-type node you mentioned, or something else? I've searched online but there seems to be very little about this, and I'm still not very familiar with comfyui. Thanks.
@@OUYCC Teacher Ouyang, on my end the convert options are missing when I right-click other nodes too, including the Efficient Loader.
A question: the Frame-Interpolation plugin fails to install from the Manager, and cloning it via URL also fails. What's going wrong?
It's probably an import failure. In that case you have to track down plugin conflicts; at the moment there's no clean fix for this kind of problem other than rebuilding the environment.
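One thing that is sometimes worth trying before rebuilding everything (a hedged sketch, not the video's method): reinstalling the node pack's own Python dependencies into the interpreter that runs ComfyUI, since many IMPORT FAILED cases come down to missing packages. The folder and file names below are assumptions; adjust them to your install.

```python
# Reinstall a custom node's dependencies with the same Python that launches
# ComfyUI (for the portable build, run this with python_embeded\python.exe).
import pathlib
import subprocess
import sys

node_dir = pathlib.Path("ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation")  # assumed folder name
requirements = node_dir / "requirements.txt"  # only if the node pack ships one

if requirements.exists():
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", str(requirements)])
else:
    print("No requirements.txt found; check the node's README for its install steps.")
```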
Why does it error out the moment I start generating? I've switched through a bunch of install packages and uninstalled and reinstalled over and over... it just won't work. Ouyang, any ideas? Error occurred when executing KSampler (Efficient):
module 'comfy.sample' has no attribute 'prepare_mask'
A question for you: on startup I get this error:
[VideoHelperSuite] - WARNING - Failed to import imageio_ffmpeg
[VideoHelperSuite] - ERROR - No valid ffmpeg found.
The suite's path is correct and adding the nodes works fine, but combining into a video throws a string of errors.
Do I need to install ffmpeg 7.0?
Yes, the environment is missing the ffmpeg library and you need to install it. It's very simple, just search online for the install steps!
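For reference, a small check you can run in the same Python environment (a sketch, assuming a standard setup): judging by the two messages above, VideoHelperSuite first tries the imageio-ffmpeg package and then looks for an ffmpeg binary, so installing imageio-ffmpeg with pip is usually enough and does not require a system-wide ffmpeg 7.0.

```python
# Check what ffmpeg the environment can currently see. If both checks fail,
# `pip install imageio-ffmpeg` (which bundles an ffmpeg binary) usually fixes it.
import shutil

try:
    import imageio_ffmpeg
    print("imageio-ffmpeg bundled binary:", imageio_ffmpeg.get_ffmpeg_exe())
except ImportError:
    print("imageio_ffmpeg is not installed -> pip install imageio-ffmpeg")

print("ffmpeg on PATH:", shutil.which("ffmpeg") or "not found")
```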
Why does the final video look so blurry? Is it because of the low 512 resolution earlier on?
Which folder does animatediff_controlnet_checkpoint.ckpt go into? Sorry, I'm a beginner.
I tried the Quark drive link and have figured it out already. Thank you, sorry for the trouble.
A question, teacher:
My Fizz nodes just won't install successfully.
No matter what I do, it shows (IMPORT FAILED).
That's likely an environment or system conflict involving the plugins and the environment as a whole. For (IMPORT FAILED) cases I haven't found a clear cause or a better fix yet either.
@@OUYCC Alright then.
My prompt schedule only generates the content at second "0"; the later prompts, like turning into a motorcycle, have no effect at all. I have no idea why, and nothing errored during installation either.
I found the cause: there are two prompt schedule nodes, one with "Batch" noted after the title and one without, and I had used the one without. That raises a new question though: what is the difference between the two, and under what circumstances does the other prompt schedule actually take effect?
They're two different workflows; that other prompt node has to be driven separately by a scheduler.
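To make the difference concrete, here is a hedged example (frame numbers and prompts are invented, and the exact field layout may differ between FizzNodes versions) of the keyframed text that the batch-style Prompt Schedule expects. Each entry maps a frame index to the prompt that takes over from that frame, which is why a node fed a single plain prompt only ever shows the "0" content.

```python
# Example keyframe text for a batch prompt schedule node (values are made up).
# Frame 0 starts with the first prompt; from frame 16 the second prompt takes over.
schedule_text = '''
"0": "1girl walking down a city street",
"16": "a motorcycle driving down a city street"
'''
```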
Teacher, why does a fixed seed give a different result from before as soon as it goes through the animatediff loader?
Essentially, once animatediff is added, the result is effectively that of a different model and no longer has much to do with the original one.
@@OUYCC Then is there still any point in rolling images first and fixing the seed?
Could you share your computer specs?
I mostly run the video workflows on a server; my local machine is fairly slow.
Hey, which folder do the comfyui AnimateDiff loader models go in?
Do you mean the motion models?
@@OUYCC Thanks, solved.
Please, could you turn on English subtitles? 😢 Please.
I've added Chinese subtitles to the video, so you can enable YouTube's auto-translate feature for subtitles to view real-time English translations. I hope this helps! If you have any questions, feel free to leave a comment at any time!😊
@@OUYCC Thanks for the reply, but adding only Chinese subtitles doesn't let YouTube translate them into English; it only shows Chinese. Please add an English subtitle track too, and please add English subtitles to the recent animatediff video as well.
What's the keyboard shortcut for search?
Double-click the mouse.
How do I install the Efficiency version of the KSampler node?
See the section on how to install plugins in the basics tutorial.
@@OUYCC Got it installed, thanks.
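For anyone who prefers installing manually rather than through ComfyUI-Manager, the usual pattern is to clone the node pack's repository into the custom_nodes folder and restart. A hedged sketch follows; the repository URL is my assumption for the Efficiency Nodes pack, so verify it on the node's page before using it.

```python
# Clone a custom node pack into ComfyUI/custom_nodes, then restart ComfyUI
# so the new nodes (including the Efficient KSampler) get registered.
import subprocess

CUSTOM_NODES_DIR = "ComfyUI/custom_nodes"  # adjust to your install location
REPO_URL = "https://github.com/jags111/efficiency-nodes-comfyui"  # assumed URL; verify first

subprocess.check_call(["git", "clone", REPO_URL], cwd=CUSTOM_NODES_DIR)
```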
The weird frame where the model's right hand turns into a leg doesn't seem fixable.
True, if the deviation is too large it can't be fully repaired😅
@@OUYCC Can't it be erased in the video? It can be erased in a still image.
Why do the images come out all garbled as soon as I connect animatediff?
Most likely the AD settings are wrong.
@@OUYCC ?
Error occurred when executing KSampler:
0
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1373, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1343, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 520, in motion_sample
    latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
    return orig_comfy_sample(model, *args, **kwargs)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 111, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 801, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 703, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 690, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 669, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 574, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\.ext\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 771, in sample_ddpm
    return generic_step_sampler(model, x, sigmas, extra_args, callback, disable, noise_sampler, DDPMSampler_step)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 760, in generic_step_sampler
    denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 297, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 656, in __call__
    return self.predict_noise(*args, **kwargs)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 659, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 625, in evolved_sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch_wrapper(model, [cond, uncond_], x, timestep, model_options)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 904, in calc_cond_uncond_batch_wrapper
    return comfy.samplers.calc_cond_batch(model, conds, x_in, timestep, model_options)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 200, in calc_cond_batch
    c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 696, in get_control_inject
    return self.get_control_advanced(x_noisy, t, cond, batched_number)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 32, in get_control_advanced
    return self.sliding_get_control(x_noisy, t, cond, batched_number)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 79, in sliding_get_control
    return self.control_merge(None, control, control_prev, output_dtype)
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 812, in control_merge_inject
    x = control_output[i]
Everything earlier worked when I followed along with you, but after adding the Apply Advanced ControlNet node, the KSampler after it throws the red error above. What could be the cause?
Ouyang, you're awesome!
Simple + simple = I'm lost. The difficulty jumped a bit and I can't keep up.
Take your time digesting it. To meet everyone's requests this episode skips ahead a bit; you can come back to it later😄
I had everything built up step by step and working, the tests could even generate video. Then I couldn't resist clicking update, and after restarting it errors out. Why is COMFYUI always like this? What a headache.
My approach: I back up everything except the model folder, and whenever something breaks I just restore. The data isn't large anyway.
@@kxc90611 Good point, I'll set that up too, thanks.
Exposed! So you're from Xinjiang.
Uh? What do you mean, Xinjiang? I don't get what you're saying.
@@OUYCC Just kidding, don't mind it.