It's very carefully produced and often very long. Thank you for sharing; it inspires me a lot. I'll keep watching.
"Hey, you still need to improve, classmate," haha, Datou is funny.
I tried Ollama Vision: the llama3 model can't recognize images. Only after switching to WD 1.4 tagger to extract the features, then having Ollama rework them, did the whole workflow run through.
llama3 is blind, buddy. For image recognition use llava-phi3:3.8b-mini-fp16.
Sadly I don't have much VRAM. I did install Ollama a while back and it runs, but running it alongside ComfyUI seems to be wishful thinking. How was the animated effect on that Jacky Cheung image done?
For the animated effect, see this video: ruclips.net/video/_dfb8qS_YnU/видео.html
Swap the Ollama node for a Gemini node and it won't use local compute at all, and the model is more capable too. If that still fails, drop the automatic prompt generation with the language model and use a WD14 tagger node or type the prompt by hand.
@@Datou1977 Got it, boss, I'll go try that right now. Haha.
The most detailed workflow I've ever seen, complete with model download links and parameter settings. You are incredibly thorough.
I love Datou's workflows and videos. Liked!
In Datou's workflow, the AUX depth-anything node errors out every time for me, so I had to switch to Marigold.
@@黄旻斐 Marigold's depth maps are more precise and sharper, so the results are better; it's just a bit slower.
Thanks for sharing. After several days of tinkering I can finally generate images, but the results fall short of the pictures on OpenArt. How should I tune it?
I can't tell what the gap is without seeing it, and with the same models the workflow should produce the same results; they shouldn't differ much.
It's very helpful.
I've been fighting this for three days; please help. It's the same CLIP error, on the IPAdapter Style & Composition SDXL node: Error occurred when executing IPAdapterStyleComposition:
Missing CLIPVision model.
File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/Volumes/Lei’s2T/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 661, in apply_ipadapter
raise Exception("Missing CLIPVision model.")
I've also copied CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors and CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors into the clip_vision folder.
@@ChoRay-8204 First, you should download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and put it under ComfyUI/models/clip_vision. After that, quit and restart ComfyUI and run the whole process again. It should work.
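For reference, assuming a default install layout, the file should end up at ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors, with the filename kept exactly as downloaded.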
Datou, can my 2070S GPU handle the llava:7b-v1.6-mistral-fp16 model?
Use this model instead; it only needs 2.3 GB: ollama.com/library/llava-phi3
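For example, in cmd: ollama run llava-phi3 (it downloads on first run; then fill that same model name into the ComfyUI node).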
👍👍👍 Awesome!
Error occurred when executing OllamaVision:
llama runner process has terminated: exit status 0xc0000005
Ollama is installed and running, and the models are all downloaded. Why do I still get this error? Please help; I'm stuck here.
Keep an eye on your VRAM usage, or first run the Ollama model from cmd to see whether it can chat normally.
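For example, open cmd and run ollama run llava:7b-v1.6-mistral-fp16, then ask it anything. If the chat works there but it still crashes inside ComfyUI, the likely culprit is VRAM running out with both the language model and the diffusion model loaded.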
Hi Datou, at runtime I get raise Exception("IPAdapter model not found."), but I definitely created an ipadapter folder under models and put ip-adapter-plus_sd15.safetensors in it. What's going on? Please advise 😂
You need the SDXL IPAdapter model; in theory it downloads automatically. huggingface.co/h94/IP-Adapter/tree/main/sdxl_models
@@Datou1977 Thanks a lot, it works now. I have much to learn from you 😁
When I run it I get "Error occurred when executing OllamaVision". How should I adjust things? As far as I can see, Ollama runs fine locally.
Maybe try a different vision model: use llava-phi3:3.8b-mini-fp16.
@@Datou1977 I pulled and ran the Ollama models directly, so in the vision model field I only entered llava/llama3. Does that matter?
@@justtomatoo It matters. The default names resolve to the quantized, smaller models, so their capability is reduced.
For the Ollama node and model, if nothing is deployed locally, can it call some hosted platform instead?
The Gemini node is a good substitute.
@@Datou1977 How exactly do I set that up?
@@binary6699 See the instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini
Datou, which models am I missing when the following error occurs?
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 525, in load_models
raise Exception("IPAdapter model not found.")
You probably need to download ip-adapter_sdxl_vit-h.safetensors and put it in the ComfyUI\models\ipadapter directory.
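Assuming a default install, the final path would be ComfyUI\models\ipadapter\ip-adapter_sdxl_vit-h.safetensors. The file itself is in the sdxl_models folder of the h94/IP-Adapter repo linked earlier in this thread.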
I don't know the name of that one; she looks like Jessica Chastain.
Right, Jessica Chastain.
Datou, the sigmas node shows up red, and I can't find it in the Manager. Which plugin does it come from? I'm using the bundled ComfyUI package.
It's built into ComfyUI itself. Upgrade to the latest version and it should appear; give that a try.
Are the llama models downloaded via cmd all stored on the C drive by default? Is there a way to find them and move them to another drive so they're still recognized? My C drive is flagged red; it's almost full.
Ollama seems to install to the C drive by default, and it has no settings UI, so there's nothing to change there. My whole disk is one unpartitioned C drive. If yours is partitioned, you could resize the C drive with partitioning software (a bit risky; be careful).
It can be changed, via an environment variable. There's a tutorial.
Here: x.com/xulzy_6/status/1787684486815396073
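If memory serves, the variable is OLLAMA_MODELS: point it at a folder on another drive, for example setx OLLAMA_MODELS "D:\ollama\models" in cmd (the path is just an example), then restart Ollama so it picks up the new location.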
I installed Ollama and can chat with it in the cmd window, but running ComfyUI throws an error. Do I need to rename something? I also can't find where it put the downloaded models.
Error occurred when executing OllamaVision:
model 'llava:7b-v1.6-mistral-fp16' not found, try pulling it first
You probably haven't downloaded that model. In cmd, enter ollama run llava:7b-v1.6-mistral-fp16; once the download finishes, call it from ComfyUI.
Can ComfyUI run this with 16 GB of VRAM?
If you swap the Ollama node for the Gemini node, so the language model doesn't run locally, generating the clay images needs only 14 GB of VRAM. If you use a local model, download a smaller one.
How do I fix the error saying ClipVision can't be found?
Which node reports it?
First, you should download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and put it under ComfyUI/models/clip_vision. After that, quit and restart ComfyUI and run the whole process again. It should work.
@@Datou1977 Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
raise Exception("ClipVision model not found.")
@@Datou1977 Exception during processing!!! ClipVision model not found.
Traceback (most recent call last):
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
raise Exception("ClipVision model not found.")
Exception: ClipVision model not found.
@@zock-h8s Is CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in the right place? Did you restart ComfyUI?
Brother Wang is awesome!
Datou, what are your machine's specs?
x.com/datou/status/1643088563855302656?s=46&t=q-CkUEteWAvmSTc8lwBgUA
How do I edit the prompt manually?
Create a CR Text node, type your prompt into it, and connect it where the automatic prompt used to go.
Boss, where can I download the 3.0 prompt?
The 3.0 prompt is in the workflow from the latest comic-to-realistic episode; 3.0 is a bit more stable.