Installing ComfyUI from Scratch on a Clean Windows Machine, Fixing the InstantID Errors, and Running the Workflow

  • Published: May 11, 2024
  • After posting the workflow and the video, the most common comments I received were about how to install and run everything and how to fix the problems that come up. So today I start from a clean Windows machine and walk you through installing ComfyUI step by step, getting Ollama and InstantID working, and reaching the pinnacle of life.
    Text version of the ComfyUI installation tutorial: openart.ai/workflows/datou/ma...
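
    For reference, here is a minimal command-line sketch of a typical manual install on a clean Windows machine, assuming Git, Python, and an NVIDIA driver are already present; the exact PyTorch index URL depends on your CUDA version, and the linked text tutorial remains the authoritative version:

      git clone https://github.com/comfyanonymous/ComfyUI.git
      cd ComfyUI
      python -m venv .venv
      .venv\Scripts\activate.bat
      pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
      pip install -r requirements.txt
      python main.py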

Comments • 39

  • @linkairq8400 • A month ago

    Boss, this tutorial comes at just the right time for beginners like me 👍👍

  • @NeoAnifuture • A month ago

    These are all great features. Thanks for sharing. The Ollama part is really convenient, thanks.

  • @35wangfeng • A month ago

    So detailed. Thanks, boss.

  • @user-si4sd5pk7c • 19 days ago

    Thank you for the lesson.

  • @MrShuangmu • A month ago

    Amazing.

  • @user-zv4ui2uj6u • A month ago

    Boss, you finally posted it. I spent all of yesterday trying to get InstantID running and couldn't manage it, especially the CUDA environment setup; it kept throwing errors and refusing to run...

  • @origeniuslaw3288 • A month ago

    Nice. Remember to post a copy on Bilibili too.

  • @user-sm3yd7zr1c • A month ago

    Hi boss, the step where I run pip install insightface inside the virtual environment failed. What could be the reason?

    • @Datou1977 • A month ago

      Did you install VS?

    • @user-sm3yd7zr1c • A month ago

      @@Datou1977 Yes, it's already installed.

    • @user-sm3yd7zr1c • A month ago

      @@Datou1977 I had closed that earlier CMD window, then ran "cd .venv", "cd Scripts", "activate.bat" to get back into the virtual environment, and then ran pip install insightface again, and it said the permissions were insufficient.

    • @user-sm3yd7zr1c • A month ago

      It's sorted out now, hehe. But brother Datou, I found my GPU probably isn't up to it. For the Ollama model, can that llava-7 one be replaced with something smaller?

    • @Datou1977 • A month ago

      @@user-sm3yd7zr1c You can give it a try. A smaller one will be a bit worse, but not by a lot.
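
      A rough sketch of the commands discussed in this thread, assuming ComfyUI lives in C:\ComfyUI (a placeholder; use your own path) and that insightface is being built from source, which is why the Visual Studio C++ Build Tools are needed; running CMD as administrator is one way around the "insufficient permissions" error:

        cd /d C:\ComfyUI
        .venv\Scripts\activate.bat
        pip install insightface

      If the build still fails, check in the Visual Studio Installer that the "Desktop development with C++" workload is actually installed.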

  • @kong8865 • 29 days ago

    Can it be installed on Colab? My Windows machine is a bit too old.

    • @Datou1977 • 29 days ago

      I'm not familiar with Colab, so I can't really say.

    • @Datou1977 • 27 days ago

      I ran into zho yesterday; he uses Colab and usually runs it on an iPad.

  • @user-sb5rt5bu3r • 25 days ago

    I installed Ollama but it can't find a model. What should I do? It reports ollama._types.RequestError: must provide a model

    • @Datou1977 • 24 days ago

      Windows? Did the model install successfully? Can you chat with the model from the command line?
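
      One way to run those checks, assuming the llava vision model from the Ollama library is the one the workflow expects:

        ollama pull llava
        ollama list
        ollama run llava "say hi"

      If ollama list shows no models, the node has nothing to select, and an empty model name is the usual reason for the "must provide a model" error.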

  • @user-bi3cd9kd4z • 26 days ago

    My GPU is an AMD RX 580. Does that mean I'm out of luck?

    • @Datou1977 • 5 days ago

      8 GB of VRAM, that might be cutting it close.

  • @ziger96 • 27 days ago

    I followed the steps, but these problems came up. How do I fix them?
    Error occurred when executing OllamaVision:
    must provide a model
    File "J:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "J:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "J:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "J:\ComfyUI\custom_nodes\ComfyUi-Ollama-YN\CompfyuiOllama.py", line 81, in ollama_vision
    response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)
    File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 124, in generate
    raise RequestError('must provide a model')

    • @Datou1977 • 27 days ago

      "must provide a model": is the vision model not selected?

    • @ziger96 • 26 days ago

      @@Datou1977
      Before, I hadn't selected a model. I've selected it now, but a new problem appeared. I've watched the video many times and still can't figure out where it goes wrong, so I'm bothering you again.
      Error occurred when executing OllamaVision:
      File "J:\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
      File "J:\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "J:\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "J:\ComfyUI\custom_nodes\ComfyUi-Ollama-YN\CompfyuiOllama.py", line 81, in ollama_vision
      response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)
      File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 126, in generate
      return self._request_stream(
      File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 97, in _request_stream
      return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json()
      File "J:\ComfyUI\.venv\lib\site-packages\ollama\_client.py", line 73, in _request
      raise ResponseError(e.response.text, e.response.status_code) from None
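
      The raise ResponseError line means the Ollama server itself returned an error rather than the client failing locally; a common cause is that the model chosen in the node is not actually present on the server, or the node is pointed at the wrong address. A sketch of the checks, assuming Ollama's default address of http://127.0.0.1:11434:

        ollama list
        curl http://127.0.0.1:11434/api/tags
        ollama pull llava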

  • @allenlu223 • 25 days ago

    0.0 seconds (IMPORT FAILED): L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI
    0.0 seconds (IMPORT FAILED): L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_InstantID
    These two just won't install. ComfyUI_InstantID keeps failing because insightface won't install. The Manager shows:
    Native InstantID support for ComfyUI.
    This extension differs from the many already available as it doesn't use diffusers but instead implements InstantID natively and it fully integrates with ComfyUI.
    Please note this still could be considered beta stage, looking forward to your feedback.

    • @Datou1977 • 5 days ago

      ruclips.net/video/96GNRCq4wxw/видео.html Start watching from this point.
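
      For the portable build in this comment, Python packages have to go into ComfyUI's embedded interpreter rather than the system Python; a sketch, assuming the standard python_embeded folder of ComfyUI_windows_portable and that the Visual Studio C++ Build Tools are installed so insightface can compile:

        cd /d L:\ComfyUI_windows_portable
        python_embeded\python.exe -m pip install insightface onnxruntime-gpu

      onnxruntime-gpu is the runtime insightface typically uses on an NVIDIA card; after installing, restart ComfyUI and check the startup log to see whether the two IMPORT FAILED lines are gone.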

  • @user-uc6yo4qk7k • 29 days ago

    With an AMD card, I guess I can only watch from the sidelines.

    • @Datou1977 • 29 days ago

      AMD cards can run it.

    • @user-uc6yo4qk7k • 29 days ago

      @@Datou1977 Last time MJ also said an NVIDIA card is best. My poor 5700. Thanks, OP, I'll keep going with the install.

    • @Datou1977 • 27 days ago

      @@user-uc6yo4qk7k AMD can run it; I just don't have one, so I can't demonstrate.
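
      One option for AMD on Windows, per ComfyUI's README, is the DirectML backend; a sketch, run inside the ComfyUI virtual environment:

        pip install torch-directml
        python main.py --directml

      DirectML is generally slower than CUDA and is not demonstrated in the video, so treat this as an untested pointer.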

  • @zshyzeng4430 • A month ago

    Boss, it says the following nodes are missing. What should I do?
    MuseVPredictor V1 (comfyui_musev_evolved)
    MuseVImg2Vid V1 (comfyui_musev_evolved)

    • @Datou1977 • 29 days ago

      I didn't use those nodes at all 🤨