I've been following your AI large-model video series lately. Even though I only half understand it at first, after watching each video a few times I can follow along and learn the steps.
Thank you for taking the trouble to record these videos.
Glad I could help. If you have any questions, you can leave me a message on Bilibili or GitHub; I don't log in to YouTube that often. And if there's any topic you'd like to see covered, let me know as well.
For Q&A I used Llama 3.1 8B, for embeddings I used nomic from LM Studio, and the input text was the GraphRAG paper copy-pasted into a .txt file.
The result: it couldn't even answer what the main topic was, just wildly off-topic answers. Looks like doing this with open-source models is a non-starter; they're just large-scale garbage.
But thanks to the creator for sharing anyway.
File "/anaconda3/envs/grag/lib/python3.10/site-packages/graphrag/llm/openai/openai_embeddings_llm.py", line 17, in <module>
import ollama
ModuleNotFoundError: No module named 'ollama'
How should I deal with this error?
I installed ollama on Windows 11. I have two A6000s connected via NVLink, plus one 4090. When running a 72b model, Task Manager shows the 4090 at about 23% utilization and one of the A6000s using 41 GB of VRAM. Is this normal? Also, can ollama automatically use the full performance of all the GPUs? Thank you.
Normally our group's clusters use machines of the same model, e.g. all 4090s or all A6000s; mixing device models can cause load-imbalance problems. Since I've never used an NVLink connection myself, I can't give a definitive answer on that part. Ollama won't automatically use the full performance of all your GPUs, and ollama generally isn't built for high concurrency either; if you need that, consider vLLM.
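For reference, if you do go the vLLM route, splitting one large model across multiple GPUs is done with tensor parallelism. A command sketch (the model name is a placeholder, and this assumes two identical GPUs; check the vLLM docs for your version's exact CLI):

```
# Serve one model split across 2 GPUs via tensor parallelism (sketch):
vllm serve <your-72b-model> --tensor-parallel-size 2
```

Tensor parallelism expects GPUs with matching memory and compute, which is another reason mixed 4090/A6000 setups are awkward.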
What quantization are you running the 72b model at? At Q4, a single A6000 should be enough to run it.
For running graphrag locally, are the hardware requirements just those of the model you chose, as in your setup? Are there any additional hardware requirements?
Yes, it mainly comes down to the performance needed to run the model locally with ollama.
I tried but got the following errors:
⠸ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (1 filtered) ━ 100% … 0…
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
└── create_base_entity_graph
❌ Errors occurred during the pipeline run, see logs for more details.
The error in the logs are:
File "c:\Python311\Lib\site-packages\pandas\core\indexers\utils.py", line 390, in check_key_length
raise ValueError("Columns must be same length as key")
ValueError: Columns must be same length as key
01:43:25,416 graphrag.index.reporting.file_workflow_callbacks INFO Error executing verb "cluster_graph" in create_base_entity_graph: Columns must be same length as key details=None
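The pandas exception in that log can be reproduced in isolation. In graphrag it typically means the clustering step produced output with fewer columns than the pipeline tries to assign (often because the extraction step upstream returned empty or malformed results). A minimal sketch with made-up column names, not the actual graphrag internals:

```python
import pandas as pd

# Assigning a one-column frame to two target columns fails pandas'
# key-length check, producing the exact error from the graphrag log.
df = pd.DataFrame({"entity": ["A", "B"]})
try:
    df[["level", "cluster"]] = df[["entity"]]  # 2 keys, 1 column of values
except ValueError as e:
    error_message = str(e)

print(error_message)  # Columns must be same length as key
```

So the fix is usually upstream: make sure the model actually returns well-formed entity/graph output, e.g. by increasing the context size as discussed below.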
hi minzhang, you may need to check the context size in your ollama model settings. If you use the original ollama model (e.g. "ollama pull llama3"), it can cause this problem, so I suggest registering a model from a Modelfile, as I showed in the video, which lets you set a longer context size. BTW, don't forget to change the model name in the embedding.py file if you don't use the same model as I did.
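For reference, a Modelfile along those lines might look like this (the base model and context size are examples, adjust to your setup; `num_ctx` is the Ollama parameter that controls context length):

```
# Modelfile -- FROM and num_ctx are example values
FROM llama3
PARAMETER num_ctx 32768
```

Register it with `ollama create llama3-32k -f Modelfile`, then point graphrag's model settings at the new name.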
@echonoshy Using "ollama show" and "ollama create" to update the llama3 model and running graphrag again. Will let you know if it fixes the issue.
File "/anaconda3/envs/grag/lib/python3.10/site-packages/graphrag/llm/openai/openai_embeddings_llm.py", line 17, in
import ollama
ModuleNotFoundError: No module named 'ollama',请问这个问题该怎么处理呢
Running "pip install ollama" will install the Python ollama package.
@minzhang877 But I had already installed ollama before.
@志强陈-x1r You can verify whether you installed it with the command "pip list", or go to your Python installation directory and check whether you have a lib/site-packages/ollama subdirectory.
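A quick way to check from Python itself which interpreter you are on and whether `ollama` is visible to it (a common pitfall is that pip installed the package into a different environment than the one running graphrag):

```python
import importlib.util
import sys

def is_installed(package: str) -> bool:
    """Return True if `package` is importable by THIS interpreter."""
    return importlib.util.find_spec(package) is not None

print(sys.executable)          # the env that "pip install ollama" must target
print(is_installed("ollama"))  # False -> not installed in this environment
```

If this prints a different path than the environment where you ran pip, install the package with that interpreter's pip, e.g. `/path/to/python -m pip install ollama`.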
I would suggest using Python 3.11 if you want to deploy the local RAG with open-webui/pipelines, because open-webui only supports 3.11.
@minzhang877 OK, thanks, I'll give it a try.