Thanks for this wonderful tutorial
You're welcome 😊
Great video
Hey, thank you for the video!
Please, could you assist? On the final step, executing the command: python run_localGPT.py --device_type mps
I see this error: illegal hardware instruction python run_localGPT.py --device_type mps
Could you share an idea of what might have gone wrong?
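One common cause of this (an assumption, not confirmed in the thread) is that an x86_64 Python — for example an Intel build of Anaconda running under Rosetta on Apple Silicon — loads a native ARM library, or vice versa. A quick check of which architecture your interpreter actually reports:

```python
import platform

# "illegal hardware instruction" on Apple Silicon often points to an
# architecture mismatch. Print what this Python binary was built for:
arch = platform.machine()
print(arch)  # expect "arm64" for a native Apple Silicon Python; "x86_64" suggests Rosetta
```

If it prints "x86_64" on an M1/M2 Mac, reinstalling the ARM-native (Apple Silicon) build of Anaconda/Miniconda and recreating the environment may resolve it.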
Can you ingest data more than once? Are duplicates detected and eliminated?
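As far as I can tell the ingest script doesn't advertise built-in deduplication (an assumption — check the repo), but you can skip byte-identical files yourself before re-ingesting. A minimal sketch using a content hash (the `dedupe_files` helper is hypothetical, not part of LocalGPT):

```python
import hashlib
from pathlib import Path

def dedupe_files(paths):
    """Return paths whose file contents have not been seen before (SHA-256)."""
    seen = set()
    unique = []
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique
```

Run this over your source-documents folder and pass only the surviving paths to the ingest step; files that differ by even one byte still count as distinct.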
Collecting onnx (from unstructured[pdf]->-r requirements.txt (line 17))
Using cached onnx-1.15.0.tar.gz (12.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
What's up with this?
I had to pip install cmake first, then it ran.
Hi there, I have a little problem with step 6: after I try to run run_localGPT, it says I don't have access to the model. I tried every model and none of them works. If you can provide any additional information, that would be great. P.S. I have given it my Hugging Face access token.
Thanks for sharing this, we are coming out with a new video on LocalGPT at the end of this month. You can hit the bell 🔔 icon to get notified.
My GUI app doesn't work:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
Please share your machine configuration.
On an M1 Pro, I followed your instructions to a T. You forgot to explain how to install the full Anaconda.
For each question I get this, and the response takes forever:
/opt/anaconda3/envs/localGPT/lib/python3.10/site-packages/InstructorEmbedding/instructor.py:278: UserWarning: MPS: no support for int64 for sum_out_mps, downcasting to a smaller data type (int32/float32). Native support for int64 has been added in macOS 13.3. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:155.)
assert torch.sum(attention_mask[local_idx]).item() >= context_masks[local_idx].item(),\
Great vid, do you have a version for Windows?
Yes, you can check out our previous video on our YouTube channel.
If I click on search
Please share your system hardware details.
It works now, thanks @SimplifyAI4you
Who has had success with this? M1 Mac user here.
FYI - "ollama run mistral" is much faster and is an un-woke LLM, but adding private docs is harder. This is OK, but I still wouldn't recommend it for use.
Tell me why I am wrong.