How to Install and Run LocalGPT on Your System (macOS) | 100% Offline ChatGPT on Your System

  • Published: 22 Jan 2025

Comments • 20

  • @Pluto-x1t · 1 year ago +2

    Thanks for this wonderful tutorial

  • @VitalyLifanov · 9 months ago

    Hey, thank you for the video!
    Please, could you assist? On the final step, executing the command: python run_localGPT.py --device_type mps
    I get this error: illegal hardware instruction python run_localGPT.py --device_type mps
    Could you share an idea of what might have gone wrong?
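    A likely cause of this crash (an assumption on my part, not confirmed in the thread) is an x86_64 build of Python running under Rosetta, which loads native wheels compiled for the wrong CPU. A quick check:

    ```shell
    # On Apple Silicon hardware this should print arm64:
    uname -m
    # Check what architecture the Python interpreter itself reports:
    python3 -c 'import platform; print(platform.machine())'
    # If the second command prints x86_64, recreate the conda environment
    # with a native arm64 Python before retrying --device_type mps.
    ```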

  • @nakamoto-u7y · 10 months ago

    Can you ingest data more than once? Are duplicates detected and eliminated?

  • @nakamoto-u7y · 10 months ago +1

    Collecting onnx (from unstructured[pdf]->-r requirements.txt (line 17))
    Using cached onnx-1.15.0.tar.gz (12.3 MB)
    Installing build dependencies ... done
    Getting requirements to build wheel ... error
    error: subprocess-exited-with-error
    What's up with this?

    • @nakamoto-u7y · 10 months ago +1

      I had to pip install cmake, then it ran.
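      The fix the commenter describes can be sketched as follows (assuming the LocalGPT conda environment is active; the requirements.txt path comes from the repo checkout):

      ```shell
      # onnx is being built from source here, and its build requires CMake.
      # Installing CMake into the active environment first lets the wheel build:
      pip install cmake
      # Then re-run the install that originally failed:
      pip install -r requirements.txt
      ```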

  • @snowstormnight4497 · 7 months ago

    Hi there, I have a little problem with step 6: after I try to run run_localGPT, it says I don't have access to the model. I tried every model and none of them works. If you can provide any additional information on that, it would be great. P.S. I have given my Hugging Face access token to it.

    • @SimplifyAI4you · 7 months ago

      Thanks for sharing this. We are coming out with a new video on LocalGPT at the end of this month. You can hit the bell 🔔 icon to get the notification.

  • @fabriziocasula · 1 year ago

    My GUI app doesn't work:
    Internal Server Error
    The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

  • @nakamoto-u7y · 10 months ago

    On an M1 Pro, I followed your instructions to a T. You forgot to explain how to install full Anaconda.
    On each question I get this warning, and the response takes forever:
    /opt/anaconda3/envs/localGPT/lib/python3.10/site-packages/InstructorEmbedding/instructor.py:278: UserWarning: MPS: no support for int64 for sum_out_mps, downcasting to a smaller data type (int32/float32). Native support for int64 has been added in macOS 13.3. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:155.)
    assert torch.sum(attention_mask[local_idx]).item() >= context_masks[local_idx].item(),\

  • @wojpaw5362 · 1 year ago

    Great vid. Do you have a version for Windows?

    • @SimplifyAI4you · 1 year ago

      Yes, you can check out our previous video on our YouTube channel.

  • @fabriziocasula · 1 year ago

    If I click on search

  • @nakamoto-u7y · 10 months ago +1

    Who has had success with this? M1 Mac user here.

  • @nakamoto-u7y · 10 months ago

    FYI: "ollama run mistral" is much faster and is an un-woke LLM, but adding private docs is harder. This is OK, but still not recommended for use.
    Tell me why I am wrong.