PrivateGPT 4.0 Windows Install Guide (Chat to Docs) Ollama & Mistral LLM Support!

  • Published: 20 Aug 2024

Comments • 244

  • @Offsuit72
    @Offsuit72 4 months ago +12

    I cannot thank you enough. I'd been struggling for several days on this; it turns out I was using outdated info and half-installing the wrong versions of things. You made things so clear and I'm thrilled to be successful in this!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      You're welcome! I am glad to hear the video assisted! Thanks so much for reaching out.

  • @bananacomputer9351
    @bananacomputer9351 4 months ago +3

    After two hours of research I started over with your tutorial and finished in 10 minutes. Thank you, thank you!!!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Glad it helped! Thanks for the feedback.

    • @RobertoDiaz-ry1pq
      @RobertoDiaz-ry1pq 1 month ago

      How did you do it? I've been stuck for 2 days now watching this same video.

  • @OmerAbdalla
    @OmerAbdalla 3 months ago +2

    This is a great installation guide. Precise and clear steps. I made one mistake when I tried to set up the environment variable in the Anaconda Command Prompt instead of the PowerShell prompt, and once I fixed my mistake I was able to complete the configuration successfully. Thank you very much.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      You're welcome! Thanks for reaching out. Glad the video helped.

  • @radudamian3473
    @radudamian3473 4 months ago +6

    Thank you. Liked and subscribed. I most appreciate your patience in giving step-by-step, easy-to-understand-and-follow instructions. Helped me, a total noob... so hats off.

  • @likanella
    @likanella 2 months ago +1

    Thank you, thank you so much. There were no detailed instructions anywhere. Everything worked out! You're great!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      You're welcome! Glad to hear you are up and running. Thanks for the feedback!

  • @christopherpenny6216
    @christopherpenny6216 2 months ago +1

    Thank you sir. This is incredibly clear. Many others make assumptions about what I know already - you covered everything. Great guide!

  • @curtisdevault6427
    @curtisdevault6427 2 months ago +1

    Thank you for this! I've been struggling with this for a few days now, you provided up to date and clear instructions that made it super simple!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Great to hear! Glad you are up and running. Thanks for the feedback.

  • @shailmatrix
    @shailmatrix 1 month ago +1

    Thanks for creating a clear and concise video to understand the process of running Private GPT.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Glad it was helpful! Thanks for the feedback, much appreciated.

  • @ScubaLife4Me
    @ScubaLife4Me 2 months ago +1

    Thank you for taking the time to make this video, it was just what I was looking for. 😎

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Glad it was helpful! Thanks for taking the time to reach out.

  • @chahrah.5209
    @chahrah.5209 3 months ago +1

    Huge thanks for the video, AND for taking the time to help solve problems in the comments; it was just as helpful. Definitely subscribing.

  • @makin1408
    @makin1408 1 month ago +1

    Thank you so much! Finally got it working after trying a bunch of tutorials. Yours really did the trick, super helpful!

  • @JiuJitsuTech
    @JiuJitsuTech 3 months ago +1

    Thank you for this vid! I watched several others and this was the most straightforward approach. Super helpful!!

  • @Abhiram00
    @Abhiram00 1 month ago +1

    It worked like a charm. Thank you so much!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      You're welcome! Thanks for the feedback. Glad you are up and running.

  • @DeTruthful
    @DeTruthful 3 months ago +1

    Thanks man, I did a few other tutorials and couldn't figure it out. This made it so simple. Subscribed!

  • @RyanHokie
    @RyanHokie 3 months ago +1

    Thank you for your detailed tutorial.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      You’re welcome 😊 Glad the video assisted. Thank you so much for the feedback.

  • @firewithcode
    @firewithcode 1 month ago +1

    Thank you very much. It is working for me.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      You're welcome! Glad the video assisted and you are up and running. Thanks for reaching out.

    • @firewithcode
      @firewithcode 1 month ago

      @@stuffaboutstuff4045 Could you please also make a video about how to add GPU support in PrivateGPT?

  • @ilieschamkar6767
    @ilieschamkar6767 3 months ago +1

    It worked like a charm, thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Great to hear! Thanks for the feedback, much appreciated.

  • @Lucas-iv6ld
    @Lucas-iv6ld 3 months ago +1

    It worked, thanks!

  • @Matthew-Peterson
    @Matthew-Peterson 4 months ago +1

    Brilliant Guide. Subscribed.

  • @PAPAGEORGIOUKONSTANTINOS
    @PAPAGEORGIOUKONSTANTINOS 1 day ago

    REALLY THANK YOU. It's the first tutorial I've used and everything worked like a charm. Can you make another tutorial, in a Windows environment, where I could make the usage public? Like having an actual URL to use it on my WordPress webpage? (running from the PC I have set up in my house)

  • @aysberg9403
    @aysberg9403 4 months ago +1

    Excellent explanation, thank you very much.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Pleasure! Glad the video assisted. Thanks for the feedback!

  • @feliphefaleiros9540
    @feliphefaleiros9540 2 months ago +1

    Very well explained, thank you for the videos. In every version you covered, you showed it step by step. You're awesome.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Thanks for the feedback. Glad you are up and running and the video assisted.

  • @rummankhan5499
    @rummankhan5499 2 months ago +1

    Awesome! Best tutorial ever... Can you please make a video on web deployment/upload of local/PrivateGPT without OpenAI (if that's doable)?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, thank you for the feedback! Noted on the video idea. Glad you are up and running.

  • @erxvlog
    @erxvlog 2 months ago +1

    This was excellent. One issue that did come up was uploading PDFs... there was an error related to "nomic". I signed up for nomic and installed it. PDFs seem to be working now.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Thanks for reaching out. Glad to hear you are up and running.

  • @cinchstik
    @cinchstik 4 months ago

    Got it to run on VirtualBox. Works great! Thanks.

  • @nunomlucio5789
    @nunomlucio5789 4 months ago +1

    In terms of speed, I feel that the previous version (using CUDA and so on) is way faster than this one using Ollama, both in answering and in loading documents.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago +1

      Agreed, it just increases the build difficulty a bit. 👨‍💻 Thanks for reaching out.

  • @creamonmynutella2476
    @creamonmynutella2476 3 months ago +2

    Is there a way to make this automatically start when the system is powered on?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Sure, it's possible with PowerShell scripts. Let me check it out and revert.
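A minimal sketch of such a startup script, assuming the environment name (privategpt) and project folder (C:\pgpt\private-gpt) used in the video; the script name and the Task Scheduler registration are hypothetical:

```shell
# start-privategpt.ps1 -- hypothetical PowerShell startup script; register it
# with Windows Task Scheduler to run at logon. Assumes the "privategpt" conda
# environment and the C:\pgpt\private-gpt folder from the video.
conda activate privategpt         # re-activate the environment
Set-Location C:\pgpt\private-gpt  # go to the project folder
$env:PGPT_PROFILES = "ollama"     # select the Ollama profile
make run                          # start the PrivateGPT server
```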

  • @fishingbeard2124
    @fishingbeard2124 3 месяца назад

    Can I suggest that next time you make a video like this you enlarge the window with the commands. 75% of your window in blank and the important text is small so I think it would be helpful to have less blank space and larger text. Thanks

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Thanks for the input; agreed, I started to zoom in on the command prompts in newer vids. Thanks for reaching out and I hope the video helped.

  • @bobsteave1236
    @bobsteave1236 1 month ago +1

    Wow, got it working with both your 2.0 and 4.0 guides for this. Thank you forever! But now I want to change the model that is in the UI. How do I do this? The whole reason I did this was to have other models with Ollama, and the guide doesn't show this.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Hi, glad you are up and running. You can change the Ollama model. Download and install the model you want to use and change your config files. Check the Ollama example shown on the below link.
      docs.privategpt.dev/manual/advanced-setup/llm-backends
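As a rough sketch of that model swap (llama3 is just an example model name; the exact YAML keys should be confirmed against the linked docs page):

```shell
# Pull the replacement model into Ollama (llama3 used as an example)
ollama pull llama3
# Then edit settings-ollama.yaml in the PrivateGPT folder so the ollama
# section points at the new model, e.g. (key names per the linked docs):
#   ollama:
#     llm_model: llama3
# Finally restart PrivateGPT so the new config is picked up
make run
```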

  • @Blazerboyk9
    @Blazerboyk9 4 days ago

    Verified and ensured the path for Poetry, but when running the Anaconda prompt I get: 'poetry' is not recognized as an internal or external command, operable program or batch file.

  • @rchatterjee48
    @rchatterjee48 2 months ago +1

    Thank you very much, it works!

  • @maxxxxam00
    @maxxxxam00 4 months ago +1

    Excellent video, very clear step-by-step guides. Do you have, or could you make, a docker compose file that does all the steps in a Docker environment?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, thank you so much for the feedback, let me look into it and I will revert soon! Thanks.

  • @ballmain5623
    @ballmain5623 1 month ago +1

    This is great. Can you please also do a mac version?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Hi, thanks, will do. I am looking at making an updated video on PrivateGPT soon. Will update when I can get it out.

  • @Reality_Check_1984
    @Reality_Check_1984 4 months ago +1

    Looks like they released 0.5.0 today. If you install this now and look at the version it will be 0.5.0. All of your install instructions still work, as it wasn't a fundamental change like the last big update. They added pipeline ingestion, which I hope fixes the slow Ollama ingestion speed, but so far I still think llama is faster.

    • @Reality_Check_1984
      @Reality_Check_1984 4 months ago +1

      So I ran it overnight and Ollama is still not performing well with ingestion. It definitely under-utilizes the hardware for ingestion. Right now a lot of the local LLM stacks don't seem to leverage the hardware well when it comes to ingestion; that is an improvement I would like to see in general, not just for Ollama or PrivateGPT. The ability to ingest faster through better hardware utilization and improved processing, to store ingested files long-term on the drive, and to query the drive and load relevant chunks into VRAM would significantly expand the depth and breadth of what these tools can be used for. VRAM is never going to offer enough, and constantly training models won't work either.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, thanks for the update. Had a bit of a scare with the update available moments after publishing this vid 😊. Thanks for the confirmation; I also checked and the install instructions remain intact. Appreciate the feedback. PS: I totally agree with the performance comment.

  • @Stealthy_Sloth
    @Stealthy_Sloth 3 months ago +1

    Please do one for llama 3.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Thanks for the idea. If you want, you can try to get it up and running yourself. You can install the 8b model if you use Ollama (ollama run llama3:8b). The link below has the example configs that would need to change. Thanks for reaching out and for the feedback, much appreciated. docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @likanella
    @likanella 1 month ago +1

    Hey there! I was wondering if you could help me out with something. I'd love to create a tutorial on adding Nvidia GPU support, but I can't seem to find any clear, helpful guides on the topic. I've tried a few times, but I'm still a bit lost. Would you be able to help me out? Thanks so much!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Hi, with PrivateGPT now offloading the processing to 3rd parties like Ollama, you might want to check out the capabilities of your chosen backend. Maybe have a look at my Ollama video on the channel for some ideas. Thanks for reaching out and for the feedback. Let me know if you come right with this.

  • @dauwswinnen2721
    @dauwswinnen2721 2 months ago +1

    I did everything but installed the wrong model. How can I change models after doing everything?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago +1

      Hi,
      If you are using Ollama you can just update the config file in your PrivateGPT folder to point to the model downloaded in Ollama. Having multiple models in Ollama is fine; I use my Ollama to feed numerous AI frontends with multiple LLMs running.
      Check the link below for the defaults (the default is Mistral 7b).
      On your Ollama box, install the models to be used. The default PrivateGPT settings-ollama.yaml is configured to use the mistral 7b LLM (~4GB) and nomic-embed-text embeddings (~275MB).
      Commands to run in CMD:
      ollama pull mistral
      ollama pull nomic-embed-text
      ollama serve
      docs.privategpt.dev/installation/getting-started/installation

  • @hasancifci1423
    @hasancifci1423 3 months ago +1

    Thanks! Do NOT start with the newest version of Python; it is not supported. If you did, uninstall it. If you have a problem with pipx install poetry, delete the pipx folder.

  • @drmetroyt
    @drmetroyt 2 months ago

    Hope this could be installed as a Docker container.

  • @lherediav
    @lherediav 4 months ago +2

    For some reason Anaconda doesn't recognize the CONDA command on my end and doesn't show (base) at the beginning of the Anaconda prompt. Any solutions? I am stuck at the 7:46 part.

    • @lherediav
      @lherediav 4 months ago

      when i open anaconda prompt shows this: Failed to create temp directory "C:\Users\Neo Samurai\AppData\Local\Temp\conda-\"

    • @thehuskylovers1432
      @thehuskylovers1432 4 months ago

      Same issue here, I cannot get past this in either v2 or this version.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, just checking if you came right with this? When you open your Anaconda Prompt or Anaconda PowerShell Prompt, they must load and show (base). Is this not showing in either one? Did you try opening both in admin mode? It seems there is a problem with the Anaconda install on the machine.

  • @drSchnegger
    @drSchnegger 4 months ago +1

    If I enter a prompt, I get an error: Collection make_this_parameterizable_per_api_call not found
    When I enter another prompt, I get the error:
    'NoneType' object has no attribute 'split'

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago +1

      Hi, from what I can gather you will get this error if you query documents but no documents are loaded. Can you ensure you uploaded documents into PrivateGPT and selected them prior to prompting? Let me know if you come right with this. Thanks for reaching out! If the problem persists, check out these links: github.com/zylon-ai/private-gpt/issues/1334 , github.com/zylon-ai/private-gpt/issues/1566

    • @JiuJitsuTech
      @JiuJitsuTech 3 months ago +2

      From the git issues page, this resolved the issue for me. "This error occurs when using the Query Docs feature with no documents ingested. After the error occurs the first time, switching back to LLM Chat does not resolve the error -- the model needs to be restarted." Enter Ctrl-C in Powershell Prompt to stop the server and of course 'make run' to re-start.

  • @chjpiu
    @chjpiu 4 months ago +1

    Thanks a lot. Please let me know how to change the LLM model in PrivateGPT. For me, the default model is Mistral.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, sorry for the late reply. You can check out this page to change your LLM. Let me know if you came right with this. Thanks for reaching out! 🔗docs.privategpt.dev/manual/advanced-setup/llm-backends🔗

  • @LeapoldButtersStotch
    @LeapoldButtersStotch 1 month ago

    GREAT guide, it worked flawlessly the first time I ran through it! However when I powered my PC back on the second day and tried to start it back up I got lost. How do I start this up again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago +1

      Hi,
      You need to activate the conda environment again, then just follow the last steps in the video again to start up the system. Let me know if you are up and running. Thanks for reaching out.
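Spelled out, that restart sequence looks roughly like this, assuming the environment and folder names used in the video:

```shell
# Run in an admin Anaconda PowerShell prompt after a reboot
conda activate privategpt        # re-activate the conda environment
cd C:\pgpt\private-gpt           # project folder from the video
$env:PGPT_PROFILES = "ollama"    # env vars are not persisted, so set it again
make run                         # start the server, then open the UI in a browser
```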

  • @vaibhavdivakar4653
    @vaibhavdivakar4653 4 months ago +1

    I followed the steps, and for some reason when I do the make run command it gives me "no Module called uvicorn".
    I installed the module using the pip command and it still says the same error..
    :(

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago +2

      Hi, does it not launch at all and stop with this error? It seems it's the webserver that needs to start. When you launch it, I know it can display a uvicorn.error message, but when you open the browser you will see the site up and everything works.
      If you get this, uvicorn.error - Uvicorn running on http : // 0. 0. 0. 0 : 8001, then it works. But from the comment it sounds like you have the whole module missing. PrivateGPT is a complicated build, but the steps in the video are valid. I would suggest retracing the required software and versions (Python etc.) and the setup steps, just to make double sure no steps were missed. I also find more success running the terminals in admin mode to avoid issues. Let me know if you came right with this, and thanks for making contact.

    • @SiddharthShukla987
      @SiddharthShukla987 3 months ago +2

      I also faced the same issue because I forgot to activate the env. Check yours too.

  • @farfaouimohamedamine3288
    @farfaouimohamedamine3288 3 months ago +1

    Hi, thank you for your tutorial. I have followed the steps as you did, but I get this error when I try to install the dependencies of PrivateGPT:
    (privategpt) C:\pgpt\private-gpt>poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
    No Python at '"C:\Program Files\Python312\python.exe'
    NOTE: I did not create the virtual environment inside the system32 directory; I created it in the pgpt directory.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, did you get this resolved? Yes, I also created a pgpt folder in the root of the drive. Just to confirm, are you running Python 3.11.xx? Let me know if you came right with this.

  • @Isak_Isak
    @Isak_Isak 1 month ago

    Great tutorial, but I have a problem when I want to search or query the file I have imported. I get the error "Initial token count exceeds token limit". I have already increased the limit but nothing changed. How can I solve the error?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Hi, are you handing off to OpenAI or Ollama? Have a look at the link below to check if the same applies. Thanks for reaching out.
      github.com/zylon-ai/private-gpt/issues/1701

  • @tarandalinux8323
    @tarandalinux8323 2 months ago

    Thank you for the great video. I'm at 9:48 and the command $env:PGPT_PROFILES="ollama" gives me an error: The filename, directory name, or volume label syntax is incorrect.
    (privategpt) C:\gpt\private-gpt>$env:PGPT_PROFILES="ollama" (I don't get the colors you get)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, can you confirm you are running this in your Anaconda PowerShell terminal, Check the steps I use from about 9:20 in the video. Let me know if you are up and running.
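That error message is what cmd.exe prints when given the PowerShell-only $env: syntax, so the shell type is the usual culprit; the two forms, for comparison:

```shell
# PowerShell (Anaconda PowerShell Prompt) -- the syntax used in the video:
$env:PGPT_PROFILES = "ollama"

# cmd.exe (plain Anaconda Prompt) needs this form instead:
#   set PGPT_PROFILES=ollama
# Typing the $env:... form into cmd.exe yields "The filename, directory name,
# or volume label syntax is incorrect."
```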

  • @anjeel08
    @anjeel08 3 месяца назад

    This is simply superb. I could install it and run it with your clear step by step instructions. Thank you so very much. However I do notice that uploading the documents to be able to chat with my own set of data takes so long time. Is there a way we can tweak this and make uploading the document easier. I am only using 1 word doc of 30 pages with mainly text and one pdf document of size 88 pages with text and images. word doc was uploaded in 10 min but the pdf runs endlessly. Appreciate if you could make a video on how to use Open AI instead of one of the online providers to get speed (When confidentiality is not important). Thank you in advance for your tip.

    • @firatguven6592
      @firatguven6592 3 months ago

      I also wrote a comment complaining about the same issue. I also have version 2.0 from him, and as if that wasn't slow enough at uploading, in version 2.0 the upload was at least considerably faster. During upload I used to have 80% load on my 32-thread CPU, but now in 4.0 the CPU is just idling at 5%, which explains the slower upload; the parsing nodes are generating the embeddings much more slowly. Since I have more than 10000 PDF files, it is unacceptable to wait endlessly during the upload. I have now been waiting 40 minutes for just 2 huge files of around 3000 pages, which took only 20 minutes in total with the old one. I have no idea how long it will take, and we are talking about only 2 files; the other 9998 files will not be uploaded in a year if this isn't solved. I am disappointed to lose time with 4.0.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, thanks for reaching out. The new version allows you to use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most, and it's now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP you can still enable this as a profile. If you really want high-speed processing you can send it to OpenAI or one of the OpenAI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @Quicksilver87878787
    @Quicksilver87878787 2 months ago

    Thanks! Is there any specific reason why you are using Conda as opposed to virtualenv?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, I use Anaconda for most of my AI environments when working on Windows. I find it easy to work with and to install the required software etc. Thanks for reaching out.

  • @pranavmalhotra7635
    @pranavmalhotra7635 3 months ago +1

    ERROR: Could not find a version that satisfies the requirement pipx (from versions: none)
    ERROR: No matching distribution found for pipx
    I am receiving this error and hence I am unable to proceed with the installation. Any tips?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, can you confirm you are installing pipx in a normal admin-mode command prompt, and check that you followed the steps from 6:30 in the video onwards. If it's still not working, can you confirm you have Python 3.11.xx installed with the pip package that ships with it. Let me know if you came right with this. Thanks for reaching out.

  • @msh3601
    @msh3601 13 days ago

    Thanks!! So if we close everything and want to run it again, do we open a PowerShell window in admin mode, activate the conda env, go to the directory, and then try "make run"?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  8 days ago

      Hi, yes that is correct. Just do it all in an admin mode Anaconda PowerShell terminal and it should be fine. Hope you are up and running. Thanks for reaching out!

  • @quandoomniflunkusmoritati9359
    @quandoomniflunkusmoritati9359 1 month ago

    Please address the issue of Hugging Face tokens and login for the install script. I have been all over the net and tried different solutions and script mods, including the Hugging Face CLI, but I have not been able to install a working copy yet (yes, I did accept access to the Mistral repo on Hugging Face too). The Python install script fails on Mistral and on the transformers and tokenizer. It shows a message for a gated repo, but I have authenticated on the CLI and tried passing the token in the scripts. Still failing.... HELP!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  1 month ago

      Hi, are you using Ollama as backend? If not follow the steps in the Ollama video on the channel and just hook up your install to that. Otherwise test this on a hosted LLM like OpenAI. You should not struggle if you follow the steps exactly in the 2 videos. Let me know if you are up and running.

  • @guille8237
    @guille8237 3 months ago +1

    I got it running but I want to change the model to DeepSeek Coder. How do I do it? Never mind.

  • @jcpamart83
    @jcpamart83 1 month ago

    Yesterday I tried to install pgpt with the latest versions, but that was the wrong way. Now I have installed everything with the right versions, but during the installation of Poetry it answers this:
    No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
    No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
    'poetry' already seems to be installed. Not modifying existing installation in 'C:\Users\MYPATH\pipx\venvs\poetry'.
    Pass '--force' to force installation.
    I forced it, but either way Miniconda is the old installation.
    What can I do now?
    Thanks for your help.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  22 days ago

      Hi, apologies, I missed this comment. Did you come right with this? Can you confirm the Python version is 3.11.xx? It has to be in the 3.11 branch. Did you follow all the steps in the video from 3:35 onwards? Let me know if you got past this issue. PS: Remember to add Python to your path when the installer starts. Thanks for reaching out.

  • @Whoisthelearner
    @Whoisthelearner 3 months ago

    Great thanks for the awesome video. I wonder whether you know of a similar setup for the new llama3 LLM? If yes, it would be great if you could make a new video about it!!!! Great thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, sure you can. You can install llama3 on Ollama. You would need to change the config files. The link below should assist until I can update this video. Thanks for the feedback and the video idea.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

    • @Whoisthelearner
      @Whoisthelearner 3 months ago

      @@stuffaboutstuff4045 Great thanks for the prompt reply and the link. Looking forward to your new video as well!! You make it very easy for a beginner like me! Really appreciate your work.

    • @Whoisthelearner
      @Whoisthelearner 3 months ago

      @@stuffaboutstuff4045 If you don't mind, allow me to ask a question: I am planning to adopt the Ollama approach, but I don't know at what part of the video I should run the command PGPT_PROFILES=ollama make run. Great thanks!

  • @JeffreyMerilo
    @JeffreyMerilo 3 months ago +1

    Great video! Thank you so much! Got it to work with version 5. How can we increase the tokens? I get this error:
    File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\chat_engine\context.py", line 204, in stream_chat
        all_messages = prefix_messages + self._memory.get(
    File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\memory\chat_memory_buffer.py", line 109, in get
        raise ValueError("Initial token count exceeds token limit")

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, can you have a look at this post and see if it helps. Let me know if you come right.
      github.com/zylon-ai/private-gpt/issues/1701

  • @SuffeteIfriqi
    @SuffeteIfriqi 2 months ago

    Such a great video, which in my case makes it even more frustrating, because I'm literally stuck at the last step.
    It says:
    make: *** Keine Regel, um "run" zu erstellen. Schluss.
    Which translates to:
    make: *** No rule to make "run". Stop.
    Any idea what this might be caused by? I've restarted the entire process twice, no luck...
    Thank you so much.

    • @SuffeteIfriqi
      @SuffeteIfriqi 2 months ago

      I suspect it might be caused by GNU Make's path, although I did include it in the env variables...

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, did you manage to resolve this issue? Please check from about 9:15 into the video. Are you completing these steps in an admin Anaconda PowerShell with the environment activated and from the correct folder? Let me know if you came right. Thanks.

  • @vichondriasmaquilang4477
    @vichondriasmaquilang4477 2 months ago

    So confused, what is the purpose of installing MS Visual Studio? You didn't use it.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, the Visual Studio components are used in the background for compilation and builds of the programs. Hope the video helped and your PrivateGPT is up and running.

  • @claudioalvesmoura
    @claudioalvesmoura 14 days ago

    First I tried to install with Python 3.11.9, but it did not work. With 3.11.7 it went well...
    At minute 3:33 you check that Ollama is working, but my output is:
    $ ollama serve
    Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
    Is that how it is supposed to be? Thanks in advance..

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  8 days ago

      Hi, just to confirm did you follow all the steps for Ollama using the video on the channel. Regarding the error either your Ollama web server is up and running already or something else is using port 11434. Did you open a browser and go to the address? Anything loading up on that port? Thanks for reaching out, let me know if you are up and running.
      ruclips.net/video/rIw2WkPXFr4/видео.htmlsi=kDCCeTRbC2BoTfl-
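One quick way to check whether an Ollama server is already answering on that port (assuming the default port 11434):

```shell
# If "ollama serve" reports the address is in use, an Ollama server is
# usually already running (e.g. started by the Ollama desktop app).
curl http://localhost:11434
# A running server replies "Ollama is running"; if the connection is
# refused, nothing is listening and "ollama serve" is needed.
```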

  • @RobertoDiaz-ry1pq
    @RobertoDiaz-ry1pq 1 month ago

    I am having so much trouble installing PrivateGPT. I've followed every step 10 times, gone back and started the video over and over, and at this point I give up. I wish I was one of those people in the comments saying thanks; I mean, I'm still thankful that I even got as far as I did.
    Hopefully someone out there could lend a helping hand. I'm still new to this coding and AI world, but I'm so interested in this stuff.
    PS: anyone out there care to help? Comment.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  24 days ago

      Hi, just checking in whether you got PrivateGPT up and running. This is one of the more difficult builds and you need to follow the steps in the video carefully. Let me know if you came right with this; thanks for reaching out.

  • @OscarPremium-ql5hh
    @OscarPremium-ql5hh 2 months ago

    How do I start it up again once I've finished all the steps in the video successfully? Just visit the browser domain again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, you will have to activate your conda environment. Make sure you are in the project folder and launch Anaconda PowerShell again. Check the steps from 9:24 in the video. Let me know if you are up and running.

    • @OscarPremium-ql5hh
      @OscarPremium-ql5hh 2 months ago

      @@stuffaboutstuff4045 Wow, thanks for your answer! Just amazing!

  • @abhiudaychandra
    @abhiudaychandra 3 months ago

    Hi. Thanks for the great video, but uploading even just one document and getting an answer is so slow that I just cannot use it any further. Could you please tell me how to uninstall PrivateGPT? The other applications I can of course uninstall, but is there some command I should enter to remove the files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, yes it's a bit slower against a local LLM, dependent on the GPU available in the machine. Did you try OpenAI or one of the online providers if you want it super fast? If confidentiality is not your main concern, maybe give it a go. To remove it, just uninstall all the software and delete the project folder you built PrivateGPT in, and you should be fine. Thanks for reaching out.

  • @BetterEveryDay947
    @BetterEveryDay947 4 months ago +1

    Can you make a VS Code version?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, thanks for the idea. They release new versions so quickly; I will check how I can incorporate it in the next one. Thanks for reaching out.

  • @andybirder4970
    @andybirder4970 29 days ago

    Is this one working? I checked tons of PrivateGPT videos for my Win 11 and none worked 😢

  • @Xoresproyect
    @Xoresproyect 19 days ago

    I have been trying this tutorial three times now... I finally got to minute 9:50, the command "make run", and get the "The term 'make' is not recognized as the name of a cmdlet, function" error. I have all the variables added as far as I know... Any hints on how to solve this?

    • @Xoresproyect
      @Xoresproyect 19 days ago

      I finally got past this error by skipping the "make run" command and going straight to "poetry run python -m private_gpt"; from there everything went fine!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  9 days ago

      Hi, apologies for the late reply, I was AFK a bit. I am glad you got it up and running. My input on the original comment would have been: I take it you followed the make portion from 5:28? Did you add make to your system variables? From 9:20, make sure to use an admin-mode PowerShell to set the env and launch with make run. 😉
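If make still isn't recognized, two workarounds worth sketching (the install path below is only an example; adjust it to wherever your make.exe actually landed):

```powershell
# Add GNU Make's folder to PATH for the current session, then retry
$env:Path += ";C:\Program Files (x86)\GnuWin32\bin"   # example location, adjust
make --version

# Or bypass make entirely: the Makefile's "run" target just executes
poetry run python -m private_gpt
```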

  • @shashankshukla6691
    @shashankshukla6691 4 months ago

    Thank you, but how can we make use of an NVIDIA GPU if we have one on our device? I have an NVIDIA T600.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, if you build with Ollama it will offload to the GPU automatically (NVIDIA or AMD). From what I have seen, it does not hammer it to its full potential; utilization will get better with each evolution of the project. Let me know if you got the GPU to kick in when offloading.
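A rough way to confirm the offload is actually happening (assumes an NVIDIA card with the standard driver tools installed):

```powershell
# Watch GPU memory and utilization while you send a chat request;
# an ollama process with VRAM allocated means the model is on the GPU
nvidia-smi -l 2   # refresh every 2 seconds, Ctrl+C to stop
```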

  • @matteosalvatori2941
    @matteosalvatori2941 12 days ago

    Hi, I'll state upfront that I am not an expert. I followed all the steps and created my private page, but the box to type in does not appear. I tried to upload documents, but it reports connection problems. I restarted the PC and now the page 127.0.0.1:8001 does not open at all; it gives me connection problems. Can you help me?

    • @matteosalvatori2941
      @matteosalvatori2941 12 days ago

      Hello, I figured out how to get back into my personal page by opening Anaconda PowerShell and repeating the steps in the video. I am left with the problem that I cannot communicate with the AI, even without uploading documents. I don't see the prompt box. Help!

    • @matteosalvatori2941
      @matteosalvatori2941 11 days ago +1

      So, I fixed the problem with the dialog box. The problem was Chrome; using Edge the dialog box appears. The problem remains that if I try to chat with the AI the waiting time is very long and then I get a connection error.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  8 days ago

      Hi, glad to hear you got the system up and running. To offload to Ollama you need a reasonable GPU to handle the processing. If resources are an issue, then maybe offload to OpenAI or Groq, but then money becomes a problem 😉. Glad to hear you can use the system. Thanks for reaching out.

  • @JanaFourie-cm5eh
    @JanaFourie-cm5eh 3 months ago

    Hi, when querying files, only the sources appear after it stops running (file ingestion seems to work fine). How can I fix this? Or is it still running but extremely slowly...?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, did you come right with this? There are some good comments on this video about speeding up the install, including working with large docs that slow the system down. Check the link below; maybe it can assist. Also check the terminal when this happens for any hints on what might be hanging.
      docs.privategpt.dev/manual/document-management/ingestion#ingestion-speed

    • @JanaFourie-cm5eh
      @JanaFourie-cm5eh 2 months ago

      @@stuffaboutstuff4045 Thanks! How can I contact you? I noted you are South African from the accent!

  • @workmail6406
    @workmail6406 3 months ago

    Hello, I have managed to follow the instructions up until 9:50 for running the environment with make run. However, when I initiate the command in an administrator Anaconda PowerShell after navigating to my private-gpt folder, I encounter the error "The term 'make' is not recognized as the name of a cmdlet, function". I have no idea how I can get Anaconda PowerShell to recognize the command on my Windows PC. What can I do to finally start the PrivateGPT server?

    • @workmail6406
      @workmail6406 3 months ago

      Now that I installed Git Bash from the Make for Windows website, it works. However, I now run into this error when running make run:
      Traceback (most recent call last):
      File "<frozen runpy>", line 198, in _run_module_as_main
      File "<frozen runpy>", line 88, in _run_code
      File "C:\pgpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
      from private_gpt.main import app
      File "C:\pgpt\private-gpt\private_gpt\main.py", line 4, in <module>
      from private_gpt.launcher import create_app
      File "C:\pgpt\private-gpt\private_gpt\launcher.py", line 12, in <module>
      from private_gpt.server.chat.chat_router import chat_router
      File "C:\pgpt\private-gpt\private_gpt\server\chat\chat_router.py", line 7, in <module>
      from private_gpt.open_ai.openai_models import (
      File "C:\pgpt\private-gpt\private_gpt\open_ai\openai_models.py", line 9, in <module>
      from private_gpt.server.chunks.chunks_service import Chunk
      File "C:\pgpt\private-gpt\private_gpt\server\chunks\chunks_service.py", line 10, in <module>
      from private_gpt.components.llm.llm_component import LLMComponent
      File "C:\pgpt\private-gpt\private_gpt\components\llm\llm_component.py", line 9, in <module>
      from transformers import AutoTokenizer # type: ignore
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py", line 26, in <module>
      from . import dependency_versions_check
      ImportError: cannot import name 'dependency_versions_check' from partially initialized module 'transformers' (most likely due to a circular import) (C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py)
      make: *** [run] Error 1
      Any idea how I can resolve this?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, can you confirm you loaded all the required software, including all the Make steps I perform from 3:35 into the video? Let me know if you were able to resolve this. Also confirm you are running everything in the same terminals, and in admin mode where needed. Make sure you use a Python within 3.11.xx in your Anaconda environment.

  • @user-bg7zh7ub2h
    @user-bg7zh7ub2h 3 months ago

    I have a question: how do I run it again if my system restarts? What steps or commands do I have to run again? Can we set it to autostart when my system starts?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, you can just run the Anaconda PowerShell prompt again and activate the environment you created. Make sure you are in the project folder. Set the env variable you want to use and execute make run. Check the steps performed in the Anaconda PowerShell from 9:24 in the video. Let me know if you are up and running. Thanks for reaching out.

  • @mohith-qm9vf
    @mohith-qm9vf 3 months ago

    Hi, will this installation work for Ubuntu? If not, what changes do I need to make? Thanks a lot.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, just checking if you got this built on Ubuntu. If not, you can follow the steps for Linux using the link below. Thanks for reaching out.
      docs.privategpt.dev/installation/getting-started/installation

    • @mohith-qm9vf
      @mohith-qm9vf 3 months ago

      @@stuffaboutstuff4045 Thanks a lot!!

  • @cookiedufour
    @cookiedufour 3 months ago +1

    After executing "make run", I run into some problems:
    --- Logging error ---
    Traceback (most recent call last):
    File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ~~~~~~~~~~~~~^^^^^
    KeyError:
    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ...and there is still a lot more. I followed every instruction carefully, so I don't know where the problem comes from... please help.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, can you confirm that when you set the environment variable you are doing it in an Anaconda PowerShell prompt in admin mode? Make sure your environment is activated in the PowerShell terminal; these are the steps from 9:17 into the video. Let me know if you come right. Thanks for reaching out.

    • @cookiedufour
      @cookiedufour 3 months ago

      @@stuffaboutstuff4045 Yes, I opened a new Anaconda PowerShell prompt and ran it as admin. I am thinking of starting everything over again... What would you advise me to do? Uninstall everything and refollow the steps of your video? Thanks for your answer!

  • @FunkyZangel
    @FunkyZangel 3 months ago

    Can I do this completely offline? I have a computer that has no access to the internet. I want to see if I can download everything onto a USB drive and then transfer it over to that computer. Can anyone help me, please?

    • @Whoisthelearner
      @Whoisthelearner 3 months ago

      I think you can once you have everything installed; at least that works for me.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, as noted below, that's correct. Once installed, you can disconnect the machine if you have the LLM locally. Let me know if you come right.

    • @FunkyZangel
      @FunkyZangel 2 months ago

      @@stuffaboutstuff4045 Hi, thanks for the reply. I am struggling a little to understand this. Do I have to download a portable version of everything or just a portable VSC? Meaning, if I want PrivateGPT to work on another machine from the thumb drive, do I just need to transfer the VSC files, or must I transfer everything, such as Git, Anaconda, Python etc.?

  • @The_Gamer_Boi_2000
    @The_Gamer_Boi_2000 3 months ago

    Whenever I try to install Poetry with pipx, it gives me this error: "returned non-zero exit status 1."

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, just checking in if you resolved this issue? Just to confirm: are you following the steps I use in the video to install Poetry? Please check from 6:28 in the video; I use a command prompt in admin mode to complete all those steps. From 7:36 we are back in the Anaconda and Anaconda PowerShell prompts. Also confirm you are using Python 3.11.xx for the Anaconda environment, otherwise you will get a bunch of build errors and failures. Let me know, and thanks for reaching out.

    • @The_Gamer_Boi_2000
      @The_Gamer_Boi_2000 3 months ago

      @@stuffaboutstuff4045 I'm pretty sure I was doing those steps, but I'm using a web UI now instead because it's easier.

  • @alicelik77
    @alicelik77 4 months ago

    At 9:21 you opened a new Anaconda PowerShell prompt. Why did you need a new PowerShell prompt when you were already working in a prompt?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago +1

      Hi, look carefully: I am in a normal Anaconda prompt at that stage, and the next commands need to go into Anaconda PowerShell. 👨‍💻 Thanks for reaching out, hope the video helped.

  • @user-jw1mz4et1e
    @user-jw1mz4et1e 4 months ago

    I installed it and it works, but it is very, very slow to answer. Is it possible to speed it up?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, it is not the fastest with Ollama; the upside is that it's relatively easy to get working. Should confidentiality not be an issue, using the OpenAI profile will increase speed exponentially. You could also build this fully local if you have a proper GPU, but expect a more complicated install. Thanks for reaching out.

  • @travisswiger9213
    @travisswiger9213 4 months ago

    How do I restart this? I've got it running a few times, but if I restart I have a hell of a time getting it working again. Can I make a bat file somehow?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago +1

      Hi, when you launch it in the Anaconda PowerShell prompt, just go back to that terminal when done and press Ctrl+C; this will shut it down. You can save the starting commands as a PowerShell script, or as a bat file if you use cmd. Thanks for making contact; let me know if you came right with this.
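A minimal sketch of such a start script (the file name start-privategpt.ps1 is hypothetical; env name and paths assume the video's setup, so adjust to yours, and run it from an admin Anaconda PowerShell prompt):

```powershell
# start-privategpt.ps1 -- relaunch PrivateGPT after a reboot
conda activate privategpt            # the environment created in the video
Set-Location C:\pgpt\private-gpt     # the project folder
$env:PGPT_PROFILES = "ollama"        # the profile you built with
make run                             # serves the UI on http://localhost:8001
```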

  • @user-vr5lg6mv2i
    @user-vr5lg6mv2i 3 months ago

    Will Python 3.12 do the job, or do I specifically need 3.11?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, you need Python 3.11.xx. The code currently checks that the installed Python version is in that range; I got build errors with 3.12 installed in the environment. Let me know if you are up and running.
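Pinning the environment to 3.11 looks like this (env name as used in the video):

```powershell
conda create -n privategpt python=3.11
conda activate privategpt
python --version   # should report some Python 3.11.x
```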

  • @firatguven6592
    @firatguven6592 3 months ago

    Thank you very much, it works like your previous PrivateGPT 2.0 guide. But compared to 2.0 this one uploads files much slower, as if it wasn't slow enough already. With 2.0, all 32 threads of my CPU were under 80% load during the upload process; you could see from the load that it was doing something important. But now the CPU load is only around 5%, which takes considerably more time, because I guess the parsing nodes are now generating the embeddings much slower. This is unfortunately a deal breaker for me, since I have lots of huge PDF files that need to be uploaded; I cannot wait a week or more just for the upload. In the end, a 4.0 version should be an improvement, but I cannot see any improvements here. Can somebody please list the real improvements, apart from Ollama, which for me is not a real improvement because version 2.0 also worked fine? I will switch back to 2.0 unless I can understand where the failure is.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, thanks for reaching out. The new version allows you to use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most people, and it's now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP, you can still enable that as a profile. If you really want high-speed processing, you can send it to OpenAI or one of the OpenAI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

    • @firatguven6592
      @firatguven6592 3 months ago

      @@stuffaboutstuff4045 Thanks for the advice. If I change anything in the backend it errors out, despite following the official manual and your explanation. If I set everything up for Ollama it works, but as mentioned, the file upload is extremely slow. I have now found a solution by installing from scratch according to version 2.0 with LlamaCPP and Hugging Face embeddings, and changing the ingest_mode from single to parallel; now it works much faster. There should be more options to increase the speed by raising the batch size or worker counts. Since they did not work before, I will not change and corrupt the installation unless there is a manual on how to raise the embedding speed to maximum, most probably with the help of the GPU, like in chat. The GPU support in chat works well, but during embedding the GPU is not being used.

    • @firatguven6592
      @firatguven6592 3 months ago +1

      @@stuffaboutstuff4045 After changing to parallel, the CPU utilization is at 100%, which explains the faster embedding. Since I have one of the fastest consumer CPUs, the result is now finally satisfying.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      @@firatguven6592 Awesome, glad you are running at acceptable speeds.

    • @firatguven6592
      @firatguven6592 3 months ago

      @stuffaboutstuff4045 In addition to that, I was able to change some parameters in settings.yaml with the help of the LLM: batch size to 32 or 64, dimension from 384 to 512, device to cuda, and ingest_mode: parallel, which gave the biggest improvement. Now the embeddings are really fast. Thank you very much. I would also like to test the sagemaker mode, since I could not get that mode working; I will try it again later.
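For reference, a sketch of where those ingestion knobs live (key names per the PrivateGPT ingestion docs; the values are starting points, not canon, and your settings file may differ):

```yaml
# settings.yaml (excerpt)
embedding:
  ingest_mode: parallel   # "simple" is the default; parallel uses all cores
  count_workers: 4        # worker count for the parallel/batch modes
```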

  • @patrickdarbeau1301
    @patrickdarbeau1301 4 months ago

    Hello, I got the following error message when running the command
    poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant":
    "No module named 'build'"
    Can you help me? Thanks.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, did you install all the required software I install at the start?

    • @Matthew-Peterson
      @Matthew-Peterson 4 months ago

      Close both Anaconda prompts and restart the process; don't rebuild your project though. GPT-4 says it's a connection issue when creating, and sometimes a computer restart sorts the issue. Worked for me.

    • @guille8237
      @guille8237 3 months ago

      Open your .toml file, update it with the correct build version, then update the lock file.

  • @AstigsiPhilip
    @AstigsiPhilip 2 months ago

    Hi, can this PrivateGPT handle 70,000 PDF files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, I personally have not worked with massive datasets; I know some people in the comments have. You might want to check out the link below for bulk and batch ingestion.
      docs.privategpt.dev/manual/document-management/ingestion#bulk-local-ingestion
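That page also documents a command-line bulk ingest that avoids the web UI for jobs of this size (a sketch; the folder path is a placeholder, and it should be run from the project root with the environment active):

```powershell
# Ingest every supported file under a folder, watching for new arrivals
make ingest C:\my-pdf-folder -- --watch
```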

  • @anishkushwaha9973
    @anishkushwaha9973 4 months ago

    Not working; it shows an error whatever prompt I give.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, what error do you get? Let me know and maybe I can help you out. Thanks!

    • @anishkushwaha9973
      @anishkushwaha9973 4 months ago

      @@stuffaboutstuff4045 It's showing "Error: Collection make_this_parameterizable_per_api_call not found".

  • @Omnicypher001
    @Omnicypher001 4 months ago

    Using a Chrome browser to host a web app doesn't seem very private to me.

  • @mrxtreme005
    @mrxtreme005 4 months ago

    Is 20 GB of space required?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, yes, if you load all the required software. This ensures you don't get errors if you build the other, non-Ollama options.

  • @reaperking537
    @reaperking537 3 months ago

    PrivateGPT answers me with blanks. Any solution?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, can you confirm which LLM you are sending it to? Ollama locally, like in the video? Are you getting no responses both when you ingest docs and in LLM Chat? Is anything happening in the terminal when it processes in the web UI? Let me know and we can hopefully get you up and running.

    • @reaperking537
      @reaperking537 3 months ago

      @@stuffaboutstuff4045 I am having difficulty with PGPT_PROFILES="ollama" (LLM: ollama | Model: mistral). I followed the same steps indicated in the video. LLM Chat (no file context) doesn't work, it gives me blank responses, and Query Files doesn't work either, it also gives blank responses. The error I get in the terminal is the following: [WARNING ] llama_index.core.chat_engine.types - Encountered exception writing response to history: timed out

    • @reaperking537
      @reaperking537 3 months ago

      @@stuffaboutstuff4045 I solved the problem by raising the response timeout in the 'settings-ollama.yaml' file from 120s to 240s. Thanks for the well-explained tutorial; keep it up.
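For anyone hitting the same timeout, the change looks roughly like this (file and key per the repo's settings-ollama.yaml; verify against your copy):

```yaml
# settings-ollama.yaml (excerpt)
ollama:
  request_timeout: 240.0   # seconds; the shipped default is 120.0
```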

  • @VaporFever
    @VaporFever 4 months ago

    How can I add Llama 3?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, if you are using Ollama you can install it and test it out; I am currently downloading it to test.
      The 8B model can be installed in Ollama using "ollama run llama3:8b", or you can install the 70B model with "ollama run llama3:70b". Let me know if you get it working.
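The Ollama side of that, as a sketch (model tags as published in the Ollama library; the llm_model key is an assumption based on the shipped settings file, so check your settings-ollama.yaml):

```powershell
ollama pull llama3:8b               # roughly a 4.7 GB download
ollama run llama3:8b "Say hello"    # quick smoke test
# Then set llm_model: llama3:8b under the ollama: section of
# settings-ollama.yaml and restart PrivateGPT
```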

  • @user-yr3xm1jk1q
    @user-yr3xm1jk1q 3 months ago +1

    Does it support the Arabic language?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, have a look at these threads; you would need LLM support for the language. I hope they point you in the right direction.
      github.com/zylon-ai/private-gpt/issues/28
      github.com/zylon-ai/private-gpt/discussions/764

    • @user-yr3xm1jk1q
      @user-yr3xm1jk1q 3 months ago

      @@stuffaboutstuff4045 🙏 thx

  • @JiuJitsuTech
    @JiuJitsuTech 3 months ago

    To run git clone from the Anaconda prompt, I had to run "conda install -c anaconda git"; then I was able to run "git clone ...". Otherwise, the prompt window just hung when I tried to git clone.

  • @SirajSherief
    @SirajSherief 3 months ago

    Can we do this on an Ubuntu machine?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago +1

      Hi, yes you can; the packages and flow will be similar to the video, obviously following the Linux steps. You can check out what's involved in building on Linux via the link below. Thanks for reaching out; let me know if you come right. 🔗docs.privategpt.dev/installation/getting-started/installation

    • @SirajSherief
      @SirajSherief 3 months ago

      Thanks for your kind response. But now I'm facing a new problem when I try to run the private_gpt module:
      "TypeError: BertModel.__init__() got an unexpected keyword argument 'safe_serialization'"
      Please tell me how to resolve this error.

  • @Cool_Monk-ey
    @Cool_Monk-ey 3 months ago +1

    In the last step I got this error and PrivateGPT didn't show up; please help me, someone:
    --- Logging error ---
    Traceback (most recent call last):
    File "C:\Users\1ub48\anaconda3\envs\privavtegpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ~~~~~~~~~~~~~^^^^^
    KeyError:

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, just checking if you came right with this. It sounds like something is going wrong when you set the PGPT profile and execute make run. Can you confirm you are doing this in an admin-mode Anaconda PowerShell prompt? Ensure the environment is active first and check the steps from 9:20 in the video. Let me know if resolved; thanks for reaching out.

    • @travs007
      @travs007 3 months ago

      @@stuffaboutstuff4045 I'm having the same problem: access denied to mistral.

  • @talatriaz
    @talatriaz 4 months ago +1

    Doesn't work for me; the only difference is that I'm using Win 11. All versions of software installed are the same as in the example, except for updated pip and Poetry versions.
    Everything is smooth until I get to the very last step. After running make run I get the following output:
    poetry run python -m private_gpt
    Traceback (most recent call last):
    File "<frozen runpy>", line 198, in _run_module_as_main
    File "<frozen runpy>", line 88, in _run_code
    File "C:\pgpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
    from private_gpt.main import app
    File "C:\pgpt\private-gpt\private_gpt\main.py", line 3, in <module>
    from private_gpt.di import global_injector
    File "C:\pgpt\private-gpt\private_gpt\di.py", line 3, in <module>
    from private_gpt.settings.settings import Settings, unsafe_typed_settings
    File "C:\pgpt\private-gpt\private_gpt\settings\settings.py", line 5, in <module>
    from private_gpt.settings.settings_loader import load_active_settings
    File "C:\pgpt\private-gpt\private_gpt\settings\settings_loader.py", line 9, in <module>
    from pydantic.v1.utils import deep_update, unique_list
    ModuleNotFoundError: No module named 'pydantic.v1'
    make: *** [run] Error 1
    The root cause seems to be a missing pydantic.v1 module. I have checked using pip list, and pydantic 1.10.7 is clearly present. A problem with GNU make???
    Has anyone else experienced this, or is it just me?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, did you come right with this? Just checking: do you have the required software, and are you running everything in the correct terminals (CMD, Anaconda, Anaconda PowerShell etc.)? I usually also run in an admin-mode terminal to avoid some issues. When you run make, can you confirm it's on the machine's path? After adding it to the path, make sure you open a new prompt window so the path is reloaded. Let me know if the above helps.

    • @talatriaz
      @talatriaz 4 months ago

      @@stuffaboutstuff4045 Apparently the problem was Windows 11. I repeated the exact same steps on a Win 10 system and it worked perfectly.

  • @thakurajay999
    @thakurajay999 3 months ago

    Error:
    Collection make_this_parameterizable_per_api_call not found

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, this issue usually arises when the system does not see any documents selected or ingested. Have a look at the two posts below. Let me know if resolved.
      github.com/ollama/ollama/issues/3052
      github.com/zylon-ai/private-gpt/issues/1334

  • @adamseng8514
    @adamseng8514 2 months ago

    I get this error every time I try to upload a file to ingest or when I type a message; everything along the way installed normally and went well up to this point:
    HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago +1

      Hi, just checking if you resolved this issue? It looks like PrivateGPT cannot talk to the backend you are sending the requests to. Are you using Ollama on a different server? If so, make sure you open its web server to accept more than localhost connections.
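If Ollama does sit on another machine, it has to be told to listen beyond localhost (OLLAMA_HOST is Ollama's documented environment variable; set it on the server before starting, and the api_base hint assumes the key in settings-ollama.yaml):

```powershell
$env:OLLAMA_HOST = "0.0.0.0:11434"   # listen on all interfaces, not just loopback
ollama serve
# On the PrivateGPT side, point the ollama api_base setting
# at http://<server-ip>:11434
```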

    • @adamseng8514
      @adamseng8514 1 month ago

      @@stuffaboutstuff4045 I've changed it to totally local with $env:PGPT_PROFILES="local" and everything seems to work fine there, just not with anything else, but that is also alright for me as I am still tinkering around with this. Can't thank you enough for this video and how easy it is to follow.

  • @methodssss
    @methodssss 2 months ago

    Running into an issue chatting with PrivateGPT in the browser:
    Error
    HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

    • @methodssss
      @methodssss 2 months ago

      I am sorry, that was the error for querying files. The error I get when trying to use LLM Chat is "'NoneType' object has no attribute 'split'".

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 months ago

      Hi, this is usually a document-ingestion or documents-not-selected issue. Can you check out the link below and restart the model? Let me know if you came right with this.
      github.com/zylon-ai/private-gpt/issues/1566

  • @KrunalKshatriya
    @KrunalKshatriya 1 month ago

    I have been getting this error after the "make run" command:
    ********************
    None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
    --- Logging error ---
    Traceback (most recent call last):
    File "C:\Users\KRUNAL\miniconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
    ~~~~~~~~~~~~~^^^^^
    KeyError:
    ...
    ******************************

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  24 days ago

      Hi, just checking in if you came right with this. Are you offloading to Ollama? The top bit is expected in that case, but the KeyError is not 😢. Make sure you run this in your activated PrivateGPT Anaconda environment, and that you use an admin-mode Anaconda PowerShell prompt to load the environment variable (PGPT_PROFILES as ollama). Let me know if you got this resolved; thanks for reaching out. You can also check whether the below applies:
      github.com/zylon-ai/private-gpt/issues/1225
      github.com/zylon-ai/private-gpt/issues/1184

  • @zackmathieu4829
    @zackmathieu4829 3 months ago +1

    I get an error after executing make run: KeyError:

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, can you confirm you are setting the environment variable and using make run in an admin-mode Anaconda PowerShell prompt? Check 9:16 onwards in the video. If the error persists, please confirm whether you are using Ollama like I do in the video or building for local Llama. Thanks for reaching out!

    • @zackmathieu4829
      @zackmathieu4829 3 months ago

      @@stuffaboutstuff4045 I've followed all of those steps correctly, but I have found that after I run poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant", setuptools never completes installation, no matter how long I wait.
      Also, even though I completely uninstalled Python and then reinstalled only Python 3.11.0, when I check the version in the Anaconda prompt it returns 3.11.9.

  • @navaneethk7798
    @navaneethk7798 3 months ago

    (base) PS C:\WINDOWS\system32> conda activate privategpt
    (privategpt) PS C:\WINDOWS\system32> cd .\pgpt\
    (privategpt) PS C:\WINDOWS\system32\pgpt> cd .\private-gpt\
    (privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> $env:PGPT_PROFILES="ollama"
    (privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> make run
    make: *** No rule to make target 'run'. Stop. 🤥

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 months ago

      Hi, just checking if you managed to resolve this. Did you follow all the make steps I use from about 5:25 in the video? Also check the steps from about 8:05: I create the folder for the software in the root of the drive, i.e. c:\pgpt. Check those steps and confirm the software install location. Lastly, make sure you load the $env variables in an admin-mode Anaconda PowerShell prompt. Let me know if you were able to resolve it.

  • @Cashemacom-ud8xb
    @Cashemacom-ud8xb 4 months ago

    How do I enable the GPU for this?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 months ago

      Hi, if you follow the instructions and config for Ollama, Ollama will handle the GPU offload. Otherwise you have to build it fully local for Llama-CPP support. Let me know if you come right.