Private Chat with your Documents with Ollama and PrivateGPT | Use Case | Easy Set up

  • Published: 31 Dec 2024

Comments • 462

  • @mjackstewart
    @mjackstewart 9 months ago +4

    Dude … This is AMAZING! I was just looking for pushes in the right direction, but this actually does exactly what I was attempting to do! Thank you!

  • @enceladus96
    @enceladus96 1 year ago +21

    You've saved me from going down my RAG rabbit hole. The code is extremely detailed, clean, and easy to understand too. God bless.

  • @Paul-gg3cr
    @Paul-gg3cr 1 year ago +6

    I've been looking for this for months. Thank you a lot, dude! Subscribed :)

  • @justhuman9551
    @justhuman9551 1 year ago +4

    Just found your channel and subbed after watching this video.
    Very good quality video! Keep up the great content creation!
    I am impressed by your motivation to answer the questions in your comment section.
    Not every YouTube channel cares about answering subscriber questions and making content around what people comment, so very good job!

    • @PromptEngineer48
      @PromptEngineer48  1 year ago

      Thank you. It's my pleasure to be talking with my viewers.

  • @吳小吳-j3b
    @吳小吳-j3b 10 months ago +8

    I ran "pip install tqdm", but the system still prompts: ModuleNotFoundError: No module named 'tqdm'

    • @natehedgeman
      @natehedgeman 9 months ago

      Make sure you have installed all the frameworks listed in the requirements.txt file. Rewind the video, he explains how to do it all at once. pip install -r requirements.txt
      If you have done that, make sure you are working in the correct environment. The same environment you installed the python frameworks in. He explains that as well.

    • @raminderpalsingh123
      @raminderpalsingh123 5 months ago

      I get the same error. Everything installed successfully, and I'm in the same environment ... :0)

    • @macx75
      @macx75 1 month ago

      Same here, installed all the required frameworks but still got the error. What solved it?
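A recurring culprit in this thread is pip installing into a different interpreter than the one that runs the scripts. A small sanity-check sketch (it assumes nothing about the repo; the commented pip command is illustrative):

```python
import importlib.util
import sys

# The interpreter that actually runs when you type "python":
print("interpreter:", sys.executable)

# Installing with "python -m pip" guarantees the package lands in *this*
# interpreter, not in whichever pip happens to be first on PATH, e.g.:
#   python -m pip install -r requirements.txt

# Check whether tqdm is visible to this interpreter, without importing it:
print("tqdm importable:", importlib.util.find_spec("tqdm") is not None)
```

If the second line prints False inside your activated environment, the earlier install went to a different interpreter.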

  • @joseffb7821
    @joseffb7821 11 months ago +4

    Can you still use the Ollama API to search your documents, or does it need to be via the console?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago +11

    This is really high-quality content, especially given the effort put into editing. The subtitles are a nice addition, too.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago

      Thank You for noticing the efforts... 😍 -- With Love (Prompt Engineer)
      However, more than the subtitles, I want the main content to be more engaging.

  • @BeyondTheSide
    @BeyondTheSide 8 months ago +1

    I would like to say that this works just as well even today. Many thanks to the Prompt Engineer; you have made my life and others' a lot easier.

  • @tier1recon836
    @tier1recon836 1 year ago +2

    Would like to see Ollama with an OpenAI-style assistant or similar that can use a file and have assistants act on the file, such as executing code or cleaning up data.

  • @mohamedsabirudeen9249
    @mohamedsabirudeen9249 10 months ago +2

    If I run ollama pull mistral,
    I get an error that says "could not connect to ollama app, is it running?"
    Please give me a solution for it.

    • @PromptEngineer48
      @PromptEngineer48  10 months ago +1

      Simple: before typing ollama pull mistral, run ollama serve.

    • @mohamedsabirudeen9249
      @mohamedsabirudeen9249 9 months ago

      I'm actually using it on a Linux platform, but even after running ollama serve, I'm getting "no GPU detected". Please give me a solution for this!
      @@PromptEngineer48

  • @MA_808
    @MA_808 10 months ago +1

    Thanks!

  • @donniealfonso7100
    @donniealfonso7100 11 months ago +2

    Followed your instructions here and installed on a Raspberry Pi 4. It works, but of course painfully slowly, and the chip approaches 145 °F, which slows things down as well. Still, it works, and I may try it on a Pi 5. I was using a PDF manual for a Viking drill press as the document. I'll have to try something with plain text.

  • @frankbradford2869
    @frankbradford2869 10 months ago +2

    How do I remove the embedded "Think and Grow Rich" PDF file? I ask because when I add another file, the query still goes back to this PDF and quotes it.

    • @PromptEngineer48
      @PromptEngineer48  10 months ago

      Delete the db and cache folders.

    • @frankbradford2869
      @frankbradford2869 10 months ago

      Thank you, and forgive me for sounding slow. Do you mean remove or delete the chroma.sqlite3 file? @@PromptEngineer48

    • @frankbradford2869
      @frankbradford2869 10 months ago

      If I delete the cache folder named db, will this affect how the program ingests the files I supply, or will it create a new db and then ingest the new files?
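For reference, the reset cycle discussed in this thread can be scripted. A sketch; the folder names db and __pycache__ are the defaults this repo appears to create, so adjust if yours differ:

```python
import shutil
import subprocess
import sys

# Remove the old vector store and bytecode cache so stale embeddings
# (e.g. "Think and Grow Rich") stop leaking into answers.
for folder in ("db", "__pycache__"):
    shutil.rmtree(folder, ignore_errors=True)  # no error if already absent

# Then re-embed whatever is currently in source_documents/, e.g.:
#   subprocess.run([sys.executable, "ingest.py"], check=True)
print("vector store reset")
```

A fresh db folder is created automatically on the next ingest run, so deleting it is safe.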

  • @arvindelayappan3266
    @arvindelayappan3266 1 year ago +3

    What is the system configuration you are using, and what is the response time for a query?

  • @nufh
    @nufh 1 year ago +3

    About the context window: I have noticed that it cannot exceed 2k tokens even though Mistral can support up to 8k. From what I have tested so far, the bot identifies itself as GPT-3. Is it because of the openai library?

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +2

      Yes. That is because we start with everything compatible with the OpenAI API, then shift to open-source APIs. We could instead use open-source APIs from the start. 😁

    • @nufh
      @nufh 1 year ago +1

      @@PromptEngineer48 So for this Ollama, the context window will not be limited to 2k, right? It will scale based on the model's capability.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Yes.

    • @nufh
      @nufh 1 year ago +1

      @@PromptEngineer48 I wish I could test it right now. Windows users need to wait.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +2

      We can try LocalAI. I will come up with a video.
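For readers hitting that 2k ceiling: Ollama exposes the context length as the num_ctx option of its generate API. A sketch that only builds the request body (no server call is made; 8192 is chosen here to match Mistral's advertised window):

```python
import json

# Request body for Ollama's /api/generate endpoint. The "options" field
# carries runtime overrides; num_ctx raises the default context window (2048).
payload = {
    "model": "mistral",
    "prompt": "Summarize the ingested document.",
    "options": {"num_ctx": 8192},
}
body = json.dumps(payload)
print(body)
```

POSTing that body to a running Ollama instance would apply the larger window for that request.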

  • @BexNoel
    @BexNoel 6 months ago +2

    My brain is exploding because I was using your code repository earlier today and then closed everything out. I'm trying to run it again and am now getting a bunch of LangChain deprecation warnings. I still get a response, but it is no longer referencing or citing the documents I placed in the folder. Any idea why this would happen?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      My bad, I should have frozen the dependencies.
      Fear not. I will create another video with fresh code.

  • @gabrielalejandroverapinto1974
    @gabrielalejandroverapinto1974 9 months ago +2

    This is great. Can you add or show how we could integrate GPU support, even better if it is via a GUI, with privateGPT 2.0?

  • @Andreas-r2f
    @Andreas-r2f 1 year ago +2

    Great video!! However, if I follow the instructions my results are different. I created the source_documents folder and put in another PDF file. When I then execute "python3 ingest.py", the ingestion seems to work fine. But when I afterwards execute privateGPT.py and start to interact with the LLM, it still responds about the "Think and Grow Rich" book.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago

      Delete the db and cache folders.

    • @arvindelayappan3266
      @arvindelayappan3266 1 year ago +1

      @@PromptEngineer48 Can we not append the PDF files? Do we have to keep removing them? When a new file is added and ingested, it should add the document to its cache and be able to respond from both documents, shouldn't it?

    • @kamcarlson1413
      @kamcarlson1413 11 months ago

      @@arvindelayappan3266 did you ever figure this out?

  • @r0ntuber
    @r0ntuber 8 months ago +1

    Thanks for doing this. It seems that when one clones the repository, you need to delete everything in the db folder or it will mess up the results for the information you are trying to input yourself.

  • @charlesbiggs7735
    @charlesbiggs7735 8 months ago +2

    Awesome effort!! Your code worked right off the bat. Thanks for saving me a LOT of time.

  • @prasadwtai
    @prasadwtai 1 month ago +1

    How can I get a good-looking UI like a chatbot? Not a command-prompt-like UI; instead, I would like to see a web-based UI with a send button.

    • @PromptEngineer48
      @PromptEngineer48  1 month ago

      You can use Streamlit, Gradio, Ollama Web UI, and many other options.

  • @rajayogan8884
    @rajayogan8884 1 year ago +1

    Does anyone get this warning - UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown (and subsequent crash)

  • @weihe2047
    @weihe2047 9 months ago +1

    Thank you very much; all the videos you have made are great! However, when I was building the local LLM, I found that there are multiple frameworks, such as privateGPT, localGPT, langchain, etc. Similarly, there are very many choices for the LLM as well as the vector database (e.g., Hugging Face vs. Ollama), which gave me a big headache. I was wondering if you could make a video that explains your recommendations for each part of the process of building a RAG-based personal local document chat LLM?

    • @PromptEngineer48
      @PromptEngineer48  9 months ago

      That would be a very good video but less relevant, as every day we get so many updates. But this is something I can create. Thanks for the idea; I will definitely work on that.

    • @weihe2047
      @weihe2047 9 months ago +1

      @@PromptEngineer48 Thank you for your response! It's true that, as you say, the various programs are moving fast. Since I'm hoping to build something myself via langchain, I'm starting to work from your GitHub project; some of the other out-of-the-box projects (e.g., open-webui, privateGPT, etc.) are just too heavy for me to get into and modify.

    • @PromptEngineer48
      @PromptEngineer48  9 months ago

      Cool.

  • @Alexiy25raffasan
    @Alexiy25raffasan 9 months ago +1

    It would be great to feed a local AI a project's code or framework and be able to ask it questions about the code.

    • @PromptEngineer48
      @PromptEngineer48  9 months ago

      Nice idea. I will try to implement the same.

  • @justinln6019
    @justinln6019 1 month ago +1

    Hi, I am connecting from another computer. I have my Ollama in the AWS cloud. How do I make it so I can train it like you did here?

    • @PromptEngineer48
      @PromptEngineer48  1 month ago

      There was no training, just ingest and retrieve. If you have Ollama in the AWS cloud, you need to reach it via API calls.
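A sketch of what "reach it via API calls" could look like: point a client at the cloud host's Ollama port. The hostname below is a placeholder, and the request is built but not sent so the snippet runs offline:

```python
import json
from urllib import request

# Placeholder address; substitute your AWS host. Ollama listens on 11434.
OLLAMA_URL = "http://ollama.example.com:11434/api/generate"

payload = json.dumps({
    "model": "mistral",
    "prompt": "What does the ingested manual say about maintenance?",
    "stream": False,
}).encode()

req = request.Request(OLLAMA_URL, data=payload,
                      headers={"Content-Type": "application/json"})
# request.urlopen(req) would perform the call; omitted here so no network is needed.
print(req.full_url)
```

Remember to open the port in the instance's security group and to bind Ollama to a reachable interface, or the call will time out.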

  • @ganeshnayak9459
    @ganeshnayak9459 7 months ago +1

    Great tutorial. Why does it say "loading 235 new documents" when there is only one in the source_documents folder? I had 2 in mine and it said 8; wondering why.

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      It's because of the chunking. I had put in only one document, but it was chunked into many pieces.
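The chunk arithmetic can be sketched in plain Python (chunk size 500 and overlap 50 mirror the defaults the original privateGPT code used; treat them as assumptions):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, the way the ingest step does."""
    chunks, start, step = [], 0, chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

book = "x" * 10_000              # stand-in for one 10,000-character file
print(len(split_text(book)))     # one file becomes many "documents" -> 23
```

So "235 new documents" counts chunks, not files, which is why two files can report eight documents.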

  • @amadmalik
    @amadmalik 4 months ago +1

    Hi, can you update this so we can use Llama 3.1 instead? Please provide a version that works with Apple silicon, as this one fails on my M3 Mac.

  • @fabriziocasula
    @fabriziocasula 1 year ago +2

    Thank you. Sorry, but I don't see the old chat interface :-) I have 2 questions:
    How can I remove an ingested document that I don't need?
    Is it possible to chat via the Docker interface, or is it only for the terminal?

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +2

      1. You can remove the file, then delete the db folder and __pycache__ folder, then run python ingest.py again and python privateGPT.py again.
      2. A web UI is not integrated here right now, but that is in my pipeline as well. I am working on that.

  • @davidaliaga4708
    @davidaliaga4708 7 months ago +1

    Do you have the PDF document you tried? I would like to try it myself.

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      It's the "Think and Grow Rich" book. Just search for it on the internet.

  • @frankbradford2869
    @frankbradford2869 10 months ago +1

    This works very well, but it has issues ingesting docx, pptx, and ods files without an extra pip install.

  • @RamondeBruyn
    @RamondeBruyn 1 year ago +2

    Thank you for this great content! I was able to get this working on my M1 Mac and to run the `python ingest.py` and `python privateGPT.py` commands. However, when I asked it to summarize the document I had uploaded, it referenced the "Think and Grow Rich" document that you showed in the video, rather than the test document I uploaded to the source_documents folder. How do I clear out the embeddings from the "Think and Grow Rich" document, or clear the Chroma db embeddings completely, before running the ingest.py command again?

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Delete the db and cache folders.

    • @RamondeBruyn
      @RamondeBruyn 1 year ago +1

      @@PromptEngineer48 Thank you! Got it to work exactly as expected! Thank you for all the great content!

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Welcome

  • @AbhishekThakur-yh9bd
    @AbhishekThakur-yh9bd 6 months ago +1

    This is awesome, but I'm still not able to change the base URL for Ollama. Is there any way I can change that?

  • @varun_tech7
    @varun_tech7 9 months ago +1

    ModuleNotFoundError: No module named 'tqdm'
    even after installing the library properly. Any fix?

    • @PromptEngineer48
      @PromptEngineer48  9 months ago +1

      Were you able to solve it?

    • @varun_tech7
      @varun_tech7 9 months ago +1

      @@PromptEngineer48 Yeah, updating the package fixed it for me. Thanks again for this awesome tutorial.

    • @PromptEngineer48
      @PromptEngineer48  9 months ago +1

      @@varun_tech7 You solved the problem yourself. Congrats.

    • @raminderpalsingh123
      @raminderpalsingh123 5 months ago

      @@varun_tech7 which package did you update? thx

    • @varun_tech7
      @varun_tech7 5 months ago

      @@raminderpalsingh123 I don't exactly remember; just make sure all packages are up to date.

  • @drkvaladao776
    @drkvaladao776 8 months ago +2

    Hi, while setting up the virtual environment I'm getting an error at 7:39. What programs do I need? I have installed Miniconda and it's still not running the line. Thanks.

  • @betagroobox
    @betagroobox 10 months ago +1

    Wonderful, thank you! My dream would be to feed my local model all my books in EPUB or PDF format just once and have the model keep a memory of them. From there I have so many ideas, but I'm not sure if they're feasible; maybe someone can help? 1) for each book, create a mind map of concepts; 2) a diagram of how each book is related to the others (citations, same authors, same topic, related concepts); 3) given a question or a topic, the system can point me to which book is better to read. Probably impossible at the moment, isn't it?

    • @PromptEngineer48
      @PromptEngineer48  10 months ago +1

      Wonderful idea. I will dedicate time to a POC.

    • @betagroobox
      @betagroobox 10 months ago

      @@PromptEngineer48 Awesome! Another one could be to automatically find the book category. Since the system has already ingested all the books, it already knows the topics discussed in each book, and from there it can assign each to an ontology of categories and subcategories, like non-fiction/self-help, fiction/novel, non-fiction/self-help/personal growth, and so on. I have 100 other ideas like these; ping me if you need more :D

    • @SujithAbraham
      @SujithAbraham 10 months ago

      I would also be interested in something like this if possible as it would be amazing to do this in a repository of books that you know. If you could do this with a non-trivial number of books, say, 100-150 (from Project Gutenberg), it would be a great application of local LLMs.

  • @hottingracer8575
    @hottingracer8575 1 month ago +1

    How can we get a ChatGPT-like local LLM? I want to use it for research, forecasting, and predictions.

    • @PromptEngineer48
      @PromptEngineer48  1 month ago

      There is research going on. You need to find a model on Hugging Face or Ollama, look at the benchmarks, and decide which model you want to use. Since we have so many options, I cannot name one; it's a case-by-case basis.

  • @ryanwales9399
    @ryanwales9399 9 months ago +1

    I keep getting an error when running python3 ingest.py: line 8, no module named langchain.

  • @BetterEveryDay947
    @BetterEveryDay947 8 months ago +1

    Can you tell how to use other models, like llama3, instead of mistral?

  • @harishhari605
    @harishhari605 8 months ago +1

    Hi, can you create a video on how to clean our own data in a CSV file so it best answers our queries effectively?

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      Yes, I can. But to be clear: you want a data-cleaner LLM which will clean your CSV file?

    • @harishhari605
      @harishhari605 8 months ago +1

      @@PromptEngineer48 Okay, please go ahead.

  • @pankajagarwal1980
    @pankajagarwal1980 8 months ago +1

    Well explained. Can you suggest how we can pass in a OneNote file?

  • @_Rithika-xh7hn
    @_Rithika-xh7hn 7 months ago +1

    I am getting answers from outside the PDF as well. How do I restrict it to be PDF-specific only?

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      Ingest only the PDF.

    • @_Rithika-xh7hn
      @_Rithika-xh7hn 7 months ago

      @@PromptEngineer48 Even after ingesting the PDFs, I am getting answers for some questions that are not in the PDF. Is it because of the already-trained model?

  • @harishhari605
    @harishhari605 9 months ago +1

    Could you provide suggestions on how to enhance the conversational capabilities of this bot?

  • @yashyaadav
    @yashyaadav 7 months ago +1

    Are we using Poetry here or not? Because that part was not in the video.

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      Yes.

    • @yashyaadav
      @yashyaadav 7 months ago +1

      Is the full code available in the GitHub repository, or are there some scripts missing via .gitignore?

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      No, everything is in the GitHub repo.

  • @randomscandinavian6094
    @randomscandinavian6094 9 months ago +1

    I'm getting "CondaError: Run 'conda init' before 'conda activate'" during my installation. I did try conda init, but then it says "no action taken". As usual, I can't get a step-by-step tutorial to work.

    • @PromptEngineer48
      @PromptEngineer48  9 months ago

      So you were able to create a conda environment? Using conda create -n your-name python=3.11?

    • @randomscandinavian6094
      @randomscandinavian6094 9 months ago +1

      Yes. Followed everything up until the activation part

    • @randomscandinavian6094
      @randomscandinavian6094 9 months ago +1

      Although I don't get the (base) in front of my path like you did after
      Preparing transaction: done
      Verifying transaction: done
      Executing transaction: done

  • @Elrevisor2k
    @Elrevisor2k 1 year ago +1

    Where is the knowledge base stored? Does it keep track of all PDFs already processed?

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      There are two folders created automatically, named db and cache.

  • @testtest-n9b
    @testtest-n9b 11 months ago +1

    Hello, I keep getting this error: ERROR: Could not find a version that satisfies the requirement onnxruntime>=1.14.1 (from chromadb) (from versions: none)
    ERROR: No matching distribution found for onnxruntime>=1.14.1

    • @JDSchuitemaker
      @JDSchuitemaker 11 months ago +1

      I had the error for ChromaDB too. If you Google it, you will probably find an answer. For ChromaDB, this solved it for me:
      - sudo apt install python3-dev
      - sudo apt-get install build-essential -y

  • @jorgitozor
    @jorgitozor 8 months ago +1

    Nice video, very informative! What do you use to generate subtitles? Thanks.

  • @DeepakItkar-p9n
    @DeepakItkar-p9n 6 months ago +1

    Thanks for your tutorial. I am trying this on a Windows PC, in the Anaconda prompt. I am stuck at this error: "Error: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects". I tried installing the Visual Studio Build Tools, but the error still persists. Any idea?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      I believe we need the .NET SDK, .NET Framework, and other tools via the build installer.

    • @DeepakItkar-p9n
      @DeepakItkar-p9n 6 months ago

      @@PromptEngineer48 All installed, but it still won't run.

    • @Validity_TN
      @Validity_TN 3 months ago

      Got the same error. Did you manage to get it running?

  • @艾曦-e4g
    @艾曦-e4g 1 year ago +1

    I cannot update the db. When I ask the agent about a newly added document, it still gives answers about the document in this video. It is kind of confusing.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Delete the db and cache folders, then upload your own document.

    • @艾曦-e4g
      @艾曦-e4g 1 year ago

      Thank you, you are right, and really helpful! @@PromptEngineer48

  • @Clammer999
    @Clammer999 8 months ago +1

    I couldn't get conda to work after installing it. The installed files are in /opt/miniconda3, but whenever I run conda, it says "Command Not Found".

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      www.anaconda.com/download
      You need to install Anaconda. However, you could use a .venv instead of conda; we just need a virtual environment.
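If conda itself is the obstacle, the standard library's venv module provides the same isolation with no separate installer (a sketch; the .venv name is just a convention):

```python
import venv

# Create an isolated environment in ./.venv with pip available inside it.
venv.EnvBuilder(with_pip=True).create(".venv")
print("activate with: source .venv/bin/activate")
print(r"on Windows:    .venv\Scripts\activate")
```

After activating, pip install -r requirements.txt installs into that environment only.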

    • @Clammer999
      @Clammer999 8 months ago +1

      @@PromptEngineer48 OK, managed to get conda working. However, when I run python3 ingest.py, I get an error: line 6, in: from tqdm import tqdm. ModuleNotFoundError: No module named 'tqdm'

    • @Clammer999
      @Clammer999 8 months ago +1

      OK, made more progress, but now I'm stuck on pymupdf. Tried installing it but keep getting the message "Requirement already satisfied: /opt/anaconda3/envs/privategpt/lib/python3.11/site-packages".

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      pip install tqdm should have worked.

  • @Paulo-ut1li
    @Paulo-ut1li 1 year ago +3

    Thanks, that's a great video! I've been testing privateGPT for some time, and I would love to know if you're experiencing hallucinations from the chat. Mistral seems to be a good model, but Zephyr and Dolphin seem to give better answers with a little less performance, depending on the context. Still, I couldn't get rid of some hallucinations; I would say the reliability of the information is 45-65%.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago

      Yes, hallucinations are there. We need a better model in the future.

  • @ChronodeAi
    @ChronodeAi 1 year ago +1

    Hey! Are you able to show how to use auto-mem-local using Ollama? Thanks!

  • @kusumahaja
    @kusumahaja 7 months ago +1

    Hello @PromptEngineer48, I'm new to Python and want to learn this. I followed the instructions in your great video but hit many errors when installing the modules in requirements.txt. Any update?

    • @PromptEngineer48
      @PromptEngineer48  7 months ago +1

      Why don't I come up with an updated video? Please give me a week or so.

    • @kusumahaja
      @kusumahaja 7 months ago

      @@PromptEngineer48 very nice... thank you sooo much....

  • @panfeng2879
    @panfeng2879 8 months ago +1

    Is there a limitation on the max number of personal documents that I can upload to PrivateGPT?

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      No, but at some point the vector store gets confused and is not able to retrieve the relevant chunks.

  • @cuoi123
    @cuoi123 11 months ago +1

    Hi, Ollama is running; I input a query but receive no answer, and the terminal is blank. What should I do?

    • @PromptEngineer48
      @PromptEngineer48  10 months ago

      Try different LLMs; a smaller version, please.

  • @RealBassPhat
    @RealBassPhat 8 months ago +1

    Very interesting, easy to follow. I tested this with a music instrument manual, and it wasn't giving accurate answers at all. Any ideas on how to improve this? It's unusable for this type of document. Makes me wonder how accurate it would be with any content. Thank you!

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      That was pretty old stuff. Please watch the recent videos on my channel.

  • @gabriel-gr
    @gabriel-gr 10 months ago +1

    This has been very instructive, thanks!
    Is there an LLM that's better than Mistral at working with very technical documents, i.e., lengthy API implementation documents? I set up my environment exactly as instructed, got my docs indexed, and could get some answers about them. But things get murky when I go very specific, with incorrect or incomplete answers.

  • @Al-sd5pg
    @Al-sd5pg 10 months ago +1

    Great tutorial, one of the best on the web!! Thanks for your time and effort! Upvoted 👍

  • @heinzpeterklein9383
    @heinzpeterklein9383 1 year ago +4

    Awesome idea. Now use Streamlit or Flask as a GUI and the solution is perfect. Thanks for the inspiration. Questions: 1. Which OS are you using? 2. Which Python version? 3. Do you use CPU or GPU? Would an M3 with 128 GB also be sufficient for quick training / fine-tuning of Hugging Face models up to 20B? Thanks in advance for the answer.
    Hp

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      1. macOS. Pretty basic: MacBook Air M2, 8 GB.
      2. Python 3.9+.
      3. M3 with 128 GB... Hmm, for fine-tuning it depends on the model, but here is a rough calculation. If you have a 20B-parameter model in 32 bits, you need 20x32/8 = 80 GB of GPU memory, so your system should be able to do the fine-tuning. Otherwise, go for 4-bit quantization; the requirement then drops by 8 times, to only 80/8 = 10 GB of GPU memory.

    • @MatthewTrevathan
      @MatthewTrevathan 1 month ago

      I'm using streamlit for a great little interface.

  • @BetterThanTV888
    @BetterThanTV888 1 year ago +2

    How would you do this if you have Ollama in Docker? Or even a cloud GPU like RunPod or Linode? Seems like a good video for the future, as you explain and teach better than the majority of creators.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Okay, I will create a video soon on the specific use case you mentioned, i.e., hosting on Docker and chatting with docs. Thanks for the comment.

    • @AlperYilmaz1
      @AlperYilmaz1 1 year ago +1

      Same here, I'm using dockerized Ollama. It would be great to have privateGPT with dockerized Ollama.

    • @PromptEngineer48
      @PromptEngineer48  1 year ago +1

      Okay. Got the requirements
      Now it's my turn to create that. 😄

  • @adityadeshmukh
    @adityadeshmukh 10 months ago +1

    Can you provide a similar use-case setup for Windows as well, now that Ollama is available on Windows?

    • @PromptEngineer48
      @PromptEngineer48  10 months ago

      No difference. Just use the code on windows. Make sure to install Ollama on windows.

  • @hyde8118
    @hyde8118 10 months ago +1

    Interesting idea for an integration. But I think that since you hit no bugs in this process, it should be automated. Also, Ollama is nothing more than a click-to-run tool to download and deploy different sorts of AI models, so in fact you don't really need it to run Mistral with privateGPT. Or am I wrong?

    • @PromptEngineer48
      @PromptEngineer48  10 months ago

      Now, with more interesting integrations, we can scrap privateGPT itself and use Ollama to code up our projects natively. Yes, you are right.

  • @MerguVinay
    @MerguVinay 7 months ago +1

    I am facing this error:
    conda : The term 'conda' is not recognized as the name of a
    cmdlet, function, script file, or operable program. Check the
    spelling of the name, or if a path was included, verify that the
    path is correct and try again.
    At line:1 char:1
    + conda create -n private1 python=3.11
    + ~~~~~
    + CategoryInfo : ObjectNotFound: (conda:String) [],
    CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

    • @PromptEngineer48
      @PromptEngineer48  7 months ago

      docs.anaconda.com/free/anaconda/install/windows/
      Install Anaconda.

  • @AANMAN2008
    @AANMAN2008 2 months ago +1

    How is this approach different from RAG? Can you elaborate, please?

    • @PromptEngineer48
      @PromptEngineer48  2 months ago

      It is RAG

    • @AANMAN2008
      @AANMAN2008 2 months ago +1

      @@PromptEngineer48 But I'm confused: you replied to the message just below for a question which assumes it is not RAG?

    • @PromptEngineer48
      @PromptEngineer48  2 months ago

      I am sorry for any confusion. It is a RAG system: any user-provided PDFs, etc., will be used to reply to the user's questions.

  • @bx1803
    @bx1803 9 months ago +1

    I want to try to give it some ability to troubleshoot for me, like conduct pings and traceroutes.

  • @darshanpatil1663
    @darshanpatil1663 9 months ago +1

    I am getting the sqlite3 error about an unsupported version; even the link specified does not solve it. I get the error when I run the ingest.py file.

  • @LumpBrady0
    @LumpBrady0 6 months ago +1

    Hello, I am trying to follow your YouTube video "Private Chat with your Documents with Ollama and PrivateGPT", but when I type in my query after running the python privateGPT.py part, I get the following error message: "ValueError: Ollama call failed with status code 404. Details: model '7560' not found, try pulling it first". I'm not sure what this refers to, as I do the ollama pull mistral command before running the rest of the code. Any idea how to fix this?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      Try this: ollama run mistral:latest

    • @LumpBrady0
      @LumpBrady0 6 months ago

      @@PromptEngineer48 It lets me run mistral:latest, but how will this fix the error above? Do I have to add it to the Python code somewhere?

  • @roaming934
    @roaming934 10 months ago +1

    Are you using the code from privateGPT's primordial version? What great work! By the way, they now officially support integration with Ollama. You probably want to make a video about how to set that up.

  • @JohnDo-ntchaknow
    @JohnDo-ntchaknow 7 months ago +1

    If my company has a pre-existing Data Dictionary, is there a way to allow Ollama to integrate it so that it better understands the data I am working with?

    • @PromptEngineer48
      @PromptEngineer48  7 months ago +1

      Yes, that could technically be included.

  • @salahdinwaji7498
    @salahdinwaji7498 8 months ago +1

    Thank you for the amazing video! A quick question: are these local LLMs safe to use with private data? I want to use it for work, but I don't know if the info will be shared with Meta.

    • @PromptEngineer48
      @PromptEngineer48  8 months ago

      You can switch off the internet. Safe 🔐 or not, we cannot guarantee; it may happen that once you connect to the internet, the data gets transferred.

    • @salahdinwaji7498
      @salahdinwaji7498 8 months ago

      @@PromptEngineer48 Okay, so the advantage of running an LLM locally is just to save some $$ on API calls?

  • @swapnil0402
    @swapnil0402 10 месяцев назад +1

    Hi, thanks for the tutorial. I am able to run the model using Ollama on Windows, and the project runs and asks me to enter a query, but when I type a question it gets stuck there. In the Problems tab I am getting issues such as:
    Import "langchain.chains" could not be resolved (Pylance)
    Import "langchain.embeddings" could not be resolved (Pylance)
    Import "langchain.callbacks.streaming_stdout" could not be resolved (Pylance)
    etc.
    Can you please help me understand and resolve this issue?
    Thanks.

    • @PromptEngineer48
      @PromptEngineer48  10 месяцев назад

      There are bugs on Windows; you can try Linux on Windows (WSL).

  • @DiminencoIulian
    @DiminencoIulian 9 месяцев назад +1

    Is there a way to get responses only from your documents?

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад

      Yes, with a good prompt. At the beginning of the prompt, mention that it is a chatbot and should answer based on these documents only, plus some modifications. This works with the OpenAI API; I have tested it in a RAG project I am currently working on.
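The approach in the reply can be sketched as a prompt template; the wording below is an illustrative example, not the video's actual prompt:

```python
# Illustrative system prompt that restricts answers to the retrieved context.
PROMPT = """You are a chatbot. Answer ONLY from the context below.
If the answer is not in the context, say "I don't know."

Context:
{context}

Question: {question}"""

# The RAG pipeline would fill in the retrieved chunks and the user's query:
print(PROMPT.format(context="(retrieved chunks)", question="(user query)"))
```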

  • @davidpe76
    @davidpe76 Год назад +1

    Great video, took me a few tries getting Ubuntu configured (using wsl under windows) and updated before it would build the scripts, but I am very impressed. Thanks for all the effort you put into these videos 😁

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      🤗 welcome. Trying to bring the best.

    • @waynesbigw2305
      @waynesbigw2305 10 месяцев назад

      What is "WSL under Windows"? I'm running Linux, no Windows here. The instructions in the video don't work on my system at all.

  • @macx75
    @macx75 Месяц назад +1

    line 6, in <module>
    from tqdm import tqdm
    ModuleNotFoundError: No module named 'tqdm'

    • @macx75
      @macx75 Месяц назад +1

      gets stuck at this

    • @PromptEngineer48
      @PromptEngineer48  Месяц назад

      pip install tqdm should solve the issue.

  • @fabriziocasula
    @fabriziocasula Год назад +1

    Can you help me? After python ingest.py I receive an error :-(

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      Please mention the error!

    • @fabriziocasula
      @fabriziocasula Год назад +1

      I tried to copy the error here but it doesn't accept it @@PromptEngineer48

    • @fabriziocasula
      @fabriziocasula Год назад +1

      i have a new Mac with chip M2

    • @fabriziocasula
      @fabriziocasula Год назад +1

      the problem is the hnswlib library

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      On Mac you should use python3 instead of python, so please type: python3 ingest.py

  • @samshosho
    @samshosho 10 месяцев назад +1

    Thanks for the great effort.
    I just have a question: when a file is ingested and I then want to ingest a different file, should I delete the db folder first, so as not to mix older ingested files with the current one?
    Also, after ingesting one of my CSV files, I asked a few questions. The answers I was getting were far off and actually came from another source I didn't provide (a PDF book about getting rich or something), when I had only ingested a CSV file with numbers!
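On the first question: a common approach is to clear the old vector store before ingesting an unrelated document set, which also explains the stray PDF answers (retrieval was pulling chunks ingested earlier). A minimal shell sketch, assuming the db folder in the repo root is the persist directory, as in the video:

```shell
# Remove the old vector store so retrieval cannot surface chunks from
# previously ingested files ("db" is the persist directory in the video's code).
rm -rf db

# Then re-ingest only the files currently in source_documents:
#   python ingest.py
```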

  • @hpsfresh
    @hpsfresh 9 месяцев назад +1

    How does it know it should use mistral if I have several models downloaded?

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад

      That is hardcoded.

    • @hpsfresh
      @hpsfresh 9 месяцев назад +1

      @@PromptEngineer48 Actually not, it's written in the config. Take a look.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w Год назад +1

    do you know how to stop Ollama afterwards? it continues to run in the background even after trying to end the process multiple times.

  • @techietoons
    @techietoons 5 месяцев назад +1

    Will it recalculate embeddings every time I add more PDF documents?

    • @PromptEngineer48
      @PromptEngineer48  5 месяцев назад

      yes

    • @techietoons
      @techietoons 5 месяцев назад +1

      @@PromptEngineer48 I mean it should compute embeddings for the new documents only, not for the entire set.
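The incremental behaviour asked about here can be sketched with a manifest of content hashes, so only new or changed files are handed to the embedding step on re-ingest; the manifest file and function below are illustrative bookkeeping, not the video's code:

```python
import hashlib
import json
from pathlib import Path

def new_documents(source_dir: str, manifest: str = "ingest_manifest.json"):
    """Return files in source_dir whose content hash is not yet recorded,
    then update the manifest, so a second ingest run skips everything
    that was already embedded."""
    manifest_path = Path(manifest)
    seen = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    fresh = []
    for path in sorted(Path(source_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if seen.get(str(path)) != digest:  # new or changed file
                seen[str(path)] = digest
                fresh.append(path)
    manifest_path.write_text(json.dumps(seen))
    return fresh
```

Only the files returned by new_documents would then be chunked and embedded.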

  • @omsen2805
    @omsen2805 8 месяцев назад +1

    Great video! Can I connect LangChain with it, or is it included? I'm a newbie at this :D

  • @petergab734
    @petergab734 9 месяцев назад +1

    I got an error when I typed ollama run mistral: a message saying the ollama command was not found. I get this from within the terminal of Visual Studio Code, but I can run Ollama from the Mac's terminal window no problem. Did I forget to do something? Thank you!!!

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад

      Try changing the type of terminal that you are using, i.e. zsh or command prompt.

  • @superfreiheit1
    @superfreiheit1 Месяц назад

    Did not work; I get errors when executing python ingest.py.

  • @fabriziocasula
    @fabriziocasula Год назад +1

    I've tried everything but it doesn't work for me...:-(
    ImportError: `PyMuPDF` package not found, please install it with `pip install pymupdf`
    I tried to install PyMuPDF but nothing changes

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      Just copied this from Google, try this:
      "For the developers who are facing this issue on macOS, you need to install pip install PyMuPDF==1.20.0, as PaddleOCR requires PyMuPDF"

    • @fabriziocasula
      @fabriziocasula Год назад +1

      thanks, i try it now @@PromptEngineer48

    • @fabriziocasula
      @fabriziocasula Год назад +1

      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for PyMuPDF
      Running setup.py clean for PyMuPDF
      Failed to build PyMuPDF
      ERROR: Could not build wheels for PyMuPDF, which is required to install pyproject.toml-based projects
      @@PromptEngineer48

    • @fabriziocasula
      @fabriziocasula Год назад +1

      on my mac it ist not possible to install a old PyMuPDF 😞@@PromptEngineer48

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      😢

  • @kashifrit
    @kashifrit 9 месяцев назад +1

    Can privateGPT be run with a web-type interface similar to your previous video?

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад +1

      Is there anything that is not possible? 😅😀

  • @veniagl3984
    @veniagl3984 8 месяцев назад +1

    Can I use this to extract PDF information from 100 PDFs? I need the same information extracted from each PDF and stored in rows, so I need a table of 100 x (items to extract); e.g. extract Total Assets from a balance sheet (that will be my first column), and I need to do this task for 100 companies. Can I use this code to do that? I feel that this is more of a many-to-one thing rather than many-to-many. Thanks so much for your content!

    • @PromptEngineer48
      @PromptEngineer48  8 месяцев назад

      If I understand that correctly, it could be hardcoded. I don't think we need an LLM here.

  • @7ali1124
    @7ali1124 10 месяцев назад +1

    Great video! It is very cool. I noticed that you ran Ollama on your Mac, but can you show how to set this up on a server or in the cloud, so the service can be provided to your friends? That would be helpful.

    • @PromptEngineer48
      @PromptEngineer48  10 месяцев назад

      Yes, you can! I will try to bring in a video.

  • @batzizou
    @batzizou 7 месяцев назад +2

    Good work!
    I found your video well done!

  • @williamwong8424
    @williamwong8424 Год назад +1

    Great video. Now can you do it in Streamlit, so there's a user interface to chat? And how can we host it online, e.g. on Render?

    • @PromptEngineer48
      @PromptEngineer48  Год назад

      Okay, Streamlit and Render integration, got it. Will do that.

  • @evelbsstudio
    @evelbsstudio 9 месяцев назад +1

    How do you turn sources off? Just get the answer?

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад

      python privateGPT.py --hide-source
      Try this

  • @MatiasFedericoWolters
    @MatiasFedericoWolters 8 месяцев назад +1

    Hi, a quick question: how can I change the model, for example to llama3 with the 8b-instruct-q6_K variant? Please.

    • @PromptEngineer48
      @PromptEngineer48  8 месяцев назад

      Go to line 12 of the privateGPT.py file and change mistral to whatever model your heart desires.
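For reference, the edit described in the reply is a one-line change (the exact line number may differ in your copy of the repo); the tag must first be pulled with ollama pull:

```python
# In privateGPT.py, swap the hardcoded model tag for the one you want
# Ollama to serve (it must already have been pulled):
model = "llama3:8b-instruct-q6_K"  # was: model = "mistral"
print(model)
```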

  • @DataTheory92
    @DataTheory92 10 месяцев назад +1

    Which is the best vision model to extract entities from complex invoices?

    • @PromptEngineer48
      @PromptEngineer48  10 месяцев назад

      If you are talking about open source, then I have to go with ollama.com/library/llava
      If closed source, then OpenAI.

  • @TheArchitect101
    @TheArchitect101 9 месяцев назад +1

    The response speed is slow on MacBook Air

  • @harishhari605
    @harishhari605 9 месяцев назад +1

    can you make a video for the front end as well?

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад

      Front end, okay. Which interface would you like: Gradio, Streamlit, etc.? Any specific requirements? My default would be Gradio; would that work?

    • @harishhari605
      @harishhari605 9 месяцев назад +1

      Yes, Gradio will be suitable! Thank you for your quick response. I appreciate your efforts. I look forward to watching your video

    • @PromptEngineer48
      @PromptEngineer48  9 месяцев назад +1

      Okay. On it

    • @harishhari605
      @harishhari605 9 месяцев назад +1

      Thank you very much @PromptEngineer48. Looking forward to it.

  • @davidaliaga4708
    @davidaliaga4708 7 месяцев назад +1

    Fantastic! Unfortunately it doesn't work :( When doing python ingest.py we get: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0

    • @PromptEngineer48
      @PromptEngineer48  7 месяцев назад

      My bad, I have committed a sin by not freezing the library versions.

    • @davidaliaga4708
      @davidaliaga4708 7 месяцев назад +1

      @@PromptEngineer48 Is there a way to correct it? I would really like to try your version. (I think I made it worse, because after that I ran sudo apt-get install sqlite!)

    • @PromptEngineer48
      @PromptEngineer48  7 месяцев назад

      I will have to search my old files.
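A commonly cited workaround for this Chroma error (general advice, not from the video's repo) is to install a bundled modern SQLite with pip install pysqlite3-binary and alias it over the standard library module before chromadb is imported; the try/except keeps the snippet harmless when the package is absent:

```python
import sys

# Swap in the bundled modern SQLite (pip install pysqlite3-binary) before
# chromadb is imported; fall back to the system sqlite3 if it is missing.
try:
    __import__("pysqlite3")
    sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
except ImportError:
    pass

import sqlite3
print("sqlite3 runtime version:", sqlite3.sqlite_version)  # Chroma needs >= 3.35.0
```

Place the swap at the very top of ingest.py (before any chromadb/langchain imports) for it to take effect.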

  • @SamdarshiPali
    @SamdarshiPali Год назад +1

    Is a similar solution possible using 'LM Studio' for users with Windows machines?

    • @PromptEngineer48
      @PromptEngineer48  Год назад +1

      Not right now. But if you follow the Discord of LM Studio, they have plans.

    • @DustinHare
      @DustinHare Год назад

      @@PromptEngineer48 I installed WSL on my Windows machine and followed your tutorial; everything worked fine. Great tutorial btw, thank you :)

  • @varun_tech7
    @varun_tech7 9 месяцев назад +1

    Is there a way to view the actual embedding values from ChromaDB?

  • @spiazzigiovanni7330
    @spiazzigiovanni7330 Год назад +1

    Is it possible to load legacy code (e.g. VB6) and a database schema, and query it the way this code does?

  • @frankbradford2869
    @frankbradford2869 10 месяцев назад +1

    Hi, I did what you said, with some hesitation, but it worked as you said. This is a good program for taking a close look at a document's content and meaning. Thanks. BTW, is there a way to get the program to give a full response without telling it to continue with its explanation?

    • @PromptEngineer48
      @PromptEngineer48  10 месяцев назад

      I think there should be a verbose flag, which you can set to False.

  • @ahmadsiddiqui7998
    @ahmadsiddiqui7998 9 месяцев назад +1

    @PromptEngineer48, can you host it with a basic UI, so people could upload their docs and ask questions without doing all of this hard work 🙈 (and also not keep anyone's personal documents with you)?

  • @ahmedsayed7138
    @ahmedsayed7138 3 месяца назад +1

    You're a life SAVER... many thanks

    • @PromptEngineer48
      @PromptEngineer48  3 месяца назад

      Welcome

    • @ahmedsayed7138
      @ahmedsayed7138 3 месяца назад

      @@PromptEngineer48 Can I apply this inside a Streamlit web app, so that users ask and get the answer in a UI? Can these models be deployed?