This is the same as all the available chat-with-PDF applications out there. Although accurate, this solution is not feasible, as inference for one query takes up to 5 minutes. Unless we can integrate a GPU, which is impossible for GPT4All.
What I need is an AI or set of AI tools that I can use offline to scan physical documents, convert pictures of text into text documents, file them into various folders and subfolders based on content, and answer questions about the information in the collection of documents as a whole... And I need to do it for free on a $300 tablet. It's November 2023 now... I'll wait a year.
@@LiamOttley I think there was an addition to the code to enable this. It's now: from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader. In any case, I added PyPDFLoader.
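A minimal sketch of the idea behind that import line: ingestion picks a LangChain loader class per file extension. The mapping below is an illustration (loader names are the classes the comment mentions, represented as strings here), not the repo's actual code.

```python
# Hedged sketch: dispatch a document loader by file extension, as the
# PrivateGPT-style ingest step does. Loader class names come from the
# langchain.document_loaders import above; the dispatch itself is illustrative.
from pathlib import Path

LOADER_BY_EXT = {
    ".txt": "TextLoader",
    ".pdf": "PDFMinerLoader",  # or PyPDFLoader, as the commenter swapped in
    ".csv": "CSVLoader",
}

def pick_loader(path: str) -> str:
    """Return the loader class name for a file, or raise for unsupported types."""
    ext = Path(path).suffix.lower()
    try:
        return LOADER_BY_EXT[ext]
    except KeyError:
        raise ValueError(f"unsupported document type: {ext}")

print(pick_loader("docs/state_of_the_union.pdf"))  # → PDFMinerLoader
```

In the real code, the chosen loader class would be instantiated with the file path and its `.load()` output handed to the splitter/embedding step.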
Leave your questions below! 😎
📚 My Free Skool Community: bit.ly/3uRIRB3
🤝 Work With Me: www.morningside.ai/
📈 My AI Agency Accelerator: bit.ly/3wxLubP
There are already hundreds of videos on this same subject using PrivateGPT floating around YT, with different talking heads explaining it a little differently. What you and the other AI tinkerers don't fully disclose is how long ingestion takes for most users with common computers once they finally get this all set up. PrivateGPT doesn't use the GPU, only the CPU. That being said, you will need a very good CPU to process the info when ingesting files and querying information. Be prepared for long wait times. I'm sure the script will become more polished over time from the dev and/or a fork's dev. This has lots of potential, but it's definitely not ready for any real production environment without some tweaks and optimization.
thanks :)
I was waiting for a similar program called Alpaca Electron to show up with something after my query, until I found your comment hahahah 😂 I knew that, but I was just waiting for confirmation, and here you go.
Would an individual be able to make the tweaks/optimizations needed to get it ready for a real production environment, or could only the team behind privateGPT do that? If an individual could improve it, what would need to be changed?
Bro! I was literally just looking for this when I got up this morning.
💪🏼
In addition to the very informative and actionable content, I really appreciate the time-saving concision of Liam's presentations. That really separates him from the AI/GPT videos that now crowd that subject-area. (Couldn't be more concise if he really were an avatar of a LiamGPT chatbot.)
My pleasure mate, thank you for your kind words. This is the main goal with my videos, I know you’re all busy and don’t have time for nonsense!
@@LiamOttley Hi, ggml-model-q4_0.bin is not in repository, where can I download it from?
I was just thinking the opposite.. Clicked on how to install and he still waffles on, then leaps into an install without explaining about using my own IDE or something? Wut? Useless video
I need the ggml-model-q4_0.bin too. The project page has been updated and the link to it has been removed.
I received error like this: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
How does PrivateGPT read the pdfs? Does it extract the text from it?
Wow, this is just great! Have you thought about bringing imartinez (the author of the repo) onto your show to chat about it more? It would be very interesting to see what is in the works!!
Yes, I've been looking for something like this for a while. For my use case it would be awesome to create a web interface so that, let's say, coworkers or friends can access it via the internet. I thought about using Cloudflare Tunnels to connect to my local PC, but I'll need to figure out a way to connect the interface, maybe to Google Sites? We'll see. Awesome content!
Create it with a Virtual Machine. Put it in the cloud.
GPT4All
@liam Ottley -- unfortunately they updated the repo so there is no longer a link to one of the models... can you add a download link in your description? not sure if there are other changes to the file as well...
Just follow the steps in the README of the repo, and you'll be able to make it work. You no longer need two models. And there's no need to modify the code like Liam did; by default, it already reads PDFs 👌
you are our chatbot hero Liam :D! thanks for another video - very helpful!
Is it possible to use other models? Like MPT-7B 4-bit 128g?
Hello sir. Some new changes have been made to the repo now, and the model is not running; it says "model not found", even though I placed my downloaded models in a directory I created called models. Please do a video on this new PrivateGPT repo. Thanking you, and hoping to see a video on this!
Great work! Please, I can't find the link to the embedding model you used in the GitHub description. Could you give it to me?
Liam, thanks for the post. Is there any way to harness the power of the GPU in this code, especially in the ingest process?
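For what it's worth, PrivateGPT's model layer at the time was llama-cpp-python, which can be rebuilt with GPU offload. The flags below are assumptions taken from llama-cpp-python's own docs, not anything the repo guarantees; check its README for your backend before running.

```shell
# Hedged sketch: rebuild llama-cpp-python with GPU offload support.
# cuBLAS is for NVIDIA/CUDA; Apple Silicon would use -DLLAMA_METAL=on instead.
CMAKE_ARGS="-DLLAMA_CUBLAS=on"
FORCE_CMAKE=1
export CMAKE_ARGS FORCE_CMAKE
# Uncomment to actually rebuild (compiles from source, takes a while):
# pip install --force-reinstall --no-cache-dir llama-cpp-python
echo "build flags: $CMAKE_ARGS"
```

Even after a GPU-enabled rebuild, the app would still need to pass an `n_gpu_layers`-style option to the model wrapper; whether the repo exposed that at the time is uncertain, so you may have to edit the model-loading line yourself.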
5:00 is it possible to load dot sql files?
where can we download the models shown in 3:30 from? Specially the second one of 4.21 gb ? Thanks
At this moment you no longer need two models; just follow the steps in the README of the repo and it will work.
Ty, I needed this library. My implementation for the PDF part uses OCR and pytesseract to additionally extract text from images.
I'm totally a fan of your work, ty!
In the next few days, I'll be sharing a project that I've been working on for a few weeks now. It's a piece of software that runs completely offline (unless you enable web access for in-depth, current responses). But I was unhappy with the vector database solutions available, so I rolled my own into the software. It is able to chunk, vectorize, and query 5000x faster than LangChain, even on minuscule resources 🧙♂️ I'll reach out upon release so you can demo it and provide feedback. No Python required 🤗 everything is compiled standalone (Windows, Mac, Linux (incl. RPi) systems).
Man, I have no coding knowledge, but the really interesting idea is being able to privately choose our own large collections of data. If we users can access that, we'll be able to offer our private data collections to others, so knowledge will spread easily. Of course, it's all about the right code to load as much data as possible on a theme, and the good quality of the data chosen. Thanks for your willingness to make this happen.
Awesome... looking forward to seeing your demo... thanks a ton.
Post the link here once it's complete
Please let me know how to access it / $$. I need to summarize PDFs and I'm not at all a coder. I tried the above and got stuck.
interested
I can't find the embedding model in the GitHub repo. Can someone please send a link where I can download it, or other working models?
The example document, the "State of the Union" speech, is around 30 pages / 6,500 words. What is the limit on the number of pages/words, number of documents, or size of documents that can be added to the document folder to be converted to embeddings and then queried? I want to use it for research on a local PC database that is around 1 GB of various .csv and PDF files. If this isn't possible, is there another method, or can only a few of them be queried at a time? Or would it work if only a few were converted to embeddings at a time until the full database is complete? (In theory.) Thanks for the amazing tutorial; this is incredibly useful. The main issue is whether it is scalable, and if not, how to make it scalable while still keeping it localized/secure/free, etc.
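The "a few at a time until the full database is complete" idea in the question can be sketched in plain Python: walk the folder, then feed files to the ingest step in fixed-size batches so one bad or huge file doesn't stall the whole run. Names here are illustrative, not PrivateGPT's actual API.

```python
# Hedged sketch: batch a large document folder for incremental ingestion.
# Embeddings for earlier batches stay in the vector store, so the collection
# grows incrementally instead of requiring one giant pass.
from pathlib import Path
from typing import Iterator, List

def batched_documents(root: str,
                      exts=(".pdf", ".csv"),
                      batch_size: int = 25) -> Iterator[List[Path]]:
    """Yield lists of at most batch_size matching files found under root."""
    files = sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in exts)
    for i in range(0, len(files), batch_size):
        yield files[i : i + batch_size]
```

Each yielded batch would then go through the normal ingest step; whether query latency stays usable at 1 GB of source material is a separate question that depends on the embedding model and vector store, not on this batching.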
Love to know this too
Please, can anyone share the GitHub link? The given link is not this one; that's different. Please, this is VERY VERY important.
I've been messing with this for a couple days but because of other priorities only got it to run today. I trained it on 133 MB of pdf documents. I noticed that it uses the CPU, which worked fine for the ingest, but then when I asked it my first question, it took over five minutes to answer and it was suboptimal. No biggie on how good the answer was, because my source was probably not optimal, but the time it took to answer was crazy. I noticed it used the CPU for that too. Digging around I saw a reference to adding GPU to a method, but that method doesn't exist and my Python skills aren't up to fixing it. Just my thoughts on this. (Not complaining here, just pointing something out.)
What Python version should I use? I want to create an environment before installing the requirements.txt:
conda create -n test_env python= ?????
What Python version should I type in place of the question marks?
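The requirement moves with the repo, so treat 3.10 here as an assumption and confirm it against the current README before relying on it. With conda it would look like the commented lines; the stdlib venv module gets you the same isolation if conda isn't installed.

```shell
# Hedged sketch of environment creation for the repo's requirements.
# With conda (uncomment if you use conda):
#   conda create -n test_env python=3.10
#   conda activate test_env
#   pip install -r requirements.txt
# Equivalent with the stdlib venv module:
python3 -m venv test_env
. test_env/bin/activate
python --version   # confirm this meets whatever the README currently asks for
```

Either way, run `pip install -r requirements.txt` only after the environment is activated, so the build dependencies land inside it and not in your system Python.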
Nice video, thanks. Is it possible you make a video creating a nice UI instead of using the cmd window? If you show us how to do it with chatgpt would be a plus.
Thank you so much for the video.
Can we ask questions from more than one pdf file using this method?
Thanks man. I love your work.
Is there a token limit on queries here? If I’m processing the LLM locally, I wouldn’t mind using more processing power for tokens.
It is taking a lot of time to ingest the PDF doc for me. Is there a way we can make this faster?
hi, this looks great. I am getting the error below. Any ideas?
Building wheels for collected packages: llama-cpp-python, hnswlib
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
Could you, in the future, always consider installing, configuring, and running the apps in a virtual environment (venv) as a good practice? Thanks for the nice video.
for a newb can you explain why?
@@satanael387 Just so that if you transfer your working folder, you have every resource needed to keep your program running. And it is easier to share with someone.
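The sharing workflow that reply describes can be sketched in a few commands: isolate the project's packages in a venv, then pin them so someone else can rebuild the same environment from your folder. The lock filename is hypothetical.

```shell
# Hedged sketch: a per-project venv plus a pinned package list for sharing.
python3 -m venv .venv
. .venv/bin/activate
# ...pip install whatever the project needs here...
pip freeze > requirements-lock.txt   # hypothetical filename; pins exact versions
wc -l requirements-lock.txt
```

The recipient recreates the setup with their own `python3 -m venv .venv` followed by `pip install -r requirements-lock.txt`, which is the portability the commenter is after.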
I just tried to install it, I cannot. Basically, I get into some kind of closed circle between "poetry install --extras vector-stores-qdrant" and "poetry install --extras ui". No idea what it is exactly that it wants and no idea what exactly it is doing. It's not supposed to require so much stuff. I installed the privateGPT-app from github and it works. SLOWLY, but works. Thanks for the video anyway, now I finally have my dream code for parsing and questioning pdfs. Pretty awesome.
LOL! The first of the 13 requirements takes an hour to download (for me anyway). I have no idea how long the rest will take. PrivateGPT in Minutes! My extensive background in programming on an Apple IIe in 1984 is finally paying off.
Hi, can you please post a link for the 13B model? Looking over the internet, I'm getting confused between 4-bit converted models, etc. Please share links for the 13B models as well.
You are providing an interface to interact with an LLM, but are all LLMs the same?
Comodo Firewall flagged pip3.exe as a trojan while installing dependencies. What should I do?
@Liam Ottley Thanks a lot! Can you elaborate on why you don't use the llama-index loader instead? Is there any special reason? Although llama-index does use LangChain...
How long did it take to get the sample "State of the Union" file ingested?
We do not need the OpenAI API key to use this, right???
It's very cool! Tried it and made it work. My only issue is that it's really really slow :(
How can we build this with Streamlit? Basically, the user can upload their docs from the Streamlit UI and then go ahead and ask questions against the index, but like this, using PrivateGPT instead of OpenAI and running it locally... Thanks a ton!
Can the installation be done with Powershell instead of Visual Studio Code?
Mac M1 here. I got this error while installing requirements: Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
What fixed this for me is installing Visual Studio tools. Google this error and you'll find a Stack Overflow thread suggesting installing VS tools; then you will need to modify the installation by adding another module (can't remember right now).
Once this is done, restart your PC and repeat the installation.
Make sure you have the models in their directory before you run the requirements.txt installation!!!
@@shacharbard1613 "then you will need to modify the installation by adding another module (can't remember right now)." well that's not very helpful advice😆 . I googled it many times and couldn't find any good solution... 😭
Can you personalize the chatbot with this? To instruct how it should answer etc.? If yes, how?
Yo, I've been creating plugins and applied for access. I'm on the waitlist; idk how I'm supposed to test this. Any ideas?
The response time I'm getting is terribly slow. The only time the response is quicker is when I hit Ctrl+C after a few seconds. But even then, the response is not usable. Any help?
Hey Liam, it seems the Visual Studio information has changed. Is it possible to update the video to align with the new information on GitHub?
are there any technical or practical limits to document size?
Thanks Liam Ottley, that was a great tutorial. However, I think it would be great if you can find some open source alternatives for Auto-GPT also.
Is Auto-GPT not open source?
@@davidoffberlin Yes, Auto-Llama-cpp
Hi, please can you make a video on this project, the GitHub repo named "privateGPT" whose link you have shared in this video's description? How can I run it? I am not able to understand; please help me. The project you showed in the video is not at the GitHub link now. I have to create a chat-with-PDF project that works offline.
Can this software handle more complex prompts, like "mark hate speech in the document and export the result in this JSON format..." or "summarize the news article in 3 paragraphs"?
Does this work for other languages as well?
Can we host this in the cloud and make use of it?
Where is your .env file? It doesn't work on my Mac; however, your tips helped out with utilizing python3.
Great job!!! good information
.env is a hidden file in the privateGPT directory.
I am getting an error after using the pip command in Visual Studio Code. I have Python installed. What am I doing wrong?
What are the limits on length? Can I feed it an entire history textbook?
Thank you so much. I am running macOS Ventura and keep getting this error: "unsupported argument 'native' to option '-march=' / ERROR: Failed building wheel for hnswlib" from "pip install -r requirements.txt". I'd appreciate some guidance.
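That "-march=native" failure is specific to Apple Silicon clang, which rejects the native-arch flag hnswlib's build passes by default. A commonly reported workaround (an assumption; verify against hnswlib's own issue tracker) is to disable native-arch flags via an environment variable before rebuilding:

```shell
# Hedged sketch: hnswlib's setup checks this variable and skips -march=native.
export HNSWLIB_NO_NATIVE=1
# Then rerun the failing step (uncomment to actually build):
# pip install --no-cache-dir hnswlib
# pip install -r requirements.txt
echo "HNSWLIB_NO_NATIVE=$HNSWLIB_NO_NATIVE"
```

The variable only affects builds started from the same shell session, so set it in the terminal where you rerun pip.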
Please make a video about HormoziGPT :-) I have a question about this one: does this AI just contain information from Hormozi, or can you use it like ChatGPT 3.5 and 4?
Is this capable of data analysis? If I give it financial documents or historical data?
awesome man. how do you find all those gold nuggets?
Always keeping an eye on GitHub trending page for you all!
@@LiamOttley 🔥
Very informative video. I tried to implement the same and reached the stage where it asks "Enter Question". But then it throws the error "AttributeError: 'DualStreamProcessor' object has no attribute 'flush'" when I ask questions. I am using Windows Server 2012 R.
Great video... can this be put into a Streamlit app as well? The ability to upload the docs and use the chatbot in the actual Streamlit app... thoughts?
Was thinking the same. Got halfway through setting it up, and it works with Streamlit, but my poor 16 GB machine can't handle the query lol
@@joshuamacdougall5968 Ok cool... can you share the code for how you set it up with a Streamlit frontend? Would love to test it out... thanks a ton
Does it support .csv or Excel files?
I'm getting an error ingesting the CSV :(
Great tutorial! Works for me.
It's sloooooow, even on an Intel 12900, but works as advertised.
Anyone gotten it to use CUDA?
I would like to feed it with all my data but use it with the knowledge of ChatGPT 3.5 and 4 as usual. Is this somehow possible?
7:34 was a jump scare 💀
Does it support Excel files?
yes
Does it support languages other than English?
Would love to connect a code repo and ask questions about projects and my codebase.
Everyone seems excited about this PrivateGPT. I've installed it and uploaded several PDF files. Am I the only one getting incorrect responses from the bot? It doesn't seem all that phenomenal to me, to be honest.
Could you please make a tutorial on hosting the backend on AWS and querying the uploaded files through a frontend app?
😂 Sounds like you're asking people to work for you for free...
@@ngweisheng996 Ha ha ha 🤣 I was working on it and was facing some difficulties. Now I've made it work!!
Are there any GPU-accelerated forks of this AI? Being dependent only on the CPU sounds like quite a disastrous job for it, even for a server CPU.
Does this work for the Persian language?
I like web models, I like local models, but I want a hybrid model where I can flick a switch to access the web.
Hello @Liam Ottley, does it ingest and support Arabic?
What about for coding? With the whole offline privacy in mind...
I think you need to clarify your "install that" and "do this" way of speaking. That would make these a bit easier to follow along with.
Hey, can you find a repo that has more superpowers, like a web UI and a backend exposing APIs (all running locally)? Would be super cool.
@@moistweener You can have a web UI and still be local. It's just a web app.
Sounds like you're asking people to work for you for free.
Getting the below error
C:\Users\babas\AppData\Local\Temp\pip-install-ugk0x7xu\llama-cpp-python_a827470d54a74b4d954d18497e059ba1\_skbuild\win-amd64-3.9\cmake-build
Please see CMake's output for more information.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
I am gonna be a walking and talking policy handbook at work😆
Can anyone point me to an install video for privateGPT that starts from scratch? I mean a bare-bones Visual Studio Code without a C++ compiler and whatever dependencies are needed to get this working. I am getting errors on the requirements.txt, and nothing I am doing is resolving them.
I have the latest git clone from today as well.
Create a virtual environment with Anaconda and try installing the reqs again.
@@LiamOttley Thank you, I will try that
AttributeError: 'Llama' object has no attribute 'ctx'
Is anyone else facing issues with slow responses? The replies for me take around 2 to 5 minutes for a 10-page PDF :/
Does anybody know if it can run on a GPU? It seems CPU-only, and the performance is pure s***
I got a short story loaded up and the bot just hallucinates a bunch, making stuff up.
Not working.
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [308 lines of output]
Failed to build llama-cpp-python hnswlib
ERROR: Could not build wheels for llama-cpp-python, hnswlib, which is required to install pyproject.toml-based projects
I have this problem when I try to install the requirements. How do I fix it?
The error message "Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. exit code: 1" indicates that the process of building the "wheel" for the llama-cpp-python package was not successful. During the wheel building process, an error with exit code 1 occurred. Additionally, the error message mentions a dependency called hnswlib, which also failed to build.
Here are some common causes of this error message and possible solutions:
Unsatisfied dependencies: The package may require additional dependencies that are not installed or have incompatible versions. Make sure you have the required versions and try installing those dependencies manually before attempting to build the package.
Platform compatibility: The package you are trying to build may have specific platform dependencies or requirements. Check the package or project documentation to ensure the correct platform compatibility.
Incorrect environment configuration: Ensure that your development environment is properly configured, including the correct Python version, development tools such as a C++ compiler, and any required environment variables.
Internal package issues: There might be internal issues with the package itself that require fixes or updates. You can search for more information about the issue in the project's source or report it to the package's developers for further assistance.
If the error message doesn't provide enough information to resolve the problem, it's recommended to consult the official documentation, project repositories, or community resources related to the package or project you're trying to build. In some cases, it may also be helpful to reach out to the package's developers for further support.
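Before retrying the install, it can help to verify the prerequisites the list above mentions. A minimal stdlib-only sketch (the helper name and exact checks are my own, not from privateGPT):

```python
import shutil
import sys

def check_build_env():
    """Report whether common prerequisites for building llama-cpp-python
    from source appear to be present on this machine."""
    report = {}
    # llama-cpp-python builds are known to be happier on Python 3.9+
    report["python_ok"] = sys.version_info >= (3, 9)
    # CMake is required by the scikit-build-based build
    report["cmake"] = shutil.which("cmake") is not None
    # A C/C++ compiler: MSVC's cl on Windows, cc/gcc/clang elsewhere
    report["compiler"] = any(
        shutil.which(c) is not None for c in ("cl", "cc", "gcc", "clang")
    )
    return report

if __name__ == "__main__":
    for name, ok in check_build_env().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If `cmake` or `compiler` comes back `MISSING`, install CMake and a C++ build toolchain (e.g. the Visual Studio Build Tools on Windows) before rerunning `pip install`.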
I got this from ChatGPT 3.5 Turbo :) lol
ask chatgpt lol
i have the same problem
did you get it fixed ?
Same problem here
Update Python to 3.9 or above
This is the same as all the chat-with-PDF applications out there.
Although accurate, this solution is not feasible, as inference for a single query takes up to 5 minutes, unless we can integrate a GPU, which is impossible for GPT4All
I loaded PDFs into privateGPT; it took 5 minutes to even search the docs. It goes faster just looking through them in Word.
You missed the step: "Rename example.env to .env and edit the variables"
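For reference, the variables in question look roughly like this. This is a sketch based on privateGPT's example.env at the time; the model filename is only an example, so check the example.env in your own clone:

```shell
# .env  (renamed from example.env)
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
```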
Wow, that's cool
What I need is an AI or set of AI tools that I can use offline to scan physical documents and convert pictures of text into text documents, file them into various folders and subfolders based on content and answer questions about the information in the collection of documents as a whole...
And I need to do it for free on a $300 tablet.
It's November 2023 now...
I'll wait a year.
As cool as it is, for anything complex I have found it takes several minutes PER response. It has potential but definitely isn't there yet.
Not working anymore.
HOW FAST IS IT ANSWERING?
Mine takes 2-3 minutes.
Holy moly
NO GUI
Actually, it does support PDF... and CSV + TXT
No, you need to change the text loader to a pdf loader for it to work with PDF...
@@LiamOttley why have you deleted my reply?
The latest version of this application supports PDF natively
@@LiamOttley from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader
@@LiamOttley I think that there was an addition to the code to enable this.
now it's:
from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader
in any case, I added PyPDFLoader
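To illustrate the loader selection those imports imply, here is a stdlib-only sketch that maps file extensions to the loader class names from the thread. The names are kept as strings so the sketch runs without langchain installed; privateGPT's actual mapping covers more formats:

```python
import os

# Extensions mapped to the langchain loader class names mentioned above.
LOADER_FOR_EXT = {
    ".txt": "TextLoader",
    ".pdf": "PDFMinerLoader",  # or PyPDFLoader, as added above
    ".csv": "CSVLoader",
}

def pick_loader(path):
    """Return the loader class name for a document path,
    based on its file extension (case-insensitive)."""
    ext = os.path.splitext(path)[1].lower()
    try:
        return LOADER_FOR_EXT[ext]
    except KeyError:
        raise ValueError(f"Unsupported file type: {ext!r}")
```

In the real code you would map extensions to the imported classes themselves and call `loader_cls(path).load()`.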
FINALLY!
PSA to YouTubers: I'm pretty sure AI can unscramble blurred screenshots.
😂😂