PrivateGPT: Chat to Your PDFs Offline and for FREE in Minutes (Full Tutorial)

  • Published: 2 Jan 2025

Comments •

  • @LiamOttley
    @LiamOttley  a year ago +1

    Leave your questions below! 😎
    📚 My Free Skool Community: bit.ly/3uRIRB3
    🤝 Work With Me: www.morningside.ai/
    📈 My AI Agency Accelerator: bit.ly/3wxLubP

  • @PixelaidMix
    @PixelaidMix a year ago +30

    There are already hundreds of videos on this same subject using PrivateGPT floating around YT, with different talking heads explaining it a little differently. What you and the other AI tinkerers do not fully disclose is how long the ingestion takes for most users with common computers once they finally get this all set up. PrivateGPT doesn't use the GPU, only the CPU. That being said, you will need a very good CPU to process the info while ingesting files and querying information. Be prepared for long wait times. I'm sure the script will become more polished over time from the dev and/or from a fork dev. This has lots of potential, but it's definitely not ready for any real production environment without some tweaks and optimization.

    • @olgachanturia7788
      @olgachanturia7788 a year ago +2

      thanks :)

    • @sixihili1956
      @sixihili1956 a year ago +1

      I was waiting for a similar program called Alpaca Electron to show something after my query until I found your comment hahahah 😂 I knew that, but I was just waiting for confirmation, and here you go

    • @EvanPerez-r6b
      @EvanPerez-r6b a year ago

      Would an individual be able to make the tweaks/optimization to make it ready for a real production environment or would it be the team behind privateGPT only? If an individual could improve it, what would need to be changed?

  • @IsaiahThatcher
    @IsaiahThatcher a year ago +9

    Bro! I was literally just looking for this when I got up this morning.

  • @nickstaresinic9933
    @nickstaresinic9933 a year ago +11

    In addition to the very informative and actionable content, I really appreciate the time-saving concision of Liam's presentations. That really separates him from the AI/GPT videos that now crowd that subject-area. (Couldn't be more concise if he really were an avatar of a LiamGPT chatbot.)

    • @LiamOttley
      @LiamOttley  a year ago +3

      My pleasure mate, thank you for your kind words. This is the main goal with my videos, I know you’re all busy and don’t have time for nonsense!

    • @muradbaghirli
      @muradbaghirli a year ago

      @@LiamOttley Hi, ggml-model-q4_0.bin is not in repository, where can I download it from?

    • @bigglyguy8429
      @bigglyguy8429 a year ago

      I was just thinking the opposite.. Clicked on how to install and he still waffles on, then leaps into an install without explaining about using my own IDE or something? Wut? Useless video

    • @ComicBookPage
      @ComicBookPage a year ago

      I need the ggml-model-q4_0.bin too. The project page has been updated and the link to it has been removed.

  • @MYprdcsg
    @MYprdcsg a year ago +8

    I received an error like this: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for llama-cpp-python
    Failed to build llama-cpp-python
    ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

  • @SiddharthShukla987
    @SiddharthShukla987 7 months ago +1

    How does PrivateGPT read the pdfs? Does it extract the text from it?

  • @nolan6650
    @nolan6650 a year ago +3

    Wow, this is just great! Have you thought about bringing imartinez (the author of the repo) onto your show to chat about it more? It would be very interesting to see what is in the works!!

  • @carlosterrazas5091
    @carlosterrazas5091 a year ago +10

    Yes, I've been looking for something like this for a while. For my use case it would be awesome to be able to create a web interface so, let's say, coworkers or friends can access it via the internet. I thought about using Cloudflare Tunnels to connect to my local PC, but I'll need to figure out a way to connect the interface to Google Sites maybe? We'll see. Awesome content!

    • @dikduk
      @dikduk a year ago +4

      Create it with a Virtual Machine. Put it in the cloud.

    • @russ3llcoolio
      @russ3llcoolio a year ago

      Gpt4all

  • @stevenjacobk8850
    @stevenjacobk8850 a year ago +4

    @liam Ottley -- unfortunately they updated the repo so there is no longer a link to one of the models... can you add a download link in your description? not sure if there are other changes to the file as well...

    • @diegocot417
      @diegocot417 a year ago

      Just follow the steps in the README of the repo, and you'll be able to make it work. You no longer need two models. And there's no need to modify the code like Liam did; by default, it already reads PDFs 👌

  • @KatarzynaHewelt
    @KatarzynaHewelt a year ago

    you are our chatbot hero Liam :D! thanks for another video - very helpful!

  • @artistaartificial5635
    @artistaartificial5635 a year ago +2

    Is it possible to use other models? Like MPT 7B 4-bit 128g

  • @vishnuvardhanvaka
    @vishnuvardhanvaka a year ago +3

    Hello sir. Now some new changes have been made to the repo and the model is not running; it says the model was not found, but I placed my downloaded models in a created models directory. Please do a video on this new repo of PrivateGPT. Thanking you and hoping to see a video on this!

  • @MohamedElGhazi-ek6vp
    @MohamedElGhazi-ek6vp a year ago +1

    Great work. I can't find the link to the embedding model you used in the git description; could you give it to me?

  • @juanhartwig8620
    @juanhartwig8620 a year ago +2

    Liam, thanks for the post. Is there any way to harness the power of the GPU in this code? Especially in the ingest process?

  • @latlov
    @latlov a year ago

    5:00 Is it possible to load .sql files?

  • @pablosoriano8502
    @pablosoriano8502 a year ago

    Where can we download the models shown at 3:30 from? Especially the second one, the 4.21 GB one? Thanks

    • @diegocot417
      @diegocot417 a year ago

      at this moment you no longer need two models, just follow the steps of the readme of the repo and it will work

  • @jyorko721
    @jyorko721 a year ago

    Ty, I needed this library. My implementation for the PDF part uses OCR and pytesseract to additionally extract text from images.
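
    For anyone curious what that OCR step might look like, here is a minimal sketch, assuming pdf2image and pytesseract are installed (plus the Poppler and Tesseract system packages); this is not part of the privateGPT repo itself, and the file name is hypothetical:

        # Minimal OCR sketch: render each PDF page to an image, then run Tesseract on it.
        from pdf2image import convert_from_path
        import pytesseract

        def ocr_pdf(path: str) -> str:
            pages = convert_from_path(path, dpi=300)  # one PIL image per page
            return "\n".join(pytesseract.image_to_string(p) for p in pages)

        text = ocr_pdf("source_documents/scanned_report.pdf")  # hypothetical file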

  • @adriansrfr
    @adriansrfr a year ago

    I'm totally a fan of your work, ty!

  • @mcombatti
    @mcombatti a year ago +31

    In the next few days, I'll be sharing a project that I've been working on for a few weeks now. It's a piece of software that runs completely offline (unless you enable web access for in-depth current responses). But I was unhappy with the vector database solutions available, so I rolled my own into the software. It is able to chunk, vectorize, and query 5000x faster than langchain, even on minuscule resources 🧙‍♂️ I'll reach out upon release so you can demo and provide feedback. No Python required 🤗 everything is standalone compiled. (Windows, Mac, Linux (incl. RPi) systems)

    • @airebreton
      @airebreton a year ago +1

      Man, I have no idea about coding, but the really interesting idea is being able to privately choose our own large collection of data. If we users can access that, we will be able to offer our private collections of data to others, so knowledge will spread easily. Of course, it's all about the correct code being able to load as much data as possible about a theme, and the good quality of the data chosen. Thanks for your willingness to make this happen.

    • @ChuckWilliamsTechnology
      @ChuckWilliamsTechnology a year ago +1

      Awesome... looking forward to seeing your demo... thanks a ton.

    • @1242elena
      @1242elena a year ago +2

      Post the link here once it's complete

    • @spillledcarryout
      @spillledcarryout a year ago +1

      Plz let me know how to access it / $$. I need to summarize PDFs and I'm not at all a coder. I tried the above and got stuck.

    • @StijnSmits
      @StijnSmits a year ago +1

      interested

  • @madpeco5811
    @madpeco5811 a year ago +1

    I can't find the embedding model on the GitHub repo. Can someone please send a link where I can download it, or other working models?

  • @1242elena
    @1242elena a year ago +4

    The example document, the "State of the Union" speech, is around 30 pages / 6,500 words. What is the limit on the number of pages/words, number of documents, or size of documents that can be added to the document folder to be converted to embeddings and then queried? I want to use it for research on a local PC database that is around 1 GB of various .csv and PDF files. If this isn't possible, is there another method, or can only a few of them be queried at a time? Or would it work if only a few were converted to embeddings at a time until the full database was complete (in theory)? Thanks for the amazing tutorial; this is incredibly useful. The main issue is whether it is scalable and, if not, how to make it scalable while still being localized/secure/free etc.
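
    On the scalability question above, one common pattern is to ingest files in batches so the embeddings accumulate in one persisted store. A rough sketch, assuming the LangChain + Chroma + sentence-transformers stack privateGPT builds on (paths, chunk sizes, and file names here are illustrative only):

        # Rough sketch: ingest documents in batches into one persisted Chroma store.
        from langchain.document_loaders import PDFMinerLoader
        from langchain.text_splitter import RecursiveCharacterTextSplitter
        from langchain.embeddings import HuggingFaceEmbeddings
        from langchain.vectorstores import Chroma

        embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
        db = Chroma(persist_directory="db", embedding_function=embeddings)
        splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)

        for path in ["docs/report1.pdf", "docs/report2.pdf"]:   # hypothetical files
            chunks = splitter.split_documents(PDFMinerLoader(path).load())
            db.add_documents(chunks)   # embeds and stores this batch

        db.persist()   # flush to disk so later runs can reuse the same index

    Query time then depends mostly on the LLM's context window, since only the top-matching chunks are retrieved, not the whole collection.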

  • @akki_the_tecki
    @akki_the_tecki a year ago +1

    Please, can anyone share the GitHub link? The given link is not this, that's different. Please, this is VERY VERY important

  • @johnbrewer1430
    @johnbrewer1430 a year ago +4

    I've been messing with this for a couple days but because of other priorities only got it to run today. I trained it on 133 MB of pdf documents. I noticed that it uses the CPU, which worked fine for the ingest, but then when I asked it my first question, it took over five minutes to answer and it was suboptimal. No biggie on how good the answer was, because my source was probably not optimal, but the time it took to answer was crazy. I noticed it used the CPU for that too. Digging around I saw a reference to adding GPU to a method, but that method doesn't exist and my Python skills aren't up to fixing it. Just my thoughts on this. (Not complaining here, just pointing something out.)
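
    On the GPU point above, one possible tweak (a sketch, not the stock privateGPT code) is to switch to the LlamaCpp model type and offload layers to the GPU, assuming llama-cpp-python was built with GPU support (e.g. cuBLAS or Metal) and that the LangChain LlamaCpp wrapper in use exposes n_gpu_layers; the model path below is just the one mentioned in the video:

        # Sketch only: offload some transformer layers to the GPU via llama-cpp-python.
        # Assumes a GPU-enabled build of llama-cpp-python and a GGML llama model,
        # rather than the default GPT4All-J model, which stays on the CPU.
        from langchain.llms import LlamaCpp

        llm = LlamaCpp(
            model_path="models/ggml-model-q4_0.bin",  # path from your .env / models dir
            n_ctx=2048,
            n_gpu_layers=32,   # how many layers to push onto the GPU; tune for your VRAM
            verbose=False,
        )
        print(llm("Summarize the State of the Union speech in one sentence."))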

  • @sarazayan6655
    @sarazayan6655 a year ago

    What Python version should I use? I want to create an environment before installing the requirements.txt:
    conda create -n test_env python= ?????
    What Python version should I type in place of the question marks?

  • @jorgerios4091
    @jorgerios4091 a year ago +2

    Nice video, thanks. Is it possible for you to make a video creating a nice UI instead of using the cmd window? If you show us how to do it with ChatGPT, that would be a plus.

  • @arsalanniroomandi3109
    @arsalanniroomandi3109 a year ago

    Thank you so much for the video.
    Can we ask questions from more than one pdf file using this method?

  • @stagrei8233
    @stagrei8233 a year ago

    Thanks man. I love your work.

  • @blackhat965
    @blackhat965 a year ago +1

    Is there a token limit on queries here? If I’m processing the LLM locally, I wouldn’t mind using more processing power for tokens.

  • @IAM_Timmy1t
    @IAM_Timmy1t a year ago +1

    It is taking a lot of time to ingest the PDF doc for me. Is there a way we can make this faster?

  • @GiovaDuarte
    @GiovaDuarte a year ago +1

    hi, this looks great. I am getting the error below. Any ideas?
    Building wheels for collected packages: llama-cpp-python, hnswlib
    Building wheel for llama-cpp-python (pyproject.toml) ... error
    error: subprocess-exited-with-error
    × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
    │ exit code: 1

  • @LibertyRecordsFree
    @LibertyRecordsFree a year ago +6

    Could you, in the future, always consider installing, configuring and running the apps in a virtual environment (venv) as a good practice? Thanks for the nice video.

    • @satanael387
      @satanael387 a year ago

      for a newb can you explain why?

    • @LibertyRecordsFree
      @LibertyRecordsFree a year ago

      @@satanael387 Just so that if you transfer your working folder, you have every resource needed to have your program running. And it is easier to share with someone.
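
      For anyone new to this, a minimal way to set that up, shown as a sketch with Python's built-in venv module (the equivalent shell commands are in the comments; nothing here is specific to privateGPT):

          # Sketch: create an isolated environment before installing requirements.
          # Shell equivalent: `python -m venv .venv`, activate it, then
          # `pip install -r requirements.txt` inside the activated environment.
          import venv

          venv.create(".venv", with_pip=True)   # creates ./.venv with its own pip
          # Activate with `source .venv/bin/activate` (macOS/Linux) or
          # `.venv\Scripts\activate` (Windows), then install the project requirements.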

  • @denijane89
    @denijane89 8 months ago

    I just tried to install it, I cannot. Basically, I get into some kind of closed circle between "poetry install --extras vector-stores-qdrant" and "poetry install --extras ui". No idea what it is exactly that it wants and no idea what exactly it is doing. It's not supposed to require so much stuff. I installed the privateGPT-app from github and it works. SLOWLY, but works. Thanks for the video anyway, now I finally have my dream code for parsing and questioning pdfs. Pretty awesome.

  • @ares0wept
    @ares0wept a year ago

    LOL! The first of the 13 requirements takes an hour to download (for me anyway). I have no idea how long the rest will take. PrivateGPT in Minutes! My extensive background in programming on an Apple IIe in 1984 is finally paying off.

  • @prateeklath
    @prateeklath a year ago +1

    Hi, can you please post a link for the 13B model? Looking over the internet, I am getting confused between the 4-bit converted models etc. Please share links for the 13B models as well.

  • @pdas6145
    @pdas6145 a year ago

    You are providing an interface to interact with an LLM, but are all LLMs the same?

  • @AgainstTheHype
    @AgainstTheHype a year ago

    Comodo Firewall has flagged pip3.exe as a trojan while installing dependencies. What should I do?

  • @allgoodstufftoknow7000
    @allgoodstufftoknow7000 a year ago

    @Liam Ottley Thanks a lot! Can you elaborate on why you don't use a llama-index loader instead? Is there any special reason? Although llama-index uses langchain...

  • @ChuckWilliamsTechnology
    @ChuckWilliamsTechnology a year ago

    How long did it take to get the sample "state of the union" file ingested...

  • @rryann088
    @rryann088 a year ago

    We do not need the OpenAI API key to use this, right???

  • @prateekkeshari
    @prateekkeshari a year ago +1

    It's very cool! Tried it and made it work. My only issue is that it's really really slow :(

  • @ChuckWilliamsTechnology
    @ChuckWilliamsTechnology a year ago

    How can we build this with Streamlit... basically the user can upload their docs from the Streamlit UI... and then go ahead and ask questions against the index... but like this, using PrivateGPT instead of OpenAI and running it locally... thanks a ton...
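
    One way that could look is sketched below. Note that ingest_files and answer_question are hypothetical wrappers you would write around privateGPT's ingest.py and privateGPT.py logic; they are not functions from the repo:

        # Rough sketch of a Streamlit front end for a local doc-QA backend.
        import streamlit as st

        st.title("Chat with your documents (local)")

        uploaded = st.file_uploader("Upload PDFs", type=["pdf"], accept_multiple_files=True)
        if uploaded:
            paths = []
            for f in uploaded:
                path = f"source_documents/{f.name}"
                with open(path, "wb") as out:
                    out.write(f.getbuffer())
                paths.append(path)
            ingest_files(paths)        # hypothetical: chunk + embed into the local store
            st.success(f"Ingested {len(paths)} file(s)")

        question = st.text_input("Ask a question about your documents")
        if question:
            st.write(answer_question(question))   # hypothetical: local retrieval-QA call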

  • @_Sepherial
    @_Sepherial a year ago

    Can the installation be done with Powershell instead of Visual Studio Code?

  • @gileneusz
    @gileneusz a year ago +1

    Mac M1 here. I got this error while installing requirements: Failed to build llama-cpp-python
    ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

    • @shacharbard1613
      @shacharbard1613 a year ago +1

      What fixed this for me was installing Visual Studio tools. Google this error and you'll find a Stack Overflow thread suggesting installing VS tools. Then you will need to modify the installation by adding another module (can't remember which right now).
      Once this is done, restart your PC and repeat the installation.
      Make sure you have the models in their directory before you run the requirements.txt installation!!!

    • @gileneusz
      @gileneusz a year ago +1

      @@shacharbard1613 "then you will need to modify the installation by adding another module (can't remember right now)." well that's not very helpful advice😆 . I googled it many times and couldn't find any good solution... 😭

  • @tonaltti
    @tonaltti a year ago

    Can you personalize the chatbot with this? To instruct how it should answer etc.? If yes, how?
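
    Roughly, yes: the answering style lives in the prompt handed to the QA chain. A sketch of what that might look like with the LangChain pieces privateGPT builds on; llm and retriever are assumed to be the objects privateGPT already creates (elided here), and the template wording is just an example:

        # Sketch: customize how the bot answers by supplying your own prompt template.
        from langchain.prompts import PromptTemplate
        from langchain.chains import RetrievalQA

        template = """Use only the context below to answer. Reply in three short bullet
        points and say "I don't know" if the context is insufficient.

        Context: {context}
        Question: {question}
        Answer:"""

        prompt = PromptTemplate(template=template, input_variables=["context", "question"])

        qa = RetrievalQA.from_chain_type(
            llm=llm,               # assumed: the local GPT4All/LlamaCpp model
            chain_type="stuff",
            retriever=retriever,   # assumed: the Chroma retriever
            chain_type_kwargs={"prompt": prompt},
        )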

  • @DubaiLife786
    @DubaiLife786 a year ago

    Yo, I've been creating plugins and applied for access. I'm on the waitlist; idk how I'm supposed to test this. Any ideas?

  • @fifeoluwabunmi5959
    @fifeoluwabunmi5959 9 months ago

    The response time I'm getting is terribly slow. The only time the response is quicker is when I hit Ctrl+C after a few seconds. But even then, the response is not usable. Any help?

  • @reditec7248
    @reditec7248 a year ago

    Hey Liam, it seems the Visual Studio information has changed. Is it possible to update the video to align with the new information on GitHub?

  • @buddyowens839
    @buddyowens839 a year ago

    are there any technical or practical limits to document size?

  • @lastemperor1347
    @lastemperor1347 a year ago +7

    Thanks Liam Ottley, that was a great tutorial. However, I think it would be great if you can find some open source alternatives for Auto-GPT also.

  • @RpaMedatron
    @RpaMedatron 8 months ago

    Hi, please can you make a video on this project,
    the one at the GitHub link named "privateGPT" that you shared in this video's description?
    How can I run it? I am not able to understand; please help me.
    The project you showed in the video is not at that GitHub link now.
    I have to create a project to chat with PDFs offline.

  • @gorgik
    @gorgik a year ago

    Can this software ask more complex questions like "mark hate speech in document and export the result in this json format...", or "summarize the news article in 3 paragraphs"?

  • @dokotomonaku
    @dokotomonaku a year ago

    Does this work for other languages as well?

  • @SuchithGunarathna
    @SuchithGunarathna a year ago

    Can we host this in the cloud and make use of it?

  • @shonnspencer1162
    @shonnspencer1162 a year ago

    Where is your .env file? It doesn't work on my Mac; however, your tips helped out with utilizing python3.
    Great job!!! Good information

    • @dubdelay
      @dubdelay a year ago

      .env is a hidden file in the privateGPT directory.

  • @ChristiaanRoest79
    @ChristiaanRoest79 a year ago

    I am getting an error after using the pip command in Visual Studio Code. I have Python installed. What am I doing wrong?

  • @acain6803
    @acain6803 a year ago

    What are the limits on length? Can I feed it an entire history textbook?

  • @eahamdan
    @eahamdan a year ago

    Thank you so much - I am running macOS Ventura and keep getting this error "unsupported argument 'native' to option '-march= / ERROR: Failed building wheel for hnswlib" from "pip install -r requirements.txt" - I appreciate some guidance

  • @bene88597
    @bene88597 a year ago

    Please make a video about HormoziGPT :-) I have a question about this one: does this AI just contain information from Hormozi, or can you use it like ChatGPT 3.5 and 4?

  • @tt703
    @tt703 a year ago

    Is this capable of data analysis? If I give it financial documents or historical data?

  • @shacharbard1613
    @shacharbard1613 a year ago +5

    awesome man. how do you find all those gold nuggets?

    • @LiamOttley
      @LiamOttley  a year ago +8

      Always keeping an eye on GitHub trending page for you all!

    • @shacharbard1613
      @shacharbard1613 a year ago +1

      @@LiamOttley 🔥

  • @advashishta680
    @advashishta680 a year ago

    Very informative video. I tried to implement the same and have reached the stage where it asks to "Enter Question", but then it throws the error "AttributeError: 'DualStreamProcessor' object has no attribute 'flush'" when I ask questions. I am using Windows Server 2012 R,

  • @ChuckWilliamsTechnology
    @ChuckWilliamsTechnology a year ago

    great video...can this be put into a Streamlit app as well...the ability to upload the docs and use the chat bot in the actual Streamlit app....thoughts?

    • @joshuamacdougall5968
      @joshuamacdougall5968 a year ago +1

      Was thinking the same, got halfway through setting it up, and it works with Streamlit, but my poor 16 GB machine can't handle the query lol

    • @ChuckWilliamsTechnology
      @ChuckWilliamsTechnology a year ago

      @@joshuamacdougall5968 ok cool..can you share the code how you set it up with a Streamlit frontend...would love to test it out....thanks a ton

  • @3rdwalk296
    @3rdwalk296 a year ago

    Does it support .csv or Excel files?
    I'm getting an error ingesting the CSV :(

  • @PDXdjn
    @PDXdjn a year ago +1

    Great tutorial! Works for me.
    It's sloooooow, even on an Intel 12900, but works as advertised.
    Anyone gotten it to use CUDA?

  • @bene88597
    @bene88597 a year ago

    I would like to feed it all my data but use it with the knowledge of ChatGPT 3.5 and 4 as usual. Is this somehow possible?

  • @eanternet
    @eanternet a year ago +2

    7:34 was a jump scare 💀

  • @reljic88
    @reljic88 a year ago +1

    Does it support Excel files?

  • @famousfigures1999
    @famousfigures1999 a year ago

    Does it support languages other than English?

  • @Chasingaxl
    @Chasingaxl a year ago

    Would love to connect a code repo and ask questions about projects and my codebase

  • @vpad201
    @vpad201 a year ago

    Everyone seems excited about this PrivateGPT. I've installed it and uploaded several PDF files. Am I the only one getting incorrect responses from the bot? It doesn't seem all that phenomenal to me, to be honest.

  • @mapkbalaji
    @mapkbalaji a year ago

    Could you please make a tutorial on hosting the backend on AWS, and querying on the uploaded files through frontend app?

    • @ngweisheng996
      @ngweisheng996 a year ago

      😂 Sounds like you're asking people to work for you for free...

    • @mapkbalaji
      @mapkbalaji a year ago

      @@ngweisheng996 Ha ha ha 🤣 I was working on it, and was facing some difficulties. Now made it work!!

  • @swiftypopty1102
    @swiftypopty1102 a year ago

    Are there any GPU-accelerated forks of this AI? Being dependent only on the CPU sounds like quite a disastrous job for it, even for a server CPU.

  • @khosravangroup
    @khosravangroup a year ago

    Does this work for the Persian language?

  • @prodbykaioken
    @prodbykaioken a year ago

    I like web models, I like local models, but I want a hybrid model where I can flick a switch to access the web

  • @nafang-x3u
    @nafang-x3u a year ago

    Hello @Liam Ottley, does it ingest and support Arabic?

  • @trilogen
    @trilogen a year ago

    What about for coding? With the whole offline privacy in mind...

  • @coreydagod9317
    @coreydagod9317 a year ago +1

    I think you need to clarify your "install that" and "do this" way of speaking. That would make these a bit easier to follow along with.

  • @joelsamuel9771
    @joelsamuel9771 a year ago

    Hey, can you find a repo which has more superpowers, like a web UI and a backend exposing APIs (all running locally)? Would be super cool

    • @sysadmin9396
      @sysadmin9396 a year ago

      @@moistweener you can have a web UI and still be local. It's just a web app

    • @ngweisheng996
      @ngweisheng996 a year ago

      Sounds like you're asking people to work for you for free

  • @babasahebpinjar6290
    @babasahebpinjar6290 a year ago +1

    Getting the below error
    C:\Users\babas\AppData\Local\Temp\pip-install-ugk0x7xu\llama-cpp-python_a827470d54a74b4d954d18497e059ba1\_skbuild\win-amd64-3.9\cmake-build
    Please see CMake's output for more information.
    [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for llama-cpp-python
    Failed to build llama-cpp-python
    ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

  • @wayne8797
    @wayne8797 a year ago

    I am gonna be a walking and talking policy handbook at work😆

  • @wellendowd5354
    @wellendowd5354 a year ago

    Can anyone point me to an install video for privateGPT that starts from scratch? I mean a bare-bones Visual Studio Code without a C++ compiler and whatever dependencies are needed to get this working. I am getting errors on the requirements.txt and nothing I am doing is resolving them.
    I have the latest git clone from today as well.

    • @LiamOttley
      @LiamOttley  a year ago

      Create a virtual environment with Anaconda and try installing the reqs again

    • @wellendowd5354
      @wellendowd5354 a year ago

      @@LiamOttley Thank you, I will try that

  • @Zaheer-r4k
    @Zaheer-r4k a year ago

    AttributeError: 'Llama' object has no attribute 'ctx'

  • @sijanshrestha8665
    @sijanshrestha8665 a year ago

    Is anyone else facing issues with slow responses? The replies for me take around 2 to 5 minutes for a 10-page PDF :/

  • @stavsap
    @stavsap a year ago +1

    Does anybody know if it can run on a GPU? It seems to be CPU only; the performance is pure s***

  • @buddyowens839
    @buddyowens839 a year ago

    I got a short story loaded up and the bot just hallucinates a bunch, making up stuff.

  • @rajatgupta293
    @rajatgupta293 a year ago +1

    Not working.

  • @ItaloGeovani
    @ItaloGeovani a year ago +2

    × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
    │ exit code: 1
    ╰─> [308 lines of output]
    Failed to build llama-cpp-python hnswlib
    ERROR: Could not build wheels for llama-cpp-python, hnswlib, which is required to install pyproject.toml-based projects
    I have this problem when I try to install the requirements. How do I fix it?

    • @unitedcococompany8801
      @unitedcococompany8801 a year ago +1

      The error message "Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. exit code: 1" indicates that the process of building the "wheel" for the llama-cpp-python package was not successful. During the wheel building process, an error with exit code 1 occurred. Additionally, the error message mentions a dependency called hnswlib, which also failed to build.
      Here are some common causes of this error message and possible solutions:
      Unsatisfied dependencies: The package dependencies may require additional dependencies that are not installed or have incompatible versions. Make sure you have the required versions and try installing those dependencies manually before attempting to build the package.
      Platform compatibility: The package you are trying to build may have specific platform dependencies or requirements. Check the package or project documentation to ensure the correct platform compatibility.
      Incorrect environment configuration: Ensure that your development environment is properly configured, including the correct Python version, development tools such as a C++ compiler, and any required environment variables.
      Internal package issues: There might be internal issues with the package itself that require fixes or updates. You can search for more information about the issue in the project's source or report it to the package's developers for further assistance.
      If the error message doesn't provide enough information to resolve the problem, it's recommended to consult the official documentation, project repositories, or community resources related to the package or project you're trying to build. In some cases, it may also be helpful to reach out to the package's developers for further support.
      I got this from ChatGPT 3.5 Turbo :) lol

    • @paulsernine5302
      @paulsernine5302 a year ago

      ask chatgpt lol

    • @ryanjames3907
      @ryanjames3907 a year ago

      I have the same problem.
      Did you get it fixed?

    • @andrejsshewchenko7190
      @andrejsshewchenko7190 a year ago

      Same problem here

    • @relaxed.stories
      @relaxed.stories a year ago

      Update Python to 3.9 or above

  • @chiewzhewei3821
    @chiewzhewei3821 a year ago

    This is the same as all the available chat-with-PDF applications out there.
    Although accurate, this solution is not feasible, as the inference process for one query takes up to 5 minutes. Unless we can integrate with a GPU, which is impossible for GPT4All.

  • @swedishsteelgaming3353
    @swedishsteelgaming3353 a year ago

    I loaded PDFs into PrivateGPT, and it took 5 minutes for it to even search the docs. It goes faster just looking through them in Word.

  • @colinyang1998
    @colinyang1998 a year ago

    You missed the step to "Rename example.env to .env and edit the variables"

  • @unitedcococompany8801
    @unitedcococompany8801 a year ago

    Wow, that's cool

  • @Anon-xd3cf
    @Anon-xd3cf a year ago

    What I need is an AI or set of AI tools that I can use offline to scan physical documents and convert pictures of text into text documents, file them into various folders and subfolders based on content, and answer questions about the information in the collection of documents as a whole...
    And I need to do it for free on a $300 tablet.
    It's November 2023 now...
    I'll wait a year.

  • @keccakec
    @keccakec a year ago

    As cool as it is, for anything complex I have found it takes several minutes PER response. It has potential but definitely isn't there yet.

  • @21tribes46
    @21tribes46 a year ago

    Not working anymore.

  • @-GRXNDSCOPER-
    @-GRXNDSCOPER- a year ago

    HOW FAST IS IT ANSWERING

    • @PDXdjn
      @PDXdjn a year ago

      Mine takes 2-3 minutes.

  • @joewright4136
    @joewright4136 a year ago

    Holy moly

  • @punchbowldeals
    @punchbowldeals a year ago

    NO GUI

  • @managedworks-totalproperty5900
    @managedworks-totalproperty5900 a year ago +2

    Actually, it does support PDF... and CSV + TXT

    • @LiamOttley
      @LiamOttley  a year ago

      No, you need to change the text loader to a pdf loader for it to work with PDF...

    • @managedworks-totalproperty5900
      @managedworks-totalproperty5900 a year ago

      @@LiamOttley why have you deleted my reply?

    • @managedworks-totalproperty5900
      @managedworks-totalproperty5900 a year ago +2

      The latest version of this application supports PDF native

    • @managedworks-totalproperty5900
      @managedworks-totalproperty5900 a year ago +1

      @@LiamOttley from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader

    • @shacharbard1613
      @shacharbard1613 a year ago +1

      @@LiamOttley I think that there was an addition to the code to enable this.
      now it's:
      from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader
      in any case, I added PyPDFLoader
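
      For older checkouts that only had TextLoader wired up, the swap is just a loader change in ingest.py, along the lines of this sketch (the helper name and exact structure are illustrative; newer versions of the repo already map file extensions to loaders):

          # Sketch of the loader swap for older versions that only handled .txt files.
          # PyPDFLoader needs the pypdf package installed.
          from langchain.document_loaders import PyPDFLoader, TextLoader

          def load_document(path: str):
              if path.lower().endswith(".pdf"):
                  return PyPDFLoader(path).load()   # one Document per page
              return TextLoader(path, encoding="utf8").load()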

  • @handler007
    @handler007 a year ago

    FINALLY!

  • @chenlim2165
    @chenlim2165 a year ago +2

    PSA to YouTubers: I'm pretty sure AI can unscramble blurred screenshots.