If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
Want to learn RAG beyond basics? Make sure to sign up here: tally.so/r/3y9bb0
Does localGPT work on an Ubuntu machine without an NVIDIA GPU?
This is very interesting and great work. There is a Mozilla project called llamafile that makes running a local LLM possible with one simple executable file. It can also run on the CPU instead of requiring a GPU, which makes running LLMs on older hardware possible, with a solid performance improvement. It would be great if LocalGPT could work with llamafile. Thank you.
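For anyone curious, llamafile's basic usage is just marking the downloaded file executable and running it; it then serves a local chat UI. The file name below is only an example, and steps differ on Windows - check the llamafile README for specifics:

```shell
# Download a .llamafile release, make it executable, and run it.
# By default it starts a local web server (typically http://localhost:8080).
chmod +x llava-v1.5-7b-q4.llamafile
./llava-v1.5-7b-q4.llamafile
```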
There is a problem in the code. Even after I ingest new files, it still gives answers based on the last file I deleted, which makes a mess. How do I handle this? I tried different prompts, but it's not working for me.
By far, LocalGPT is the most robust RAG system out there - thank you! But I'm running it on an i9-13900/4090 GPU system. Are there any plans to make the RAG system a bit faster? It can take up to 5 minutes to come back with a response. Thanks again - very cool!
Yes, I am experimenting with using Ollama for the LLM, and I think that will increase the speed. Working on major updates, stay tuned :)
On an M2 MBP with 16 GB, Ollama + Llama 3 8B + AnythingLLM returns answers in seconds.
@@laalbujhakkar Then again, I'm having it search 300 MB of documents.
May I use Llama 3 with languages other than English?
Yes, you can. Out of the total training data, around 5 or 10 percent (I forget which) is in languages other than English, which is close to the total amount of training data for Llama 2.
Yes, you can, as pointed out. You also want to make sure to use a multilingual embedding model.
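As a sketch of what that swap looks like: localGPT keeps its embedding model name in a constants file, so switching to a multilingual model is a one-line change. The model name below is just one multilingual option from the Hugging Face hub, not a recommendation from this thread:

```python
# constants.py (sketch) - assumes localGPT reads EMBEDDING_MODEL_NAME here.
# "intfloat/multilingual-e5-large" is one example of a multilingual
# sentence-embedding model; any similar model should slot in the same way.
EMBEDDING_MODEL_NAME = "intfloat/multilingual-e5-large"
```

Note that after changing the embedding model you have to re-run ingestion, since the old vectors were produced by the old model.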
I want a specific conversational chatbot with a very small amount of data. How can I do it?
I tested the ingest and query model with the PDF edition of FINANCIAL ACCOUNTING: International Financial Reporting Standards, Eleventh Edition, using default parameters, and the answers were 80% wrong, particularly the sample journal entries from the context:
> Question:
provide example of VAT journal entries
> Answer
* The sales revenue is recorded as a debit to the "Sales Revenue" account, which increases the company's assets.
Can I use this offline? And can I save the conversation so that I can refer to it later or when creating a new conversation?
Yes. For memory, you will have to send the past conversation as context. Try looking into one of the RoPE-trained models with a longer context length.
Yeah fella
This is for offline use. localGPT has a flag, save_qa, that will enable you to save your conversations so you can load them later.
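In rough terms, that kind of QA logging boils down to appending each question/answer pair to a CSV and reading it back later. This is a sketch of the idea, not localGPT's actual implementation; the file name and columns are invented:

```python
import csv
import os

LOG_FILE = "qa_log.csv"  # hypothetical path for this example

def log_qa(question, answer, path=LOG_FILE):
    """Append one question/answer pair, writing a header on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["question", "answer"])
        writer.writerow([question, answer])

def load_qa(path=LOG_FILE):
    """Read the saved conversation back as a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

log_qa("Can I use this offline?", "Yes, everything runs locally.")
rows = load_qa()
```

For the real behavior and output location, check the save_qa handling in the localGPT repo.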
Any idea when support for Apple Silicon M3 is coming?
It already supports Apple Silicon. Make sure you correctly install the llama-cpp version; instructions are in the README.
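For reference, the usual way to get Metal (Apple GPU) acceleration with llama-cpp-python on Apple Silicon is to reinstall it with the Metal build flag. This is the generic form; the localGPT README may pin a specific version, so defer to it:

```shell
# Rebuild llama-cpp-python with Metal support enabled.
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```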
Please keep this code version available for future use. If you update the code and people cannot find the version from a video, they skip it, which I personally did with your old LocalGPT video: the old code was compatible with my GPU, but I could not clone it, since that version no longer exists, so I started watching this one instead.
Hello, thanks for the great video; you helped me a lot with this. Could you help me add Pandas and PandasAI? That would help me analyze data from Excel and/or CSV files. Thanks.
I am getting this error: "You are trying to offload the whole model to the disk."
A .exe or a GUI for Windows would be nice - something Gradio-based like Stable Diffusion, please.
Very interested in how to correctly ingest CSV files, and in the supported formats and limitations.
CSVs are tricky. You can either add the data to a database and then query it, or create text chunks out of it.
@@sauravmukherjeecom Assuming that for larger CSVs importing directly into a DB would make more sense, and for smaller files we could chunk them.
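Both routes from the exchange above can be sketched with the standard library (the table, column names, and inline CSV are invented for the example): load the CSV into SQLite for query-style access, or join each row into a text chunk for embedding.

```python
import csv
import io
import sqlite3

# Stands in for reading a real file with open("data.csv").
CSV_TEXT = "name,amount\nalice,10\nbob,20\n"
rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))

# Route 1: load into SQLite and query it directly (better for large files).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (name TEXT, amount INTEGER)")
con.executemany("INSERT INTO entries VALUES (?, ?)",
                [(r["name"], int(r["amount"])) for r in rows])
total = con.execute("SELECT SUM(amount) FROM entries").fetchone()[0]

# Route 2: turn each row into a text chunk for embedding/RAG (small files).
chunks = [", ".join(f"{k}: {v}" for k, v in r.items()) for r in rows]
```

The chunking route keeps column names attached to every value ("name: alice, amount: 10"), which tends to help the retriever match questions to rows.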
Which screen recorder do you use?
Screen.Studio
4 GB GPU, 16 GB RAM - will Llama 3 work fine?
Could you help me configure localGPT with pgvector embeddings? I'm seriously struggling.
Why use this over something like AnythingLLM?
They solve the same problem. My goal with localGPT is for it to be a framework for testing different RAG components like Lego blocks.
Hi, is there a way to contact you for a private project?
There is a link in the video description, or email me at engineerprompt at gmail.
😂 I don't understand anything... where do I start?
There is a playlist on localGPT on the channel; that will be a good starting point :)