How to Use Llama 3 with PandasAI and Ollama Locally
- Published: May 2, 2024
- Today, we'll cover how to perform data analysis and visualization with local Meta Llama 3 using Pandas AI and Ollama for free. Happy learning.
▶ Subscribe: bit.ly/subscribe-tirendazai
▶ Join my channel: bit.ly/join-tirendazai
00:01 Introduction
01:32 Setup
03:02 Initialize the model
05:15 Initialize the app
08:10 Build the app
09:18 Inference
11:16 Data visualization
RELATED VIDEOS:
▶ PandasAI Tutorials: bit.ly/pandasai
▶ Ollama Tutorials: bit.ly/ollama-tutorials
▶ LangChain Tutorials: bit.ly/langchain-tutorials
▶ Generative AI for DS: bit.ly/genai-for-data-science
▶ HuggingFace Tutorials: bit.ly/hugging-face-tutorials
▶ LLMs Tutorials: bit.ly/llm-tutorials
FOLLOW ME:
▶ Medium: / tirendazacademy
▶ X: / tirendazacademy
▶ LinkedIn: / tirendaz-academy
Don't forget to subscribe and turn on notifications so you don't miss the latest videos.
▶ Project files: github.com/TirendazAcademy/Pa...
Hi, I am Tirendaz, PhD. I create content on generative AI & data science. My goal is to make the latest technologies understandable for everyone.
#ai #generativeai #datascience
Holy crap! My mind is absolutely RACING with proof of concept projects I can deliver to our Finance team with this. Thank you so much for making this!
My pleasure 😊
What are you thinking about? I'm a college student and I want to understand more use cases.
dude, please do another video on this with more things you can do, or maybe explaining further, this is amazing.
This channel has not been discovered yet, thank you for the up-to-date and practical videos!
Thanks 🙏
Great, great video! I just got a new subscriber! Congrats and regards from Brazil!
Thanks 🙏
An undiscovered gem: the voice and the instruction are pure gold.
Thanks 🙏
Unbelievable, never knew about Pandas AI. THANKS VERY MUCH 🎉🎉🎉
Thanks 🙏
Excellent video and great methods!!
Glad you liked it!
Very precious. Keep it up.
Thanks 🙏
Please make a video about an agent that performs actions using Llama or LangChain. For example, given the prompt "bring the red box", Llama 3 generates the plan and the agent performs the action.
Thanks
Thanks!
You're welcome 🙏
Excellent video.
which cpu are you using?
Thanks! My system is AMD Ryzen 5 7500F, 64GB RAM and 4070 TI Super graphics card with 16GB VRAM
Good work
Thanks 🙏
Such a great video!
Thank you 🤗
@@TirendazAI How about PDF files? Can you help with this?
Thank you for the tutorial. I have a question: do I need to apply for an API key before using PandasAI?
PandasAI is open source and free. If you are using an open-source model, you do not need an API key.
I wanted to know: can I load the model directly from Hugging Face? Also, I have stored the model and tokenizer using save_pretrained. How can I use these?
To load the model from HuggingFace, you can use langchain. Check this link: python.langchain.com/v0.1/docs/integrations/platforms/huggingface/
Great! Thanks! It works 😁 There was an error in Python!
You're welcome!
Thanks for your video.
I've replicated your entire solution as a starting point, but I keep getting errors from the LLM when trying the exact same queries, same dataset, same everything as in your video. I have an RTX 4070 12GB and tried multiple Llama and Dolphin-Llama models. It seems that every time we ask the LLM to write code to create a histogram or pie chart, it produces an error. Can you help?
Here is an example:
Query :
create a heatmap of the numerical values
Result:
Unfortunately, I was not able to answer your question, because of the following error:
'list' object has no attribute 'select_dtypes'
When a prompt does not work, try rephrasing it. For example: "Plot a heatmap of the numerical values".
Congratulations, brother. I could tell right away from your accent. Greetings. I subscribed 🎉
Thanks 🙏
Thanks for this video. How can I integrate the Groq API to go faster?
You can use the langchain_groq library or Groq's OpenAI compatibility.
I showed how to use pandasai with groq api in this video:
ruclips.net/video/C6R9JLHZDH0/видео.html
I have 64 GB RAM and 8 GB VRAM, and I want to run Llama 3 70B, but it doesn't fit. How can I run it on system RAM (the 64 GB) in Python? Can you make a video about that?
I also have 64 GB RAM, and it worked for me. My system used about 58 GB of RAM for llama3:70b. I'll show the RAM usage if I make a video with llama3:70b.
Is it possible to expand the limit per file? My CSV files are larger than 1 GB.
This is possible, but you need to use a larger model, such as llama3:70b instead of llama3:8b.
I don't know how pandas works, so sorry for the dumb question. Is the entire CSV processed by the LLM, meaning a large dataset will be slow or even too big? Or is the calculation/processing of the data all done in pandas, meaning the LLM only creates formulas for pandas?
Mostly the latter: PandasAI sends the LLM your question together with the dataframe's metadata (column names and a few sample rows), the LLM generates pandas code, and PandasAI runs that code locally on the full dataset. If you have a small dataset, you may get a faster response.
great tutorial!
I was playing around with this dataset and I get strange errors for questions like 'how many First class female survived?'
-
Unfortunately, I was not able to answer your question, because of the following error:
'list' object has no attribute 'loc'
or
list indices must be integers or slices, not str
or
invalid syntax (, line 3)
Can anyone reproduce this and explain why it is happening?
Sometimes, when the LLM does not understand the prompt, it generates invalid code and you get errors like these. Try rephrasing the prompt or using a different one.
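For reference, the kind of pandas code the LLM is expected to generate for that question looks like the sketch below (a hypothetical mini-dataset standing in for the Titanic CSV). Errors like `'list' object has no attribute 'loc'` mean the model returned a list or malformed code instead of operating on the dataframe:

```python
import pandas as pd

# Tiny stand-in for the Titanic dataset (hypothetical values)
df = pd.DataFrame({
    "Pclass":   [1, 1, 2, 3, 1],
    "Sex":      ["female", "male", "female", "female", "female"],
    "Survived": [1, 0, 1, 0, 1],
})

# "How many first-class female passengers survived?"
mask = (df["Pclass"] == 1) & (df["Sex"] == "female") & (df["Survived"] == 1)
count = len(df[mask])
print(count)  # → 2
```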
Hey buddy I also want to integrate additional feature so that model can also generate sql query for the result as well trying it using pandasql but no success, can you help me
I made a video about MySQL database, take a look.
Is it possible to get PandasAi to show the code it used to generate the plots, etc.?
Yes, you can ask for the code in your prompt. In recent pandasai versions you can also set "verbose": True in the config to print the generated code as it runs.
@@TirendazAI Thanks!
@@scottmiller2591 My pleasure!
How are you able to get the response so fast? It takes me a few minutes to get a response. My CSV file has 7K records, running on Ubuntu 22 with an i7 and 32 GB RAM.
The most important component for a fast response is the graphics card.
I'm getting the error "No module named pandasai.llm.local_llm". Is there any way to solve it?
llm is a module inside pandasai. Make sure pandasai is installed (pip install pandasai) and that your virtual environment is activated; the import is from pandasai.llm.local_llm import LocalLLM.
Sorry, but when I run my version of your code, it returns the message "Unfortunately, I was not able to answer your question, because of the following error: Connection error", and the Python traceback ends with: raise APIConnectionError(request=request) from err, openai.APIConnectionError: Connection error.
Did you start Ollama using the "ollama serve" command?
Can it handle large data, e.g., connecting through SQL?
Yes, it can. You can work with sources such as CSV, XLSX, PostgreSQL, MySQL, BigQuery, Databricks, and Snowflake.
Why use the default llama rather than the llama instruct?
The instruct models are fine-tuned to follow prompted instructions. That version is usually used to build chatbots, implement RAG, or run agents.
I'm not getting a response from the chat; it keeps saying "Generating the prompt". What could be the reason for that? Thanks!
Did you get any error? If yes, can you share this error? I can say something if I see the error.
@@TirendazAI there’s no error my friend. It only takes a very long time to get the output. Any ideas?
Which large model do you use?
@@TirendazAI llama3 !
Can I do this using MySQL in Streamlit for visualization? Can you send the code?
I am planning to implement a project using MySQL.
@@TirendazAI I need this quickly. Can you send the code?
It is very slow to answer. I have 48 GB of RAM, but asking a simple question takes ages...
A powerful graphics card is important for a fast response. My card is a 4070 TI Super with 16 GB VRAM. For small or medium datasets, I get answers quickly.
Amazing!
I hadn't heard of Pandas AI.
So many AI tools have come out recently that it's hard to keep up.