How to Use Llama 3 with PandasAI and Ollama Locally

  • Published: 2 May 2024
  • Today, we'll cover how to perform data analysis and visualization with a local Meta Llama 3 model using PandasAI and Ollama, for free. Happy learning.
    ▶ Subscribe: bit.ly/subscribe-tirendazai
    ▶ Join my channel: bit.ly/join-tirendazai
    00:01 Introduction
    01:32 Setup
    03:02 Initialize the model
    05:15 Initialize the app
    08:10 Build the app
    09:18 Inference
    11:16 Data visualization
    RELATED VIDEOS:
    ▶ PandasAI Tutorials: bit.ly/pandasai
    ▶ Ollama Tutorials: bit.ly/ollama-tutorials
    ▶ LangChain Tutorials: bit.ly/langchain-tutorials
    ▶ Generative AI for DS: bit.ly/genai-for-data-science
    ▶ HuggingFace Tutorials: bit.ly/hugging-face-tutorials
    ▶ LLMs Tutorials: bit.ly/llm-tutorials
    FOLLOW ME:
    ▶ Medium: / tirendazacademy
    ▶ X: / tirendazacademy
    ▶ LinkedIn: / tirendaz-academy
    Don't forget to subscribe and turn on notifications so you don't miss the latest videos.
    ▶ Project files: github.com/TirendazAcademy/Pa...
    Hi, I am Tirendaz, PhD. I create content on generative AI & data science. My goal is to make the latest technologies understandable for everyone.
    #ai #generativeai #datascience
  • Science

Comments • 80

  • @DallasGraves
    @DallasGraves 23 days ago +3

    Holy crap! My mind is absolutely RACING with proof of concept projects I can deliver to our Finance team with this. Thank you so much for making this!

    • @TirendazAI
      @TirendazAI  23 days ago

      My pleasure 😊

    • @user-el8jv8hx2g
      @user-el8jv8hx2g 21 days ago

      What are you thinking about? I'm a college student and I want to understand more use cases.

  • @MANONTHEMOON419
    @MANONTHEMOON419 17 days ago +1

    Dude, please do another video on this with more things you can do, or maybe explaining further. This is amazing.

  • @user-flashaction
    @user-flashaction 25 days ago +1

    This channel has not been discovered yet, thank you for the up-to-date and practical videos!

  • @wvagner284
    @wvagner284 24 days ago +2

    Great, great video! I just got a new subscriber! Congrats and regards from Brazil!

  • @ladonteprince
    @ladonteprince 21 days ago +2

    Undiscovered gem: the voice, the instruction, pure gold.

  • @60pluscrazy
    @60pluscrazy 24 days ago +2

    Unbelievable, never knew about Pandas AI. THANKS VERY MUCH 🎉🎉🎉

  • @WhySoBroke
    @WhySoBroke 24 days ago +1

    Excellent video and great methods!!

  • @GhostCoder83
    @GhostCoder83 25 days ago +1

    Very precious. Keep it up.

  • @mubasharsaeed6044
    @mubasharsaeed6044 24 days ago +2

    Please make a video about an agent that performs actions using Llama or LangChain. For example, if we give the prompt "bring the red box", Llama 3 generates the plan and the agent performs the action.
    Thanks

  • @felipemorelli4059
    @felipemorelli4059 20 days ago +1

    Thanks!

  • @felipemorelli4059
    @felipemorelli4059 20 days ago +1

    Excellent video.
    Which CPU are you using?

    • @TirendazAI
      @TirendazAI  19 days ago +2

      Thanks! My system is an AMD Ryzen 5 7500F with 64 GB RAM and a 4070 Ti Super graphics card with 16 GB VRAM.

  • @Moustafa_ayad
    @Moustafa_ayad 24 days ago +1

    Good work

  • @teachitkh
    @teachitkh 24 days ago +1

    Such a great video

    • @TirendazAI
      @TirendazAI  23 days ago

      Thank you 🤗

    • @teachitkh
      @teachitkh 22 days ago

      @@TirendazAI How about PDF? Can you help with this?

  • @user-mv9ul9tz1c
    @user-mv9ul9tz1c 23 days ago +1

    Thank you for the tutorial. I have a question: do I need to apply for an API key before using PandasAI?

    • @TirendazAI
      @TirendazAI  23 days ago +1

      PandasAI is open source and free. If you are using any open source model, you do not need an API key.
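A minimal sketch of this no-API-key setup, pointing PandasAI at a locally served Llama 3 through Ollama's OpenAI-compatible endpoint. Written against pandasai 2.x conventions; the CSV path and question are placeholders, and the block assumes an Ollama server is already running.

```python
from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

# Ollama serves an OpenAI-compatible API at localhost:11434 by default;
# LocalLLM only needs that base URL and a model name you have pulled.
llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")

# "data.csv" is a placeholder for your own dataset.
df = SmartDataframe("data.csv", config={"llm": llm})
print(df.chat("How many rows does the dataset have?"))
```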

  • @JohnBvoN
    @JohnBvoN 17 days ago

    Wanted to know: can I load the model directly from Hugging Face? Also, I have stored the model and tokenizer using save_pretrained. How can I use these?

    • @TirendazAI
      @TirendazAI  17 days ago

      To load the model from HuggingFace, you can use langchain. Check this link: python.langchain.com/v0.1/docs/integrations/platforms/huggingface/
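For the second half of the question: artifacts written with save_pretrained can be reloaded straight from their directory with the matching from_pretrained calls. A sketch using the transformers API, where "./my-llama3" is a hypothetical local path and the block assumes the weights fit in memory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload a model/tokenizer previously written with save_pretrained("./my-llama3")
tokenizer = AutoTokenizer.from_pretrained("./my-llama3")
model = AutoModelForCausalLM.from_pretrained("./my-llama3")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```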

  • @varganbas427
    @varganbas427 20 days ago +1

    Great! Thanks! It works 😁 There was an error in Python!

  • @user-pn6ey5dn4y
    @user-pn6ey5dn4y 23 days ago +1

    Thanks for your video.
    I've replicated your entire solution as a starting point, but I keep getting errors from the LLM when trying the exact same queries, dataset, and setup as in your video. I have an RTX 4070 12GB and have tried multiple llama and dolphin-llama models. It seems that every time we ask the LLM to write code to create a histogram or pie chart, it produces an error. Can you help?
    Here is an example:
    Query :
    create a heatmap of the numerical values
    Result:
    Unfortunately, I was not able to answer your question, because of the following error:
    'list' object has no attribute 'select_dtypes'

    • @TirendazAI
      @TirendazAI  10 days ago

      When a prompt does not work, try rewording it, for example "Plot a heatmap of the numerical values".

  • @stanTrX
    @stanTrX 23 days ago +1

    Congratulations, brother. I could tell right away from your accent. Greetings. Subscribed 🎉

  • @user-mm1tt6oy7v
    @user-mm1tt6oy7v 24 days ago +1

    Thanks for this video. How can I integrate the Groq API to go faster?

    • @TirendazAI
      @TirendazAI  24 days ago +1

      You can use the langchain_groq library or Groq's OpenAI compatibility.
      I showed how to use PandasAI with the Groq API in this video:
      ruclips.net/video/C6R9JLHZDH0/видео.html
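A sketch of the langchain_groq route mentioned in the reply, wiring ChatGroq into PandasAI. The model id is illustrative, and the block assumes GROQ_API_KEY is set in the environment; verify parameter names against the current langchain_groq docs.

```python
from langchain_groq import ChatGroq
from pandasai import SmartDataframe

# ChatGroq reads GROQ_API_KEY from the environment;
# "llama3-70b-8192" is an illustrative Groq model id.
llm = ChatGroq(model_name="llama3-70b-8192", temperature=0)

# "data.csv" is a placeholder for your own dataset.
df = SmartDataframe("data.csv", config={"llm": llm})
print(df.chat("Which column has the most missing values?"))
```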

  • @HmzaY
    @HmzaY 11 days ago +1

    I have 64 GB RAM and 8 GB VRAM. I want to run Llama 70B, but it doesn't fit. How can I run it on system RAM (the 64 GB) in Python? Can you make a video about that?

    • @TirendazAI
      @TirendazAI  10 days ago

      I also have 64 GB RAM, and it worked for me. My system used about 58 GB of RAM for llama-3:70b. I'll show the RAM I use if I make a video with llama-3:70b.

  • @lowkeylyesmith
    @lowkeylyesmith 12 days ago

    Is it possible to expand the limit per file? My CSV files are larger than 1 GB.

    • @TirendazAI
      @TirendazAI  11 days ago

      This is possible, but you need to use a larger model, such as llama-3:70b instead of llama-3:8b.

  • @greg-guy
    @greg-guy 23 days ago

    I don't know how pandas works, so sorry for the dumb question. Is the entire CSV processed by the LLM, meaning a large dataset will be slow or even too big? Or is the calculation/processing all done in pandas, with the LLM only creating code for pandas?

    • @TirendazAI
      @TirendazAI  22 days ago

      Yes, the LLM processes your data and generates an answer based on your question. You can think of this process as summarizing a text. If you have a small dataset, you may get a faster response.

  • @urajcic1
    @urajcic1 22 days ago +1

    Great tutorial!
    I was playing around with this dataset and I get strange errors for questions like 'how many First class female survived?':

    Unfortunately, I was not able to answer your question, because of the following error:
    'list' object has no attribute 'loc'
    or
    list indices must be integers or slices, not str
    or
    invalid syntax (, line 3)
    Can anyone reproduce this and explain why it is happening?

    • @TirendazAI
      @TirendazAI  21 days ago

      Sometimes, when the LLM does not understand the prompt, it may not return the output you want. Try changing the prompt or using a different one.

  • @RICHARDSON143
    @RICHARDSON143 22 days ago

    Hey buddy, I also want to integrate an additional feature so that the model can generate a SQL query for the result as well. I'm trying pandasql but with no success. Can you help me?

    • @TirendazAI
      @TirendazAI  10 days ago

      I made a video about a MySQL database; take a look.

  • @scottmiller2591
    @scottmiller2591 22 days ago +1

    Is it possible to get PandasAi to show the code it used to generate the plots, etc.?

    • @TirendazAI
      @TirendazAI  21 days ago +1

      Yes, you can get the code if you ask for it in your prompt.
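Beyond asking in the prompt, recent pandasai versions also expose the generated code programmatically; `last_code_executed` is the attribute name in pandasai 2.x (verify against your installed version). A sketch, assuming a local Ollama server and a placeholder CSV:

```python
from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")
df = SmartDataframe("titanic.csv", config={"llm": llm})
df.chat("Plot a histogram of the Age column")

# Inspect the Python code PandasAI generated and ran for the last query.
print(df.last_code_executed)
```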

    • @scottmiller2591
      @scottmiller2591 21 days ago +1

      @@TirendazAI Thanks!

    • @TirendazAI
      @TirendazAI  21 days ago +1

      @@scottmiller2591 My pleasure!

  • @user-kk1li5mk7q
    @user-kk1li5mk7q 19 days ago

    How are you able to get the response so fast? It takes me a few minutes to get a response. My CSV file has 7K records, running on Ubuntu 22 with an i7 and 32 GB RAM.

    • @TirendazAI
      @TirendazAI  19 days ago

      The most important component for a fast response is the graphics card.

  • @varshakrishnan3686
    @varshakrishnan3686 15 days ago

    I'm getting the error "No module named pandasai.llm.local_llm". Is there any way to solve it?

    • @TirendazAI
      @TirendazAI  15 days ago

      llm is a module in pandasai. Make sure pandasai is installed and the virtual environment is activated.
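A quick way to verify both points in this reply, assuming a Unix shell. The local_llm module ships inside the pandasai package itself, so a clean install in an activated virtual environment normally resolves the import error:

```shell
python -m venv .venv
source .venv/bin/activate
pip install pandasai

# If this prints "ok", the module resolves correctly.
python -c "from pandasai.llm.local_llm import LocalLLM; print('ok')"
```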

  • @varganbas427
    @varganbas427 21 days ago

    Sorry, but my sample, rewritten from your code, returns the message "Unfortunately, I was not able to answer your question, because of the following error: Connection error", and the Python code shows: raise APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

    • @TirendazAI
      @TirendazAI  21 days ago

      Did you start Ollama using the "ollama serve" command?
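For reference, this connection error usually means nothing is listening on Ollama's default port. A sketch of the checks, assuming a Unix shell with Ollama installed:

```shell
ollama serve &          # start the server (default: http://localhost:11434)
ollama pull llama3      # make sure the model is available locally

# If the server is up, this returns a JSON list of models.
curl http://localhost:11434/v1/models
```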

  • @sahilchipkar9761
    @sahilchipkar9761 25 days ago

    Can it handle large data? Connecting through SQL?

    • @TirendazAI
      @TirendazAI  24 days ago

      Yes, it can. You can work with data sources such as CSV, XLSX, PostgreSQL, MySQL, BigQuery, Databricks, and Snowflake.
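For the SQL case, pandasai 2.x provides connector classes that can stand in for the CSV path. A sketch with MySQLConnector, where every connection value is a placeholder and the class and config key names should be checked against your pandasai version:

```python
from pandasai import SmartDataframe
from pandasai.connectors import MySQLConnector
from pandasai.llm.local_llm import LocalLLM

# All connection details below are placeholders.
orders = MySQLConnector(config={
    "host": "localhost",
    "port": 3306,
    "database": "shop",
    "username": "user",
    "password": "secret",
    "table": "orders",
})

llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")
df = SmartDataframe(orders, config={"llm": llm})
print(df.chat("What is the total revenue per month?"))
```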

  • @spkgyk
    @spkgyk 24 days ago

    Why use the default llama rather than the llama instruct?

    • @TirendazAI
      @TirendazAI  24 days ago

      The instruct models are fine-tuned to follow prompted instructions. This version is usually used to build a chatbot, implement RAG, or use agents.

  • @sebastianarias9790
    @sebastianarias9790 13 days ago

    I'm not getting a response from the chat; it keeps saying "Generating the prompt". What could be the reason for that? Thanks!

    • @TirendazAI
      @TirendazAI  13 days ago

      Did you get any error? If so, can you share it? I can say more once I see the error.

    • @sebastianarias9790
      @sebastianarias9790 13 days ago

      @@TirendazAI There's no error, my friend. It just takes a very long time to get the output. Any ideas?

    • @TirendazAI
      @TirendazAI  13 days ago

      Which model are you using?

    • @sebastianarias9790
      @sebastianarias9790 13 days ago

      @@TirendazAI llama3 !

  • @vinaya68vinno1
    @vinaya68vinno1 23 days ago

    Can I do this using MySQL in Streamlit for visualization? Can you send the code?

    • @TirendazAI
      @TirendazAI  23 days ago

      I am planning to implement a project using MySQL.

    • @vinaya68vinno1
      @vinaya68vinno1 23 days ago

      @@TirendazAI I need this quickly; can you send the code?

  • @majukanumi9639
    @majukanumi9639 25 days ago

    It is very slow to answer. I have 48 GB of RAM, but asking a simple question takes ages...

    • @TirendazAI
      @TirendazAI  24 days ago

      A powerful graphics card is important for a fast response. My card is a 4070 Ti Super with 16 GB VRAM. For small or medium datasets, I can get answers in a short time.

  • @ccc_ccc789
    @ccc_ccc789 23 days ago +1

    amazingngngngngng!

  • @stanTrX
    @stanTrX 23 days ago

    I hadn't heard of PandasAI

    • @TirendazAI
      @TirendazAI  13 days ago

      So many AI tools have come out recently that it has become hard to keep up.