The How-To Guy
  • 62 videos
  • 173,583 views
How to Install and Run LLAMA 3.1 8B model on your LAPTOP with OLLAMA
In this video, we'll walk through setting up Ollama and pulling the new Llama 3.1 8B model!
TIMESTAMPS:
============
0:00 - Intro
0:34 - Download Llama 3.1 8B from Ollama
1:22 - Chat with Llama 3.1 8B locally
1:50 - Create a Python pong game with Llama 3.1 8B
3:16 - Run the Python pong game
4:35 - Outro
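For reference, the same flow can be driven from Python instead of the terminal. The sketch below is illustrative only: it assumes Ollama is already installed and running locally (default port 11434) and that the `ollama` Python package has been installed with pip.

# Sketch: pull Llama 3.1 8B through the local Ollama server and ask it a question.
import ollama

ollama.pull("llama3.1:8b")  # same effect as `ollama pull llama3.1:8b` in the terminal

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Write a one-paragraph plan for a Python pong game."}],
)
print(response["message"]["content"])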
LINKS:
======
🔗 Meta announcement: ai.meta.com/blog/meta-llama-3-1/
🔗 Ollama: ollama.com/
🤗 Join this channel to get access to perks:
ruclips.net/channel/UCApiD66gf36M9hZanbjgNawjoin
Follow me on socials:
==================
GitHub → github.com/tonykipkemboi
LinkedIn → www.linkedin.com/in/tonykipkemboi/
𝕏 → x.com/tonykipkemboi
Don't forget to like, subscribe, and hit the notification bell to stay updated on ...
Views: 1,708

Videos

GPT-4o mini on Google Colab Notebook
599 views · a month ago
#OpenAI released GPT-4o-mini today! 🚀 This video will give you a quick overview of what it is and how to use the model through an API key. We'll run the model on a @Google Colab Notebook. The model performs better than similar smaller models on several benchmarks and is much cheaper, so you can swap out your models today! TIMESTAMPS: 0:00 - Intro 0:09 - GPT-4o-mini benchmark performance overvie...
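As a rough illustration of the API usage described above (a sketch that assumes the `openai` Python package is installed and an OPENAI_API_KEY environment variable is set; it runs the same way in a Colab cell):

# Sketch: call GPT-4o mini through the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "In one sentence, what is GPT-4o mini?"}],
)
print(completion.choices[0].message.content)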
Chat with CSV files using LangChain Agent [GPT-4o]
1.4K views · a month ago
In this video, we'll use the @LangChain CSV agent that allows you to interact with your data through natural language queries. Here's what we'll cover: ✅ Quick introduction to LangChain ✅ Setting up the environment and installing necessary libraries ✅ Loading and preprocessing multiple CSV files ✅ Implementing the LangChain CSV agent for intelligent data analysis ✅ Demonstrating querying and ex...
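The core of the idea looks roughly like the sketch below. This is not the exact code from the video; it assumes `langchain-experimental` and `langchain-openai` are installed, an OpenAI key is set, and a local `data.csv` file (hypothetical name) exists.

# Sketch: query a CSV file in natural language with LangChain's CSV agent.
from langchain_experimental.agents import create_csv_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)

agent = create_csv_agent(
    llm,
    "data.csv",                 # hypothetical local CSV file
    verbose=True,
    allow_dangerous_code=True,  # the agent runs generated pandas code locally
)

print(agent.invoke({"input": "How many rows are there, and what are the column names?"}))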
How to build a Streamlit UI for Local PDF RAG [Ollama models]
4.5K views · 2 months ago
In this tutorial, we'll take our local Ollama PDF RAG (Retrieval Augmented Generation) pipeline to the next level by adding a sleek Streamlit UI! 🚀 We'll build on our previous PDF RAG project and create an interactive web application that allows users to upload a PDF, ask questions, and get accurate, context-aware answers. Here's what we'll cover: ✅ Quick recap of our previous RAG pipeline usin...
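At its simplest, the UI layer is just an uploader plus a question box, as in the simplified outline below. This is not the app from the video; `answer_question` is a hypothetical stand-in for the RAG pipeline built in the previous project.

# Sketch: minimal Streamlit front end for a PDF question-answering pipeline.
import streamlit as st

def answer_question(pdf_bytes: bytes, question: str) -> str:
    # Placeholder for the real pipeline: load the PDF, chunk, embed, retrieve, generate.
    return f"(answer to: {question})"

st.title("Chat with your PDF")
uploaded = st.file_uploader("Upload a PDF", type="pdf")
question = st.text_input("Ask a question about the document")

if uploaded and question:
    with st.spinner("Thinking..."):
        st.write(answer_question(uploaded.getvalue(), question))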
How to build a ROBUST AI Agent stack [CrewAI + YouTube API + Ollama + Groq + AgentOps]
9K views · 4 months ago
In this video, we'll discuss how to create #AI agents that interact with the YouTube Data API to extract comments from any given video and generate actionable insights. Based on this user feedback, the agents can help you understand and create better content. What you will learn: ✅ Installation & Setup: How to get YouTubeYapperTrapper up and running with step-by-step instructions. ✅ Configuring Ag...
How to create the ULTIMATE Ollama UI app with Streamlit
12K views · 4 months ago
In this tutorial, we'll build a full-fledged Streamlit app user interface to interact with our local model using Ollama! I chose Streamlit because it is easy to get started and very composable. Before starting, download [Ollama](ollama.com/) on your local machine. Enjoy, and please leave your feedback in the comments! TIMESTAMPS: 0:00 - Introduction 0:47 - Preface 1:44 - Code directory walkthro...
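The heart of such an app is only a few lines, as in the sketch below. It assumes Ollama is running locally with a model such as `llama3` already pulled and that the `ollama` and `streamlit` packages are installed; the full app in the video adds model selection, chat history, and more.

# Sketch: stream a local Ollama model's reply into a Streamlit chat UI.
import ollama
import streamlit as st

st.title("Ollama chat")

prompt = st.chat_input("Say something")
if prompt:
    st.chat_message("user").write(prompt)

    def token_stream():
        # stream=True yields partial chunks as the model generates them
        for chunk in ollama.chat(
            model="llama3",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        ):
            yield chunk["message"]["content"]

    st.chat_message("assistant").write_stream(token_stream)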
How to chat with your PDFs using local Large Language Models [Ollama RAG]
96K views · 5 months ago
In this tutorial, we'll explore how to create a local RAG (Retrieval Augmented Generation) pipeline that processes and allows you to chat with your PDF file(s) using Ollama and LangChain! ✅ We'll start by loading a PDF file using the "UnstructuredPDFLoader" ✅ Then, we'll split the loaded PDF data into chunks using the "RecursiveCharacterTextSplitter" ✅ Create embeddings of the chunks using "Oll...
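Strung together, those steps look roughly like the sketch below. It rests on a few assumptions: the LangChain community packages are installed, Ollama is running locally with `nomic-embed-text` and `mistral` pulled, and the PDF file name is illustrative; the notebook in the video also adds a multi-query retriever and a fuller prompt.

# Sketch: local PDF RAG with Ollama + LangChain (simplified).
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# 1. Load and chunk the PDF
data = UnstructuredPDFLoader(file_path="my_document.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=200).split_documents(data)

# 2. Embed the chunks into a local vector store
vector_db = Chroma.from_documents(
    documents=chunks,
    embedding=OllamaEmbeddings(model="nomic-embed-text"),
    collection_name="local-rag",
)

# 3. Retrieve relevant chunks and generate an answer with a local model
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": vector_db.as_retriever(), "question": RunnablePassthrough()}
    | prompt
    | ChatOllama(model="mistral")
    | StrOutputParser()
)
print(chain.invoke("What is this document about?"))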
How to stream CrewAI Agent steps and thoughts in a Streamlit app [Code Included]
7K views · 5 months ago
In this video, I walk through creating a callback handler to stream the CrewAI agent's thoughts/steps on a Streamlit app under the `st.status` container! I used an example app where we have AI Travel Agents to whom we give our current location, destination, and time range for vacation, and they generate an itinerary for 7 days! You no longer have to use the REPL to monitor the agent process! 😃 ...
How to build the FASTEST AI chatbot with Groq and Streamlit
3.6K views · 5 months ago
Learn how to build a Streamlit AI chatbot using Groq, the fastest LLM inference API. We will go over the code for building the app to include a menu option to select the model type and also a slider to choose the tokens. LINKS Streamlit app used in the demo → groqdemo.streamlit.app/ 👨‍💻 Code in GitHub → github.com/tonykipkemboi/groq_streamlit_demo ♻️ Venv videos ↓ - ruclips.net/video/xMDh4TYoI...
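The Groq call at the center of the app is only a few lines, as in the sketch below. It assumes the `groq` Python package is installed and a GROQ_API_KEY environment variable is set; the model name is illustrative, and the Streamlit app wraps this call in a chat loop with the model menu and token slider mentioned above.

# Sketch: one streamed completion from the Groq API.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

stream = client.chat.completions.create(
    model="llama3-8b-8192",  # illustrative model name; pick one from the app's menu
    messages=[{"role": "user", "content": "Why is low-latency inference useful for chatbots?"}],
    max_tokens=512,          # the app exposes this via a slider
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")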
Automate upgrading pip in a Python virtual environment [venv]
1K views · 7 months ago
This tutorial shows you how to create a virtual environment and upgrade Pip using a simple shell script, saving you time and effort. In this video, you'll learn: ✅ A quick method to automate venv creation ✅ How to upgrade Pip effortlessly Subscribe for more Python tips and tricks! Timestamps ↓ 🎬 0:00 - 0:18 : Intro 👨🏽‍💻 0:18 - 1:20 : Code 🛣️ 1:20 - 1:53 : Adding script to PATH 🏃🏽‍♂️ 1:53 - 2:30...
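The same automation can also be sketched in Python. This is a rough equivalent of the idea, not the shell script from the video; it assumes the POSIX `bin` layout (on Windows the interpreter lives under `Scripts\python.exe`).

# Sketch: create a virtual environment and upgrade pip inside it.
import subprocess
import venv
from pathlib import Path

env_dir = Path(".venv")
venv.create(env_dir, with_pip=True)  # equivalent to: python -m venv .venv

python_bin = env_dir / "bin" / "python"  # use .venv\Scripts\python.exe on Windows

subprocess.run([str(python_bin), "-m", "pip", "install", "--upgrade", "pip"], check=True)
print("Virtual environment ready:", env_dir.resolve())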
How to use ChatGPT API with Python
434 views · a year ago
What is the Graph protocol?
202 views · a year ago
How to get $ETH for Goerli testnet development
345 views · a year ago
How to create a Python Virtual Environment (Beginner Friendly)
311 views · 2 years ago
How I made $132.10 with 83 lines of Python!
482 views · 2 years ago
How to get FREE testnet $MATIC tokens for development
6K views · 2 years ago
How to get FREE devnet $SOL (SOLANA) and $USDC from faucets
9K views · 2 years ago
Fix Jinja2 error in Docker getting started Tutorial
122 views · 2 years ago
How to get FREE $ETH tokens on Chainlink faucet for development
6K views · 2 years ago
A Decentralized Autonomous Organization Project Demo (KenyaDAO)
85 views · 2 years ago
How to make a word cloud with Python [Beginner Friendly]
87 views · 2 years ago
How to scrape websites using Python and beautifulSoup
512 views · 2 years ago
How to Check if Two Strings are Anagram with Python Code 🔥
65 views · 2 years ago
How To Create a MetaMask Wallet (Easy)
173 views · 2 years ago
How to Auto Accept Facebook Friend Requests in few lines of JavaScript
692 views · 2 years ago
How To Create a Phantom Wallet
204 views · 2 years ago
How to Unhide your NFT's on OpenSea
458 views · 2 years ago

Comments

  • @gilfcr8620 · 4 hours ago

    Hello @tonykipkemboi, I finally found the solution. I used langchain_chroma and changed this line of code: vector_db = Chroma.from_documents(persist_directory=CHROMA_PATH, documents=chunks, embedding=embeddings, collection_name="myRAG"). Thanks!

  • @anurag040891 · 19 hours ago

    Great video, Tony. I want to know: what is the method to display the metadata of the PDF while publishing the answer?

    • @tonykipkemboi · 19 hours ago

      @@anurag040891 more of like citations? i did experiment with it a bit but wasn't happy with it to add it to the video.

  • @MyHarshitgola · a day ago

    Great content. I'm getting an error when I try to upload the file on the Streamlit interface. I also tried to run the local_ollama_rag.ipynb file in a Jupyter notebook and get the same error when I execute the upload PDF tab. Can someone advise how to resolve this? OSError: No such file or directory: '/Users/nltk_data/tokenizers/punkt/PY3_tab'.

    • @tonykipkemboi · 15 hours ago

      Thank you. Maybe try installing "nltk" package to see if it resolves the issue?

    • @MyHarshitgola · 45 minutes ago

      @@tonykipkemboi Are there any code dependencies for the files to be sourced from the folder 'PY3_tab'. When I installed nltk, I only saw folder PY3 not PY3_tab.

  • @davidaliaga4708 · a day ago

    Error code 404 the model gpt-4o does not exist or you do not have access to it. What to do?

    • @davidaliaga4708 · a day ago

      How can we convert this to a version that uses hugging face instead?

  • @Adinasa2 · a day ago

    It's not working; it's giving this error:

    ConnectError: [Errno 61] Connection refused
    Traceback:
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 75, in exec_func_with_error_handling: result = func()
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 574, in code_to_exec: exec(code, module.__dict__)
    File "/Users/adityagupta/Desktop/rag/ollama_pdf_rag/streamlit_app.py", line 278, in <module>: main()
    File "/Users/adityagupta/Desktop/rag/ollama_pdf_rag/streamlit_app.py", line 200, in main: models_info = ollama.list()
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ollama/_client.py", line 464, in list: return self._request('GET', '/api/tags').json()
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ollama/_client.py", line 69, in _request: response = self._client.request(method, url, **kwargs)
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 827, in request: return self.send(request, auth=auth, follow_redirects=follow_redirects)
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 914, in send: response = self._send_handling_auth(
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth: response = self._send_handling_redirects(
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects: response = self._send_single_request(request)
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 1015, in _send_single_request: response = transport.handle_request(request)
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 232, in handle_request: with map_httpcore_exceptions():
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__: self.gen.throw(value)
    File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions: raise mapped_exc(message) from exc

  • @gilfcr8620 · a day ago

    Hello Tony, and thank you for your amazing job!! I just have one issue: Streamlit always interrupts itself, like this:

    2024-09-10 18:24:16 - INFO - HTTP Request: GET localhost:11434/api/tags "HTTP/1.1 200 OK"
    2024-09-10 18:24:16 - INFO - Extracting model names from models_info
    2024-09-10 18:24:16 - INFO - Extracted model names: ('nomic-embed-text:latest', 'mistral-nemo:latest', 'mistral:latest')
    2024-09-10 18:24:18 - INFO - HTTP Request: GET localhost:11434/api/tags "HTTP/1.1 200 OK"
    2024-09-10 18:24:21 - INFO - HTTP Request: GET localhost:11434/api/tags "HTTP/1.1 200 OK"
    2024-09-10 18:24:21 - INFO - Creating vector DB from file upload: monopoly.pdf
    2024-09-10 18:24:21 - INFO - File saved to temporary path: C:\Users\XXX\AppData\Local\Temp\tmpobt9kjka\monopoly.pdf
    2024-09-10 18:24:24 - INFO - pikepdf C++ to Python logger bridge initialized
    2024-09-10 18:24:25 - INFO - Document split into chunks
    OllamaEmbeddings: 100%|██████████| 3/3 [00:00<00:00, 5.06it/s]
    (OllamaChat) PS C:\Dev\tony\OllamaChat\ollama_pdf_rag>

    Let me know if you want more info :)

    • @tonykipkemboi · a day ago

      @@gilfcr8620 thanks for posting. can you share the error message itself? these seem to be just the logs. also describe what you're seeing/expecting and where it fails.

    • @gilfcr8620 · a day ago

      @@tonykipkemboi Hmm, I tried to run this application with a debug session in VS Code and no error appears... it just exits after the embeddings... What is your advice? I'm not a full-time developer, but I can click wherever you want :)

    • @gilfcr8620 · a day ago

      @@tonykipkemboi OK, I found it!! This is a Chroma issue. I just changed: vector_db = Chroma.from_documents(documents=chunks, embedding=embeddings, collection_name="myRAG") to: vector_db = Chroma.from_documents(persist_directory=CHROMA_PATH, documents=chunks, embedding=embeddings, collection_name="myRAG"), and used the langchain_chroma package. Thank you!
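For readers hitting the same issue, the commenter's change amounts to something like the sketch below. This is only a sketch; CHROMA_PATH is the commenter's own variable, and `chunks` and `embeddings` come from earlier steps of the app.

# Sketch of the fix described above: persist the collection to disk via the langchain_chroma package.
from langchain_chroma import Chroma  # pip install langchain-chroma

CHROMA_PATH = "chroma_db"  # any local directory for the persisted collection

vector_db = Chroma.from_documents(
    documents=chunks,        # chunks produced by the text splitter earlier in the app
    embedding=embeddings,    # e.g. OllamaEmbeddings(model="nomic-embed-text")
    collection_name="myRAG",
    persist_directory=CHROMA_PATH,
)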

  • @abirprottoy0079 · a day ago

    how to use multiple pdfs?

  • @junaidbadshah9343 · 3 days ago

    I have been looking for this type of video for 2 weeks, and I finally found it. Thank you, brother.

  • @oslyris · 7 days ago

    Can we preprocess data for this? If yes, can you please tell me how?

  • @hugalves · 11 days ago

    Awesome!! Thank you Tony! Just leaving a fix: the failed-bank link has changed. The current one is: www.fdic.gov/system/files/2024-07/banklist.csv Additionally, what recommendations do you have for CSV files with columns holding values (dollar currency, for example)? For some reason, when I ask something like 'give me the 3 biggest expenses in the Traveling category', it never works; it fails at listing. Any advice would be really appreciated! Thanks!

  • @accidentalUser6657 · 13 days ago

    Can you upload a tutorial where from scratch, we can create AI assistant on data we have....eg : HR assistant...

    • @tonykipkemboi · 9 days ago

      I think you can adapt this tutorial so long as your documents are all PDFs. You can also easily modify to other document types.

  • @felipetesta · 13 days ago

    Great video! Is there any way I can let an LLM read a folder on my PC and answer me using files (PDFs, .md, .doc, sheets, etc.) from that source?

    • @tonykipkemboi · 9 days ago

      Yes, you can. You can use the directory loader function from LangChain, but you'll have to adjust the embeddings to accommodate the different file types.
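For example, loading a whole folder might look roughly like the sketch below (assuming `langchain-community` is installed; each file type generally needs a matching loader class, and the folder path is hypothetical):

# Sketch: load every Markdown file in a folder with LangChain's DirectoryLoader.
from langchain_community.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "my_docs/",        # hypothetical folder path
    glob="**/*.md",    # one pattern per file type; PDFs etc. need their own loaders
    loader_cls=TextLoader,
)
docs = loader.load()
print(f"Loaded {len(docs)} documents")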

  • @BillVoisine · 13 days ago

    This is excellent!! Thank you!!

  • @khoerizzz7011 · 14 days ago

    I got an error for langdetect; are there any solutions?

    • @tonykipkemboi · 9 days ago

      are you using a macOS or windows

    • @khoerizzz7011 · 8 days ago

      @@tonykipkemboi I am using windows, please help

  • @ArmandoSilvaVelázquez · 14 days ago

    It would be nice to see multiple PDFs loaded, to see if it can be made to handle different topics at once.

  • @micdavin · 18 days ago

    Bro.. can I ask something: will the document we input be saved in a local database, or will it be published somewhere? Anyone who knows the answer, please tell me. Thank you.

  • @ten2the6 · 18 days ago

    You sir are awesome! It is easy to make things hard, yet hard to make them simple. Thanks for working so hard to make this simple. Excellent presentation. I will be coming back for more!!

    • @tonykipkemboi · 17 days ago

      Thank you @ten2the6, am glad you found it useful! 🫡

  • @maly9903 · 18 days ago

    I am a layman and have been trying to figure this out for a week. I've watched a lot of videos, but yours has been by far the best. Your cadence is good, and you are direct while keeping it accessible at a high level to follow along. No obfuscation or assumptions, etc. Great video, thank you; have a comment, like, and subscribe.

    • @tonykipkemboi · 17 days ago

      Thank you, @maly9903, I am glad you found it useful! 🫡

  • @sirishkumar-m5z · 20 days ago

    A robust AI agent stack is essential for success. SmythOS offers cutting-edge AI agent solutions that may significantly improve your projects.

  • @neonmuthoni218 · 22 days ago

    Can you do for a next.js project

    • @tonykipkemboi · 21 days ago

      I do most of my projects in Python, but good idea maybe I can do one using @vercel's v0.

  • @surygarcia6823 · 22 days ago

    What I don't get is, where is the database? If I want to launch this into production, then where is my database? Or will it be virtual forever?

  • @abhishekm6703 · 23 days ago

    Is it possible to make a video about a chatbot using Groq, open-source models, and open embeddings that is shareable and usable by others? It should be pre-trained on data, for example a Google Drive link containing videos, photos, multiple PDF files, and website URLs.

  • @ikrammir734 · 23 days ago

    How will I get this code?

  • @fatihsahinbas2249 · 25 days ago

    When I add any PDF, including the PDF that you have included in your GitHub account, I get this error: ValueError: Error raised by inference API HTTP code: 404, {'error': "model 'nomic-embed-text' not found, try pulling it first"}. I searched but could not find the reason. Can you help me?

    • @tonykipkemboi · 24 days ago

      you need to pull the nomic embed model first before running the app. "ollama pull nomic-embed-text"

    • @fatihsahinbas2249 · 24 days ago

      @@tonykipkemboi Ah okey. Thanks tony.

  • @dounia-o7i · 25 days ago

    That was a really helpful video, thanks a lot, but I have one problem: it takes very long to respond, like 30 minutes. BTW, I'm using a Weaviate image in Docker as the vector DB, Nomic embeddings, and Ollama's phi3 as my pretrained LLM, but that shouldn't take so much time. Could you please suggest something to make it work?

  • @G3driver · 27 days ago

    anyone else having a problem installing chroma on a mac m2 osx14.5?

    • @tonykipkemboi · 26 days ago

      what error are you getting?

    • @G3driver · 25 days ago

      @@tonykipkemboi Getting requirements to build wheel ... error error: subprocess-exited-with-error

  • @JonathanVuJon · 27 days ago

    What do you use to record your screen capture??

  • @ramonjales9941 · 27 days ago

    very good!

  • @mohamedalichakroun6967 · 29 days ago

    Does it also work for scanned PDFs and images?

  • @arrows8367 · a month ago

    Wonderful video! I followed along until the last step, but then I am getting this error: ValueError: Environment variable OCR_AGENT module name unstructured.partition.utils.ocr_models must be set to a whitelisted module part of ['unstructured.partition.utils.ocr_models.tesseract_ocr', 'unstructured.partition.utils.ocr_models.paddle_ocr', 'unstructured.partition.utils.ocr_models.google_vision_ocr']. What do I do in this case? I used ChatGPT and set the Path variable, but the error is not going away. Could you please tell me exactly what to do?

  • @mpesakapoeta · a month ago

    After embedding for the first time, do I have to do it again the second time I need to query the vector database?

    • @tonykipkemboi · a month ago

      Not necessarily. If you're restarting the kernel after being away and having to run it from top to bottom again, then yes. If you're still on the same runtime, your prompts are the only things getting embedded and then similarity search is done to retrieve the top-k results from the embeddings saved initially from the pdf.
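In code terms, the only thing re-run per question is the retrieval step, something like the sketch below (assuming `vector_db` is the Chroma store built once from the PDF earlier in the notebook):

# Sketch: at query time only the question is embedded; the stored PDF embeddings are reused.
question = "What are the key findings of the report?"
top_chunks = vector_db.similarity_search(question, k=4)  # top-k nearest chunks from the existing index
for doc in top_chunks:
    print(doc.page_content[:200])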

  • @rajeevranjan4372 · a month ago

    I'm getting this error; can someone help me resolve it?

    PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
    Traceback:
    File "C:\Users\RajeevSingh\.conda\envs\myenv\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 85, in exec_func_with_error_handling: result = func()
    File "C:\Users\RajeevSingh\.conda\envs\myenv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 576, in code_to_exec: exec(code, module.__dict__)
    File "C:\Users\RajeevSingh\pdfReadingusingOllama\ollama_pdf_rag\streamlit_app.py", line 278, in <module>: main()
    File "C:\Users\RajeevSingh\pdfReadingusingOllama\ollama_pdf_rag\streamlit_app.py", line 223, in main: st.session_state["vector_db"] = create_vector_db(file_upload)

  • @Reichvarg · a month ago

    This is a really good video. Thanks a lot for making it, I found it very helpful!

  • @iiTzMemo · a month ago

    doesnt work.

    • @tonykipkemboi · a month ago

      what exactly doesn't work?

    • @iiTzMemo · a month ago

      @@tonykipkemboi So I wanted to test out the streamlit version of this program which is available on your github repository. The llm actually generates answers about the pdf I uploaded - however instead of answering my initial question, he invents his own question and answers it. I saw that it is because of the PROMPT_QUERY and tried to remove or change it, but with that the llm doesn’t „see“ my question anymore.

  • @tsl9150 · a month ago

    Only, why would you use a WEF PDF document as a example PDF? Bit suspicious of all things wef.. :P

    • @tonykipkemboi · a month ago

      It was the first free pdf i got...didn't put much thought into the meta

  • @letlive1796 · a month ago

    Can we deploy this on Streamlit? If yes, can you make a video about it?

    • @tonykipkemboi · a month ago

      You can deploy it but you will also need to deploy an instance of Ollama. You could deploy it to any VPS like EC2 -> download Ollama -> load Streamlit

  • @muhdkahfi5870 · a month ago

    Why does Streamlit disconnect after the embedding process is completed?

    • @tonykipkemboi · a month ago

      What do you mean when you say disconnected?

    • @gilfcr8620 · a day ago

      @@tonykipkemboi Hello Tony, and thank you for your amazing job! I'm actually running into the same issue:

      2024-09-10 18:12:42 - INFO - HTTP Request: GET localhost:11434/api/tags "HTTP/1.1 200 OK"
      2024-09-10 18:12:45 - INFO - HTTP Request: GET localhost:11434/api/tags "HTTP/1.1 200 OK"
      2024-09-10 18:12:45 - INFO - Creating vector DB from file upload: mypdf.pdf
      2024-09-10 18:12:45 - INFO - File saved to temporary path: C:\Users\XXX\AppData\Local\Temp\tmp9feff731\assurance-prevoyance_com21426.pdf
      2024-09-10 18:12:45 - INFO - Document split into chunks
      2024-09-10 18:12:46 - INFO - Anonymized telemetry enabled. See docs.trychroma.com/telemetry for more information.
      OllamaEmbeddings: 100%|██████████| 2/2 [00:01<00:00, 1.81it/s]
      (OllamaChat) PS C:\Dev\tony\OllamaChat\ollama_pdf_rag>

      What do you think? I'll try to find a solution on my side.

  • @danielepalmieri6455 · a month ago

    Can't wait for the RAG part! :D great video!

    • @tonykipkemboi · a month ago

      It's already out ruclips.net/video/lig9c7OkxTI/видео.htmlsi=idhvTI-05jc5ZOQk

  • @MohanishPrerna · a month ago

    Hello Sir, first of all, thanks for making this video. I am trying this solution on my local Windows machine, but while uploading a file I get the error below. Can you please guide me on what I am missing?

    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Data\GharAdhar\workspace\ollama_pdf_rag\venv\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.

  • @rrxn · a month ago

    Hi, I've been trying to follow along, but these first few lines:

    local_path = "WEF_The_Global_Cooperation_Barometer_2024.pdf"  # Local PDF file uploads
    if local_path:
        loader = UnstructuredPDFLoader(file_path=local_path)
        data = loader.load()
    else:
        print("Upload a PDF file")

    give me this error:

    FileNotFoundError: [WinError 2] The system cannot find the file specified
    During handling of the above exception, another exception occurred:
    PDFInfoNotInstalledError Traceback (most recent call last)
    Cell In[8], line 6
      4 if local_path:
      5     loader = UnstructuredPDFLoader(file_path=local_path)
    ----> 6     data = loader.load()
      7 else:
      8     print("Upload a PDF file")
    PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?

    I'm using VS Code and the Jupyter notebook extension for this. I have installed all the libraries in a virtual environment and have been trying to run it there.

  • @RedCloudServices · a month ago

    Does this pdf library encode embedded tables in the pdf document

    • @tonykipkemboi · a month ago

      I didn't cover that piece in this tutorial but my guess would be no.

  • @montassarbellahabdallah4638 · a month ago

    When I execute this code:

    local_path = "test.pdf"  # Local PDF file uploads
    if local_path:
        loader = UnstructuredPDFLoader(file_path=local_path)
        data = loader.load()
    else:
        print("Upload a PDF file")

    I get an error; can you help me with the solution?

    File ~\anaconda3\Lib\urllib\request.py:250, in urlretrieve(url, filename, reporthook, data)
        248 # Handle temporary file setup.
        249 if filename:
    --> 250     tfp = open(filename, 'wb')
        251 else:
        252     tfp = tempfile.NamedTemporaryFile(delete=False)
    PermissionError: [Errno 13] Permission denied: 'C:\\Users\\monta\\AppData\\Local\\Temp\\tmpbzvo7wzp'

  • @KampusGratis1 · a month ago

    Your video is very insightful. I have tried the code, and I want to ask something: why does it run the embedding step first every time I run invoke? The embedding process is rather long; is that normal? I want to build a new LLM-based product for my client.

    • @KampusGratis1 · a month ago

      NVIDIA GeForce RTX 3080 Ti, 12th gen Intel i9-12900 (24 cores); these are my PC's specs.

  • @fredrick_nganga · a month ago

    I am getting the error message below when I upload the PDF. Mind you, I have installed all the dependencies, including torch.

    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\HP\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
    Traceback:
    File "C:\Users\HP\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 75, in exec_func_with_error_handling: result = func()
    File "...\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 574, in code_to_exec: exec(code, module.__dict__)
    File "C:\Users\HP\Downloads\ollama_pdf_rag\streamlit_app.py", line 278, in <module>: main()
    File "C:\Users\HP\Downloads\ollama_pdf_rag\streamlit_app.py", line 223, in main: st.session_state["vector_db"] = create_vector_db(file_upload)
    File "C:\Users\HP\Downloads\ollama_pdf_rag\streamlit_app.py", line 82, in create_vector_db: data = loader.load()
    File "...\site-packages\langchain_core\document_loaders\base.py", line 30, in load: return list(self.lazy_load())
    File "...\site-packages\langchain_community\document_loaders\unstructured.py", line 89, in lazy_load: elements = self._get_elements()
    File "...\site-packages\langchain_community\document_loaders\pdf.py", line 72, in _get_elements: from unstructured.partition.pdf import partition_pdf
    File "...\site-packages\unstructured\partition\pdf.py", line 54, in <module>: from unstructured.partition.pdf_image.analysis.bbox_visualisation import (
    File "...\site-packages\unstructured\partition\pdf_image\analysis\bbox_visualisation.py", line 16, in <module>: from unstructured_inference.inference.layout import DocumentLayout
    File "...\site-packages\unstructured_inference\inference\layout.py", line 19, in <module>: from unstructured_inference.models.base import get_model
    File "...\site-packages\unstructured_inference\models\base.py", line 7, in <module>: from unstructured_inference.models.chipper import MODEL_TYPES as CHIPPER_MODEL_TYPES
    File "...\site-packages\unstructured_inference\models\chipper.py", line 9, in <module>: import torch
    File "...\site-packages\torch\__init__.py", line 148, in <module>: raise err

  • @sulayamar8538 · a month ago

    What are the Ollama modules that were used, I don't want to install unimportant modules on my machine since it has only limited space.

    • @tonykipkemboi · a month ago

      @@sulayamar8538 did you watch the video?

  • @wesleymogaka · a month ago

    Thank you very much ("Ahsante sana"), Kip. Working on a bank/fintech chatbot and will use this info to build it.

  • @atharimam8591 · a month ago

    Can anyone tell me whether it is completely free to implement the technique in this video?

    • @tonykipkemboi · a month ago

      Yes, this is free to implement and you can do it offline once you've downloaded all the required modules.

  • @sharathkumarsadari6977 · a month ago

    If I use python_repl tool in combination with my sql agent to create plotly chart, how do I show it in streamlit app?

    • @tonykipkemboi · a month ago

      Good question, I haven't tried converting this to a Streamlit app yet. I would print out the entire output to see if I can parse portions of it and pass it to a Streamlit elements.

  • @torahulsingh · a month ago

    Thank you for posting this video! It appears that inference is pretty fast on your MacBook. Could you please share your MacBook config?

    • @tonykipkemboi · a month ago

      I'm using a MacBook Air 15" M2 with 16GB memory

  • @sivakumar7679 · a month ago

    Is it compulsory to pull the Mistral model (around 4 GB) from Ollama to run the project?

    • @tonykipkemboi · a month ago

      @@sivakumar7679 you can pick any other model.