ADVANCED Python AI Agent Tutorial - Using RAG

  • Published: 18 Nov 2024

Comments • 267

  • @loggerboy9325
    @loggerboy9325 9 months ago +207

    Just a tip for whoever is following along: the import `from llama_index.query_engine` needs to be `llama_index.core.query_engine` (see the sketch at the end of this thread).

    • @senorperez
      @senorperez 9 months ago +12

      How did you figure that out, bro? 😮❤
      Btw thanks a lot

    • @Al_Miqdad_
      @Al_Miqdad_ 9 months ago

      why

    • @senorperez
      @senorperez 9 months ago

      @@Al_Miqdad_ because llama_index.query_engine doesn't work unless you add .core

    • @martyallen6931
      @martyallen6931 9 months ago +3

      i love you

    • @loggerboy9325
      @loggerboy9325 9 months ago

      @@senorperez looked up the llama documentation
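
    A minimal sketch of the corrected import from this tip (hedged: module paths have moved between llama-index releases, and a later comment below reports PandasQueryEngine moving again to the separate llama-index-experimental package):

        # Corrected import per the tip above; assumes llama-index-core is installed.
        # On newer releases you may need the experimental package instead (see below).
        from llama_index.core.query_engine import PandasQueryEngine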

  • @sadiqkhawaja7019
    @sadiqkhawaja7019 9 months ago +24

    Detailed, no-nonsense, topical. One of the best coding channels on YouTube. Always looking forward to a new video.

  • @ReDoG129
    @ReDoG129 9 months ago +13

    This channel is a godsend; it instilled the fundamentals of Python in me, which helped me obtain a certification in robotics. You never cease to amaze me. 😊

  • @hypo30cal
    @hypo30cal 6 months ago +14

    If you are running this blindly without Tim's requirements file, please note that for security reasons `from llama_index.query_engine import PandasQueryEngine` is no longer the right import: pip install `llama-index-experimental` and then use the PandasQueryEngine class via `from llama_index.experimental.query_engine import PandasQueryEngine`.
    This is on Python 3.10. Finally, the PromptTemplate class is now at `from llama_index.core import PromptTemplate`. The documentation will really help, though (see the sketch after this thread).
    Thanks Tim.

    • @productscience
      @productscience 6 months ago +1

      Thank you, super helpful!! :)

    • @princezuko7073
      @princezuko7073 20 days ago

      What about the PDFreaders?
      ImportError: cannot import name 'PDFReader' from 'llama_index.readers' (unknown location)
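
    A hedged sketch pulling the updated imports from this thread together; the package split (llama-index-core, llama-index-experimental, llama-index-readers-file) matches recent llama-index releases, but exact paths can shift again, and the CSV filename below is only an assumption:

        # pip install llama-index llama-index-experimental llama-index-readers-file pandas
        import pandas as pd
        from llama_index.experimental.query_engine import PandasQueryEngine  # moved out of core
        from llama_index.core import PromptTemplate  # no longer importable from llama_index directly
        from llama_index.readers.file import PDFReader  # file readers live in a separate package now

        df = pd.read_csv("population.csv")  # filename assumed; use whatever CSV you followed along with
        query_engine = PandasQueryEngine(df=df, verbose=True)
        print(query_engine.query("How many rows does this table have?"))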

  • @josephabuo6027
    @josephabuo6027 9 months ago +15

    5 mins into the video and I am already excited about the future!

    • @TechWithTim
      @TechWithTim  9 months ago +2

      For sure it’s super cool!

  • @philippechassany7279
    @philippechassany7279 8 months ago +6

    To add context so the agent can refer to a previous response (like "save the response to my notes"), you can add:
    context = " ".join([f"role: {exchange['role']} content: {exchange['content']}" for exchange in st.session_state.messages])
    response = agent.query(context + "\n" + prompt)

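    A hedged sketch of how the commenter's idea could sit inside a Streamlit chat loop; the `agent` argument is assumed to be the ReActAgent built in the tutorial, and the widget label and session-state key are illustrative only:

        import streamlit as st

        def render_chat(agent) -> None:
            """Streamlit chat loop that feeds prior turns back to the agent."""
            if "messages" not in st.session_state:
                st.session_state.messages = []

            if prompt := st.chat_input("Ask the agent"):
                # Flatten the running history so the agent can see earlier turns.
                context = " ".join(
                    f"role: {m['role']} content: {m['content']}"
                    for m in st.session_state.messages
                )
                response = agent.query(context + "\n" + prompt)
                st.session_state.messages.append({"role": "user", "content": prompt})
                st.session_state.messages.append({"role": "assistant", "content": str(response)})

            # Re-render the whole conversation on every run.
            for m in st.session_state.messages:
                with st.chat_message(m["role"]):
                    st.write(m["content"])
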
  • @myanghua
    @myanghua 9 months ago +1

    This is the gold standard for this kind of coding tutorial. 💯 I hope more YouTubers were like him. Please keep up the good work.

  • @rahultino
    @rahultino 2 months ago

    This is so well presented. AI appears very scary to ordinary programmers who don't have deep knowledge of how AI works. This video shows how a programmer can use already-built models to produce useful agents. Thanks Tim for the video. Kudos.

  • @omghosal3301
    @omghosal3301 8 months ago +6

    For Windows: if `./ai/bin/activate` doesn't work, use `./ai/Scripts/activate` instead; that will do the trick ^^

  • @amanaggarwal4061
    @amanaggarwal4061 9 months ago +1

    One of the best videos on the internet regarding AI agents.

  • @vkphoenixfr
    @vkphoenixfr 2 months ago

    This is a very good video, very well structured and explained. Thanks a lot!

  • @adds5257
    @adds5257 4 months ago

    Very cool. More tutorials on LlamaIndex usage, please. This tool will help researchers manage knowledge. If it could also store images and generate an image as an answer based on the query's context, it would be even more useful. It could be used to build a personal library and a digital librarian.

  • @LeonardoGomez-lk5ei
    @LeonardoGomez-lk5ei 8 months ago

    I second that, the RAG toolkit is amazing.

  • @mushinart
    @mushinart 9 months ago +1

    Amazing video, Tim... I always wanted a fast and easy way to understand LlamaIndex... now I can build my own project fast... Thanks a million, brother

  • @srinivasguptha9538
    @srinivasguptha9538 8 months ago

    I love that you used venv. I find it more practical than the alternatives.

  • @bilalmohammed717
    @bilalmohammed717 5 months ago

    Excellent tutorial. It's clear enough to follow and implement. Keep up the good work.

  • @Al_Miqdad_
    @Al_Miqdad_ 9 months ago +2

    thank you very much for your feedback ❤❤❤❤

  • @suryapratap3622
    @suryapratap3622 8 months ago

    Awesome, great explanation. I spent days reading the docs, so I know the effort you put into generating this content. Thanks.

  • @SinovuyoLuzipho
    @SinovuyoLuzipho 7 months ago

    This is great, wow... 🎉 I can think of a lot of ideas for this now... but please, guys, play it safe with this... like when wiring your complex project to the net.. DevOps is very important for that... 😅 Otherwise this is great... ❤❤ Great content, Tim..

  • @myslates2854
    @myslates2854 8 months ago

    Tim, you saved my day, you are awesome. I will write in detail later how, but for now thanks for the brilliant working code.

  • @yuvrajkukreja1248
    @yuvrajkukreja1248 9 months ago +14

    More AI videos 😊 awesome work

  • @PedorEmilo
    @PedorEmilo 8 months ago

    So RAG stands for Really Awesome Guidance, nice.

  • @ahmadsaud3531
    @ahmadsaud3531 9 months ago +5

    Thanks, Tim. I've noticed that many of the RAG examples available on YouTube primarily focus on enhancing the model using PDFs, CSVs, or plain text. However, in practice, a significant portion of business data is stored in relational databases, such as Oracle or SQL Server. Could you provide an example demonstrating how RAG can be applied to data from relational databases?

    • @ahmadsaud3531
      @ahmadsaud3531 8 months ago

      Hi Tim, I am waiting for your answer, please

    • @arnav3674
      @arnav3674 7 months ago

      @@ahmadsaud3531 did you get an answer?

    • @ahmadsaud3531
      @ahmadsaud3531 7 months ago

      @@arnav3674 not yet

  • @leonvanzyl
    @leonvanzyl 8 months ago

    @Tim Excellent video.
    The reason the app wasn't able to save the note (end of video) is that you need to include chat memory/history. The LLM has no view of the previous messages (see the sketch below).
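
    A minimal sketch of one way to add that memory, assuming the tools and llm objects built in the tutorial; ChatMemoryBuffer and agent.chat() come from llama-index-core, but check the docs for your installed version:

        from llama_index.core.agent import ReActAgent
        from llama_index.core.memory import ChatMemoryBuffer

        def build_chat_agent(tools, llm):
            """Return a ReActAgent that remembers earlier turns of the conversation."""
            memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
            return ReActAgent.from_tools(tools, llm=llm, memory=memory, verbose=True)

        # agent.chat() keeps prior turns in memory, so "save that to my notes" can refer
        # back to the previous answer; agent.query() treats each prompt as a fresh start.
        # agent = build_chat_agent(tools, llm)
        # while (prompt := input("Enter a prompt (q to quit): ")) != "q":
        #     print(agent.chat(prompt))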

  • @ayanjawaid2251
    @ayanjawaid2251 9 months ago

    Tim, we need more content like this, or a course... and as always, awesome work ❤

  • @prakhars962
    @prakhars962 9 months ago +1

    In the next 5 years they'll be able to write research papers if you just give your idea and results. This is mind-blowing.

  • @inocentesantiago3194
    @inocentesantiago3194 8 months ago

    This looks like a helpful tutorial, hope I can learn something!

  • @saravanannatarajan6515
    @saravanannatarajan6515 9 months ago

    Best explanation using code, hats off bro

  • @CristianCamacho-b3t
    @CristianCamacho-b3t 8 months ago

    Appreciate you sharing your skills, super helpful for noobs like me.

  • @DevelopmentMyTechLab
    @DevelopmentMyTechLab 7 months ago

    Great topic! It would be awesome if you extended this example with CrewAI.

  • @rodrigogazcon506
    @rodrigogazcon506 8 months ago

    Thanks for sharing Tim.

  • @BorisHrzenjak
    @BorisHrzenjak 9 months ago +1

    Great stuff, even though I had to bail on the PDF part because I got some weird stuff going on; it said I have no OpenAI API key. I battled with it for two days and figured out that the code works without that part, so... no PDF for me, but everything else works fine :)
    I will definitely play around with llama-index more

    • @sonalithakur8234
      @sonalithakur8234 9 months ago

      Can you please tell me how you built the project without an OpenAI key?... this is the only part I'm facing an issue with

    • @BorisHrzenjak
      @BorisHrzenjak 9 months ago

      @@sonalithakur8234 I didn't build it without an api_key. I removed the part of the code that was meant to read the PDF because it was giving me problems.

    • @_c_v
      @_c_v 9 months ago

      Yeah, same problem for me. Did anyone figure it out?

    • @mbasemhassen2947
      @mbasemhassen2947 8 months ago

      I think the issue was that the function was not being called in the OpenAI section. For me, the model was not being used: around line 5 or 7 of the main code, the line that ends with the OpenAI input isn't working because it isn't a function, which is why the model section near the end wasn't working. So if you want this code to run, I think that function needs to be defined.

  • @seanh1591
    @seanh1591 9 months ago +3

    Hi Tim - Thanks so much for the video. Great job!!! Would you be able to address not using OpenAI (model, agent) but rather using an open-source LLM?

  • @lechx32
    @lechx32 9 months ago

    Thank you for the video. It is interesting and clear

  •  1 month ago

    Thanks for the really nice video! You explained everything in detail, and I loved it! I would like to ask you: Would you say that RAG can be called AI Search Agent? Is there any autonomy in a RAG application, for example, when the model generates an answer from the relevant context? Would you say it’s correct to define RAG as an agent? I'm not criticizing your title, of course. It's just that some describe it as a RAG agent and others as a RAG chatbot, and I'm really confused. Would love to hear your thoughts! Thank you!

  • @krishnak3532
    @krishnak3532 9 months ago +4

    Hey Tim,
    Can you make a video with a Mistral model loaded locally rather than using an OpenAI API key?

  • @khalifarmili1256
    @khalifarmili1256 6 months ago

    Lots of thanks in the comments section already, but I owe you another one: THANKS A LOT!!

  • @AbelMartinez-xb3gl
    @AbelMartinez-xb3gl 8 months ago

    Excited to experiment more.

  • @malikanaser8251
    @malikanaser8251 6 months ago

    Hi man, you are the best. I wish this were about extracting data from text or PDFs, and also harnessing data from LLM agents to store it in a knowledge graph (KG) and have the LLM query from it. All the videos I watched about that were poor and not practical solutions: either they don't work, give poor results, use paid software, or don't accumulate data in the KG database without duplicates... Man, you are the one for this project; if you did it, I assure you your channel would be on fire.

  • @pauloseixas5452
    @pauloseixas5452 9 months ago

    Alright, let's go. I'll get all hyped up regardless of what comes of it. Thanks Tim

  • @user-tl1qc9ym2y
    @user-tl1qc9ym2y 8 months ago

    please make a whole series on this

  • @mariamanuel2795
    @mariamanuel2795 8 months ago

    Very informative video!

  • @AaronGayah-dr8lu
    @AaronGayah-dr8lu 5 months ago

    This is brilliant. Thank you.

  • @ShrutiLokhande-v2d
    @ShrutiLokhande-v2d 8 months ago

    This is amazing. Can you create a next video on automation-script generation and SQL query generation (for a complex schema) using RAG or AI agents? (But use open-source models.)

  • @vrajmalvi7194
    @vrajmalvi7194 7 months ago

    @TechWithTim can you make a video on how you go through any documentation: what your mindset is, where you start, and what flow you follow? Please and thank you :)

  • @awesomeowwww
    @awesomeowwww 1 month ago

    Very detailed and high-quality stuff!! But do you think it's safe to use without an isolated Docker container? It could potentially damage your system, no?

  • @mahmoudabuzamel7038
    @mahmoudabuzamel7038 6 months ago

    Great tutorial Tim!

  • @brandonhernandezvillantes2937
    @brandonhernandezvillantes2937 8 months ago

    Nice one Tim!

  • @stephenbonifacio3846
    @stephenbonifacio3846 8 months ago +4

    Not sure how long ago this was recorded, but the correct import for PandasQueryEngine as of the latest version of llama-index is:
    from llama_index.core.query_engine import PandasQueryEngine

  • @damianaguila7841
    @damianaguila7841 8 months ago

    Thanks for sharing.

  • @SivaMahadevan-ny7vm
    @SivaMahadevan-ny7vm 7 months ago

    This tutorial is super helpful. Thanks Tim. I was able to get the app working. When I ask a question about Canada or population, the agent is able to answer by looking at the PDF, CSV, etc. But when I ask a question like "what is a solar eclipse", the agent is still able to answer it. How can I prevent that from happening? I just want answers that are available in the documents.

  • @andewwayne7751
    @andewwayne7751 3 months ago +2

    Great video, but llama-index has made some major changes, so the import statements as shown in the video and download files are incorrect. It is taking some time to figure out the new structure of llama-index.
    Does anyone have the new import/from statements that llama-index now uses?

  • @egericke123
    @egericke123 7 months ago +1

    It doesn't save a note using the previous prompt because I think that prompt is lost. I think you are calling a new instance of the model each time you give a new prompt, so you would have to update or append the prompt outside the while loop to get it to remember the entire conversation... But I could definitely be wrong, just my intuition :P

  • @juanbetancourt5106
    @juanbetancourt5106 9 months ago

    Thank you Tim.

  • @chymoney1
    @chymoney1 9 months ago

    This is really cool stuff, awesome video

  • @jmsolorzano13
    @jmsolorzano13 5 months ago

    Hi Tim... your whole channel is great...! I want to create a RAG agent, but for a website; do you know if that is possible?
    😊

  • @dimox115x9
    @dimox115x9 5 months ago +4

    I did pip install llama-index-experimental so many times, and also the upgrade version.
    I did `from llama_index.experimental.query_engine import PandasQueryEngine` and it says `no module named llama_index.experimental`.
    Everything sounds good but that part doesn't work; anyone?
    Weird, plz anyone?

  • @JoseSalerno-pf5ph
    @JoseSalerno-pf5ph 4 months ago

    Hey Tim! Is there a way to do this without using an API key?
    Awesome video, they are really helpful!

  • @MusabFarah-s2v
    @MusabFarah-s2v 3 months ago

    Good vid, but it needs to be updated; llama-index keeps changing.

    • @clowd1e449
      @clowd1e449 2 months ago

      Did it work for you? I've gotten way too many errors.

  • @enkhbaatardorjsuren9427
    @enkhbaatardorjsuren9427 8 months ago

    Brilliant!

  • @ANG747
    @ANG747 8 months ago

    Makes me want to build my own AI chatbot.

  • @marcomaiocchi5808
    @marcomaiocchi5808 4 days ago

    Dude was forced to go from Sublime to VS Code, so he made VS Code look like Sublime.

  • @pntra1220
    @pntra1220 8 months ago

    Hi Tim, first of all, great tutorial! I wanted to ask if you know whether it's possible or efficient to use LlamaIndex to do RAG over 300k pages of PDFs. I've been researching, and a lot of people say that I will have to fine-tune the embedding model and use one from Hugging Face, and also use metadata to make the results better. However, I am wondering if using LlamaIndex is the correct approach or if I will need to create my own RAG system. Thank you for taking the time to read this.

  • @SolidBuildersInc
    @SolidBuildersInc 5 months ago

    It's really chilly in here, what's going on? 🤣🤣🤣
    So, are you mitigating the need for multiple agents with the idea of having one agent that uses the proper tools and data sources to provide responses?
    This simplifies the code quite a bit.
    I am not sure why you didn't create a separate file for each engine?
    It would also probably allow a file picker instead of downloading the file.
    Are you still going to reduce the chance of hallucination with this approach?
    Thanks for sharing.....
    Great presentation

  • @vdzneladze1
    @vdzneladze1 9 months ago +3

    Hi guys, I encountered the following error message:
    from llama_index import PromptTemplate
    ImportError: cannot import name 'PromptTemplate' from 'llama_index' (unknown location)
    Please advise

    • @stevenzusack9668
      @stevenzusack9668 9 months ago +5

      'llama_index' should be 'llama_index.core' for both the import and the pip install. At least, that's what worked for me. So, the pip install is 'pip install llama-index.core pypdf python-dotenv pandas' and the import is 'from llama_index.core.query_engine import PandasQueryEngine'

    • @ryansumbele3552
      @ryansumbele3552 7 months ago +1

      @@stevenzusack9668 thank you for your response, this just worked for me

  • @TanzerTel
    @TanzerTel 9 months ago +1

    In case you see this error: `ImportError: cannot import name 'OpenAI' from 'llama_index.llms' (unknown location)`, do this: `from llama_index.llms.openai import OpenAI`.
    And for the error `ImportError: cannot import name 'note_engine' from 'note_engine' (/Users/macbookpro/AIAgent/note_engine.py)`,
    change `if not os.path.exist(note_file):` to `if not os.path.isfile(note_file):`
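
    A hedged sketch of a note-saving tool in the spirit of the tutorial's note_engine.py, using the commenter's os.path.isfile check; the file path, tool name, and description are assumptions:

        import os
        from llama_index.core.tools import FunctionTool

        note_file = os.path.join("data", "notes.txt")  # path assumed

        def save_note(note: str) -> str:
            """Append a note to the notes file, creating it (and its folder) if needed."""
            os.makedirs(os.path.dirname(note_file), exist_ok=True)
            if not os.path.isfile(note_file):
                open(note_file, "w").close()
            with open(note_file, "a") as f:
                f.write(note + "\n")
            return "note saved"

        note_engine = FunctionTool.from_defaults(
            fn=save_note,
            name="note_saver",
            description="Save a text note to a file for the user",
        )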

  • @shillowcollins6392
    @shillowcollins6392 8 months ago

    I think this is way easier than the Langchain framework

  • @valkyrchesa
    @valkyrchesa 3 months ago

    Thanks for the video. Can this be done without llama-index and OpenAI, like using an AI model on your local PC?

  • @tylerpeterson420
    @tylerpeterson420 9 months ago

    Your vid quality is legit, what's your setup?

  • @harmansavla7510
    @harmansavla7510 9 months ago

    Love your content❤

  • @ArinPandey-z4h
    @ArinPandey-z4h 2 months ago

    Hey Tim, can we integrate matplotlib to actually get plots when we are querying the Excel file?

  • @Blimaxx
    @Blimaxx 5 months ago

    So freaking good

  • @theuser810
    @theuser810 7 months ago +1

    For me, the agent keeps using the wrong column:
    df[df['Country'] == 'Canada']['Population']
    despite there being no column named 'Population'.

  • @spotnuru83
    @spotnuru83 5 months ago +1

    Is there a way to do this without OpenAI? If we want to use it at the enterprise level, giving data to OpenAI is not secure. Can we use any open-source LLMs and achieve the same?

  • @kayodedaniel6174
    @kayodedaniel6174 9 months ago

    Thanks for the information @internetMadeCoder, but I have a question. I struggle at learning programming languages, which makes it frustrating and makes the process feel tiring. When I'm learning from a video it seems pretty easy, but when I want to use it to solve exercises it feels difficult, and I also forget what I learnt the previous days. How do I work on this and learn better? What can you advise me to do?

  • @Aaron-l9v9r
    @Aaron-l9v9r 9 months ago

    Great video, just one question: what would I have to do if I wanted to use open-source tools instead of the OpenAI API? Thanks.

  • @pottoker612
    @pottoker612 6 months ago +2

    tim is on the rag....

  • @varungonsalves6249
    @varungonsalves6249 8 months ago

    Hi there, great video. I was wondering if this same method would work if the LLM was loaded via Llama instead of using the OpenAI LLM?

  • @zengxuezhi
    @zengxuezhi 6 months ago

    Thanks Tim, this is a really informative video. Just one question: in your code, the LLM is OpenAI by default. I tried using a local LLM such as Llama ('codellama-7b-instruct.Q8_0.gguf' loaded by LlamaCPP) and left everything else the same as your code, but it won't produce the desired result that your code shows. Could you make another video using a local LLM rather than OpenAI that achieves the same Python AI agent functionality? Thanks in advance!

  • @malekmot
    @malekmot 9 months ago

    Awesome! Hey Tim, can you tell me what theme and font you are using for VS Code?

  • @madhudson1
    @madhudson1 6 months ago

    What are your thoughts on using some of the open-source LLMs for this, via Ollama?

  • @sdkfeldfwerer6751
    @sdkfeldfwerer6751 8 months ago +1

    Can I use it with my git repos (JS, TS on Node.js)? It would be great to build a custom, local copilot for coding.

  • @xspydazx
    @xspydazx 6 months ago

    Question: once a vector store is loaded, how can we output a dataset from the store to be used as a fine-tuning object?

  • @ignaciopincheira23
    @ignaciopincheira23 4 months ago

    Hi, could you convert complex PDF documents (with graphics and tables) into an easily readable text format, such as Markdown? The input file would be a PDF and the output file would be a text file (.txt).

  • @cclementson1986
    @cclementson1986 7 months ago

    Perhaps extend this to a web-based interactive chat that allows a user to choose between different LLM models, like the new Llama 3 vs ChatGPT.

  • @neilpayne8244
    @neilpayne8244 8 months ago

    Thanks for the great vid. I tried following along but got too many import errors. I also tried building a new virtual env and then installed the modules from your requirements.txt mentioned in this thread, and that also doesn't work (llama_index tries to load pkg_resources, which is not found).

  • @tinellixavier8022
    @tinellixavier8022 9 months ago

    Maybe a beginner question: I wonder if it is possible to make an AI agent that can use both the normal model trained on its own dataset and RAG with a provided data source as in this course, plus internet search, and compile these sources in the output?

  • @bhasadish
    @bhasadish 9 months ago

    Only if you pass the context history with the new prompt will it be able to save to the note. Passing context history to the LLM is an integral part of any RAG app; otherwise it loses context.

  • @mr_e_forex
    @mr_e_forex 9 months ago +2

    Tech With Tim, can you please make something for the forex traders?

  • @samikrothapalli3957
    @samikrothapalli3957 8 months ago +1

    Hi, so at 18:37 you get the Pandas output; however, for me I get df[df['Country'] == 'Canada']['Population2023'].values[0] as my pandas output. I was wondering if you could help with that?

  • @pauloseixas5452
    @pauloseixas5452 9 months ago

    Thank god you're still aware of the fundamental importance of being free (only by being free can a poor loser like me dream of becoming a coder before rotting completely).

  • @Hello_-_-_-_
    @Hello_-_-_-_ 9 months ago

    Cool video. Random question, are you ever going to move out of Canada? I know a few that have tried but there are too many hoops.

    • @TechWithTim
      @TechWithTim  9 months ago

      Yes I’m currently living in Dubai

  • @AndyPandy-ni1io
    @AndyPandy-ni1io 5 months ago

    The error message "FileNotFoundError: [Errno 2] No such file or directory: '...config_sentence_transformers.json'" means that the llama_index library is trying to load the specified embedding model ("BAAI/bge-m3") from your local machine, but it can't find the necessary configuration file.
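
    A hedged sketch of pointing llama-index at that HuggingFace embedding model explicitly, assuming the separate llama-index-embeddings-huggingface integration is installed; the first run downloads and caches the model files, which usually avoids this kind of missing-config error:

        # pip install llama-index-embeddings-huggingface
        from llama_index.core import Settings
        from llama_index.embeddings.huggingface import HuggingFaceEmbedding

        # Downloads BAAI/bge-m3 from the Hugging Face Hub on first use and caches it locally.
        Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-m3")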

  • @mallunightmares5845
    @mallunightmares5845 7 months ago

    How do you build an autonomous RAG LLM agent with function calling that is connected to an external REST API like the Microsoft Graph API? Can you make a video on this?

  • @mejia414
    @mejia414 8 months ago

    Very good video. Question: how do I get the output in pandas, dict, or list format?

  • @arshadsafi8317
    @arshadsafi8317 8 months ago +1

    Everything is great except a few things. Idk why, but the llama_index imports used in the video have to be changed slightly, for instance "from llama_index.core.agent import ReActAgent" instead of "from llama_index.agent import ReActAgent"; same with the prompts file ('from llama_index import PromptTemplate' still won't work, idk why). Apart from that, am I the only one getting error 429, even though I haven't had a single usage (according to the OpenAI API usage page)? HELP NEEDED!

    • @TheFeanture
      @TheFeanture 8 months ago

      openai.RateLimitError: Error code: 429
      Same problem here; it's not working for me. Did you find a solution?

    • @TheFeanture
      @TheFeanture 8 months ago

      For me, the CSV file was too long. I just deleted everything after Canada, and now it is working.

    • @arshadsafi8317
      @arshadsafi8317 8 months ago

      @@TheFeanture The problem was actually with the OpenAI account. Unfortunately, the $5 free credit you receive when you sign up for a ChatGPT account expires after 3 months. I'd had my account for nearly two years, so it had expired. Solution: create a new OpenAI account with a new phone number.

    • @mikkelchristensen4237
      @mikkelchristensen4237 7 months ago

      Did you figure out what the updated version of 'from llama_index import PromptTemplate' is?

    • @wolfofthelight5690
      @wolfofthelight5690 7 months ago

      @@mikkelchristensen4237 from llama_index.core import PromptTemplate

  • @DarshanK-h7s
    @DarshanK-h7s 8 months ago

    I am using Azure OpenAI. The agent somehow does not work and gives "openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}". All the credentials (API key, endpoint) were stored in the .env file.
    Any solution or workaround for this?

  • @adamabdullah6789
    @adamabdullah6789 8 months ago

    Sorry for the basic question: can we turn this into an API that can be consumed? Thanks

  • @WeirdoPlays
    @WeirdoPlays 4 months ago

    What if we want to use an existing index from vector databases?

  • @cematilkan8553
    @cematilkan8553 8 months ago

    I wonder how we can use local LLMs like Ollama or Mistral with your code.