3 Ways to Make a Custom AI Assistant | RAG, Tools, & Fine-tuning

  • Published: 10 Sep 2024

Comments • 84

  • @ShawhinTalebi
    @ShawhinTalebi  7 months ago +1

    👉More on LLMs: ruclips.net/p/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0
    Drop a comment for ShawGPT 👇
    --
    Resources
    [1] ShawGPT (No code): chat.openai.com/g/g-fdVpmWWIp-shawgpt
    [2] Playground: platform.openai.com/playground
    [3] Assistants API: platform.openai.com/docs/assistants/overview
    [4] Assistants Doc: platform.openai.com/docs/api-reference/assistants
    [5] More on tools: platform.openai.com/docs/assistants/tools/code-interpreter
    [6] Fine-tuning Guide: platform.openai.com/docs/guides/fine-tuning
    [7] Fine-tuning Doc: platform.openai.com/docs/api-reference/fine-tuning
    [8] Fine-tuning Data Prep: cookbook.openai.com/examples/chat_finetuning_data_prep

  • @saadowain3511
    @saadowain3511 6 months ago +2

    Thanks for everything you do. Please continue.

  • @syamalaburugula5965
    @syamalaburugula5965 1 month ago

    Easy to follow and understand. Thank you so much for sharing your knowledge!

  • @wg5920
    @wg5920 6 months ago +2

    Nice to see your own tutorial applied in the comments in real time....

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago +1

      I try to walk the walk -ShawGPT

  • @fuad471
    @fuad471 3 months ago

    It was a pleasure watching you and learning new things about LLMs.

  • @caileymitchell5325
    @caileymitchell5325 1 month ago

    Hey Shaw, appreciate your videos!

  • @ifycadeau
    @ifycadeau 7 months ago +3

    ShawGPT is lit 🔥

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      Thanks! I'm trying 😂 -ShawGPT

  • @jimishthakkar
    @jimishthakkar 1 month ago

    This is too good.

  • @adangerzz
    @adangerzz 7 months ago

    Darn it, Shaw, I missed the first 90 seconds because I was distracted by identifying all your musical gear. I have to start over now! :) (But seriously, thanks for the info!)

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      LOL! Should have made an AI for that 😂

  • @BrooksColeHOLO
    @BrooksColeHOLO 5 months ago

    Really great video, thank you. Please consider using your hand gestures for conveying meaning rather than as an oscilloscope for voice volume. Greater comprehension will ensue.

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      That's good feedback. I'm still working on my hand gestures during extemporaneous speech 😅

  • @satsanthony4452
    @satsanthony4452 7 months ago

    Well explained. Good reference to the cooking shows.😆

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago

      Haha thanks! Food & data are my two great loves 😂 -ShawGPT

  • @toddbristol707
    @toddbristol707 3 months ago

    Thanks!

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      Thank you! Glad it was helpful :)

  • @somerset006
    @somerset006 3 months ago

    Very nicely done! I think that the comparison between the first option and the second option would have been more fair if you had provided more examples of your answers in the first one. Do you actually use your auto-responder on YT? Thanks!

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +1

      I used it here and there (you may see some of my comments signed -ShawGPT).

  • @FrancescoFiamingo99
    @FrancescoFiamingo99 3 months ago

    dear Shaw, always great, I'm really learning a lot thanks to you... even if I'm not from the USA and an old guy :) :) - I have a few questions: 1) the video being 3 months old, is it now possible to fine-tune with files rather than the JSON question/answer format? 2) using the RAG Assistants API, how can I load the files into the assistant once so they become part of the "model", for faster interactions - or is it already like that? 3) when we speak about files in RAG, is it possible to use PowerPoint presentations? 4) it would be great if you could make an example of using the "tools" with simple Python code, to understand how to integrate those in the assistant. sorry for maybe simple or even wrong questions :) :) :)

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      Great to hear! Glad the content is helpful.
      1) While I haven't worked with OpenAI's fine-tuning API since this video, I would imagine it's similar to what I do here. If you want to fine-tune on unstructured text you may need to pursue more custom fine-tuning tools.
      2) For what I showed here, the files only need to be provided once.
      3) Yes! While this can be complicated if building the RAG system from scratch, OpenAI's API seems to handle them out of the box.
      4) Great suggestion. I'll add that to my list :)

  • @user-pv9fg2ei1b
    @user-pv9fg2ei1b 2 months ago

    great video!

  • @EspressDelivery
    @EspressDelivery 2 months ago

    "This is like one of those cooking shows and we cooked the pasta last night and we're gonna eat it in front of you"
    "Thus began Shaw Talebi's foray into Mukbang videos"

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      LOL my dream job 😍... bread Mukbangs only

  • @ravityagi7441
    @ravityagi7441 2 months ago

    Hi Shaw, thanks for the wonderful video! I wanted to know the cost aspects of your fine-tuning example. How does OpenAI charge for fine-tuning?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      The training cost was about $0.80

  • @terryliu3635
    @terryliu3635 5 months ago

    Great series of videos! Thank you for sharing. What would you recommend after watching these videos to dive deeper into the Gen AI development field?

    • @ShawhinTalebi
      @ShawhinTalebi  5 months ago +1

      Thanks for watching! I'd recommend 2 things. 1) interview practitioners in the field. 2) do a hands-on project.

  • @somerset006
    @somerset006 3 months ago

    Based on your experience with fine-tuning, would you say it's very similar to the few-shot learning we started including in prompts? Thanks!

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +1

      It is similar in that we are providing examples to the model to use for future generations.
      However, fine-tuning has 2 key differences. 1) it can pass along far more examples than few-shot learning. 2) fine-tuning updates the internal parameters of the model while few-shot learning leaves them as is.
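
The difference above can be made concrete: few-shot examples live inside the prompt, while fine-tuning examples live in a training file. A minimal sketch of preparing data in OpenAI's chat fine-tuning JSONL format (the two example conversations here are invented for illustration):

```python
import json

# Each training example is one full chat: the system prompt, a user
# message, and the assistant reply we want the model to imitate.
examples = [
    {"messages": [
        {"role": "system", "content": "You are ShawGPT, a friendly YouTube comment responder."},
        {"role": "user", "content": "Great video, thanks!"},
        {"role": "assistant", "content": "Glad it was helpful! -ShawGPT"},
    ]},
    {"messages": [
        {"role": "system", "content": "You are ShawGPT, a friendly YouTube comment responder."},
        {"role": "user", "content": "Can you cover RAG next?"},
        {"role": "assistant", "content": "It's on the list! -ShawGPT"},
    ]},
]

# Fine-tuning files are JSON Lines: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Unlike few-shot prompting, this file can hold hundreds of such conversations, and training on it updates the model's weights rather than just its prompt.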

  • @MeirMichanie
    @MeirMichanie 2 months ago

    I discovered your channel yesterday and I am hooked, great job. It would be nice to see a video of fine-tuning ShawGPT using HF. I saw a video you did running on Colab using Mistral-7b; any chance of doing a video using your laptop (Mac) or using HF Spaces?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      Thanks for the great suggestions! The QLoRA video uses HF to implement another version of ShawGPT using Colab. I plan on doing a future video on local fine-tuning on Mac with Llama3.

    • @ShawhinTalebi
      @ShawhinTalebi  1 month ago +1

      Here's an example for M-series Macs: ruclips.net/video/3PIqhdRzhxE/видео.html
      P.S. your comment made it into the video :)

  • @justeverything9658
    @justeverything9658 4 months ago

    Thank you for the very detailed session. One issue I've noticed is that the run = wait_for_assistant(thread, run) command didn't give me any output even after an hour or so, and I had to stop it. Is there anything I am missing? Kindly suggest.

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      Thanks for the question. Not sure what that could be. Did this happen every time you ran the call, even over different days?
      You could also try troubleshooting in the assistant's playground.

    • @justeverything9658
      @justeverything9658 4 months ago

      @@ShawhinTalebi Sorry for the late reply, it happens every time I run it. The "run.status" shows InProgress and never completes.

  • @robins80
    @robins80 5 months ago

    Which version of the openai Python package should one use to get the sample code to run? I installed 1.14.3 and got "cannot import name 'OpenAI' from 'openai'".

    • @ShawhinTalebi
      @ShawhinTalebi  5 months ago

      I used v1.11.1
      Full requirements list is available here: github.com/ShawhinT/RUclips-Blog/blob/main/LLMs/ai-assistant-openai/requirements.txt

  • @sisu007
    @sisu007 7 months ago +1

    Come on boy 🎉🎉🎉

  • @AI_ML_DL_LLM
    @AI_ML_DL_LLM 7 months ago +2

    Can you make a video fine-tuning an open-source LLM? Thanks!

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      Coming very soon!

    • @Shubhampalzy
      @Shubhampalzy 3 months ago

      @@ShawhinTalebi I wish to make a chatbot using open-source LLMs, fine-tuned on my custom dataset. Since the dataset will be big, it will have to use RAG. Can you please make a video on this, or share the links if you have already done so?
      Thanks a lot for your awesome videos!

  • @tomerva22
    @tomerva22 2 months ago

    Can we also make an AI assistant using the Hugging Face API?
    What are the pros/cons of doing it?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago +1

      Yes! Some pros of using HF are more flexibility in system design (e.g. choice of LLM) and potentially lower latency, because the system can be self-hosted. The downside, however, is that self-hosting is more involved and thus requires the right expertise to implement and maintain the system. Additionally, you will need to provision the compute resources to run the LLM.

  • @jaa928
    @jaa928 4 months ago

    Thank you as always for the informative content! Apologies for the n00b question, but which IDE are you using? I tried opening the repository with PyCharm CE but my environment doesn't show the run options in the Jupyter notebooks. Is it PyCharm Professional or something else?

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      I'm using JupyterLab! Installation steps are provided here: jupyterlab.readthedocs.io/en/stable/getting_started/installation.html

  • @mgarlabX
    @mgarlabX 6 months ago

    Hi Shaw. Amazing instructions, I am enjoying them a lot. One question, please: sometimes, and randomly, the function "wait_for_assistant" freezes at "client.beta.threads.runs.retrieve" without error messages. Any ideas how to deal with that?

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago +1

      Completions can take a few minutes to generate for some prompts. I'd give it some time to confirm that it's truly freezing.

  • @abhinavkashyap3666
    @abhinavkashyap3666 6 months ago

    If I am implementing a RAG architecture, how do I deal with the documents (on which my AI chatbot is going to work) in the process of fine-tuning? Do I need to somehow provide their context or not?

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago

      Good question! You can certainly include the RAG prompts in the fine-tuning process. A simpler approach, however, would be to fine-tune your model, then incorporate RAG into the FT model.
      A future video will do exactly this to improve ShawGPT 😎

  • @BlueDattebayo
    @BlueDattebayo 5 months ago

    Thank you so much for the detailed breakdown Shaw. I was wondering if there is a way to bring this execution into the front-end side of things, i.e. from a Jupyter notebook into Streamlit/Gradio? This is intended for an interactive PDF-trained model. Looking forward to hearing from your end! 🙂

    • @ShawhinTalebi
      @ShawhinTalebi  5 months ago +1

      Definitely! While I don't cover how to do that with the OpenAI API in this series, I added that to my list for future videos.

  • @zahrahameed4098
    @zahrahameed4098 4 months ago

    Is using RAG with LLMs free or paid? If free, what options do I have?

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago +2

      There are both! An example of paid is the premium version of ChatGPT. Free options involve using open-source models/libraries and hosting them locally. I walk through an example of that here: ruclips.net/video/Ylz779Op9Pw/видео.htmlsi=nukPFu907oc3yT58

    • @zahrahameed4098
      @zahrahameed4098 4 months ago

      @@ShawhinTalebi Okay. Thank you so much!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 7 months ago +6

    Not sure about others, but for me shorter content is better, like around 10 minutes. Long content means I need to find a larger block of time, so it is harder to watch spontaneously. I think your content is high quality.

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      This is good feedback. I am the same way. For this it was easier to make one long video instead of 3 shorter ones, but I'll keep that in mind for future videos!

  • @fbardeau
    @fbardeau 6 months ago

    hi,
    huge fan of your videos
    have a basic question: why not use CSV or JSON files, like in fine-tuning, for RAG to improve retrieval? is PDF really ok?
    thanks

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago +1

      Good question. If I'm understanding correctly, you're asking about the file type used for knowledge retrieval. Since the details of how this is done by OpenAI are unclear, I'm not sure what impact that would have in the current use case.
      However, more generally, most RAG systems translate source documents into a vector database, so the quality of retrieval will depend on how well information can be parsed from the source files. In which case, CSV or JSON might be easier.
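
The parse-then-retrieve step described above can be illustrated with a deliberately toy sketch: word-count vectors stand in for real embeddings, and a plain list stands in for the vector database (the three documents are made up):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# The "vector database": each document stored alongside its vector.
docs = [
    "fine-tuning updates model parameters",
    "rag retrieves documents from a vector database",
    "the playground is a sandbox for assistants",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]
```

A real system would replace `embed` with a learned embedding model and the list with an actual vector store, but this is the sense in which retrieval quality depends on parsing: only text that makes it out of the source file and into the vectors can be found again.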

  • @hannefarhat8687
    @hannefarhat8687 6 months ago

    So if I understand, if I want a simple AI chatbot, I can use the OpenAI chat in the Playground instead of an assistant?

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago

      If you are fine with using it only in the sandbox, then yes!

    • @hannefarhat8687
      @hannefarhat8687 6 months ago

      @@ShawhinTalebi sorry, what's the sandbox?

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago

      @@hannefarhat8687 Sorry I meant Playground!

  • @Aabdulnour01
    @Aabdulnour01 1 month ago

    Does this still work?

    • @ShawhinTalebi
      @ShawhinTalebi  23 days ago

      I think the only change is that the assistants API is no longer in beta.
      I have more recent example code here: ruclips.net/video/3JsgtpX_rpU/видео.htmlsi=XZWmLIomSnx2FFeQ&t=446

  • @ifycadeau
    @ifycadeau 7 months ago

    Commenting so ShawGPT notices me 🤪

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago +2

      Hey, I noticed ya! 😏 -ShawGPT

    • @ifycadeau
      @ifycadeau 6 months ago

      @@ShawhinTalebi woah!!

  • @calebpcls5166
    @calebpcls5166 3 months ago

    I am in ChatGPT and I don't see any option to create a personalized GPT (the GPT builder at 2:30). I have the individual subscription; should I have a different one?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      I believe this is only available to premium users at the moment.

  • @guerbyduval4104
    @guerbyduval4104 4 months ago

    Now why are you showing how to use ChatGPT? Why not use Hugging Face models? Is it too hard for you?

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      I use HF models in the next videos of this series: ruclips.net/p/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0

  • @cyber-dioxide
    @cyber-dioxide 5 months ago

    That was a very time-wasting video; the title made it seem like we were going to make a small text-generation model with a Hugging Face dataset. Instead, it was using an API.

    • @ShawhinTalebi
      @ShawhinTalebi  5 months ago

      Thanks for the feedback. I share examples using Hugging Face in the videos linked below.
      QLoRA: ruclips.net/video/XpoKB3usmKc/видео.html
      RAG: ruclips.net/video/Ylz779Op9Pw/видео.html

  • @ifycadeau
    @ifycadeau 7 months ago

    For free?? It feels like Christmas 🥹🙏🏾✨