Build a Chatbot in 15 Minutes with Streamlit & Hugging Face Using DialoGPT

  • Published: 2 Feb 2025

Comments •

  • @smartzgaming7424
    @smartzgaming7424 2 months ago +1

    WOW, I’ve learned so much, and it’s been incredibly fascinating. The process of exploring new knowledge and gaining fresh perspectives is both exciting and rewarding!

    • @DS-AIwithKV
      @DS-AIwithKV  2 months ago

      Thank you so much! This means the world to me

  • @HIMANSHU-mt1jk
    @HIMANSHU-mt1jk 2 days ago +1

    awesome

  • @vitalis
    @vitalis 1 month ago

    You look young, how and where did you learn all this? Kudos, subbed, keep it up

  • @matiasmartinez3639
    @matiasmartinez3639 1 month ago +1

    Hi, I'm a beginner programmer, and for my college I have to build a virtual library with a chatbot, so this helped me a lot, but I have a doubt. I'm a Spanish speaker, and for my college the bot has to be able to respond in Spanish. If I use this code but with another model, like "meta-llama/Llama-3.2-1B" for example, could it work? Btw, please keep making videos like this, they are so useful. You got a new subscriber ❤

    • @DS-AIwithKV
      @DS-AIwithKV  1 month ago

      Hi, thank you so much for your kind words. Yes, the Streamlit framework works with models like GPT-3.5 and Llama. With GPT-3.5, you need the OpenAI API and then call it to generate each response. With meta-llama/Llama-3.2-1B, though, I found the inference time too slow via Hugging Face: on a 40 GB GPU it took me about 15 minutes to get a single response, and that was nothing fancy, basically just one question, which is not what we are looking for when building a chatbot. You can instead try running Llama locally with Ollama (ollama.com/library/llama3.2), as sketched below. I will get to it in a later tutorial.
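
      For reference, here is a minimal sketch, not the exact code from the video, of a Streamlit chat loop that sends each prompt to a locally running Ollama server instead of Hugging Face. It assumes Ollama is installed, the model has been pulled (e.g. ollama pull llama3.2:1b), and the server is listening on its default port; the endpoint and JSON fields follow Ollama's REST API.

      # Minimal sketch: Streamlit chat UI backed by a local Ollama model.
      # Assumes `ollama pull llama3.2:1b` has been run and Ollama is serving
      # on localhost:11434 (its default port).
      import requests
      import streamlit as st

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint
      MODEL_NAME = "llama3.2:1b"                           # any locally pulled model tag

      st.title("Local Llama Chatbot")

      # Keep the conversation in session state so it survives Streamlit reruns.
      if "messages" not in st.session_state:
          st.session_state.messages = []

      # Replay the chat history on every rerun.
      for msg in st.session_state.messages:
          with st.chat_message(msg["role"]):
              st.markdown(msg["content"])

      if prompt := st.chat_input("Ask me anything (any language the model supports)"):
          st.session_state.messages.append({"role": "user", "content": prompt})
          with st.chat_message("user"):
              st.markdown(prompt)

          # One blocking request per turn; stream=False returns a single JSON reply.
          resp = requests.post(
              OLLAMA_URL,
              json={"model": MODEL_NAME, "prompt": prompt, "stream": False},
              timeout=120,
          )
          answer = resp.json().get("response", "(no response)")

          st.session_state.messages.append({"role": "assistant", "content": answer})
          with st.chat_message("assistant"):
              st.markdown(answer)

      Run it with "streamlit run app.py" (the filename is just an example); swapping MODEL_NAME for another locally pulled model is all it takes to experiment with, say, a model that answers in Spanish.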