Coding tutorial: RAG with LangChain and Llama3

  • Published: 10 Jan 2025

Comments • 9

  • @ygfster · A month ago

    Nice video and notebook, thanks for sharing!

  • @akilkhatri1089 · A month ago

    Thank you!!

  • @kylewilliams4721 · 28 days ago

    So this isn't a local Llama3 model? Does Hugging Face store API request data?

    • @yanaitalk · 26 days ago

      This is a local model, downloaded from Hugging Face.

    • @kylewilliams4721 · 21 days ago

      @@yanaitalk Just to be clear, you downloaded the llama3 model on your hard drive?

    • @yanaitalk · 21 days ago

      @@kylewilliams4721 Yes, in the notebook it is downloaded to the local drive in Colab.

    • @AhsenWaheed-230 · 9 days ago

      @@yanaitalk
      I am getting this error: No module named 'transformers.models.cohere.configuration_cohere' at the line `llama3 = AutoModelForCausalLM.from_pretrained(..)`. Somewhere on the internet I found that I need to update the transformers version, so I changed the version in requirements.txt and used the latest release, but I am still getting the error. Can anyone help with that, please?
      Also, I downloaded the whole model from Hugging Face and uploaded it to my Drive. Can I use it directly instead of downloading it with the line above where I am getting the error?
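    On the second question in the thread above: `from_pretrained` does accept a local directory path instead of a Hub model id, provided the folder contains the files the Hub would have served (at least `config.json`, the weight shards, and the tokenizer files). Below is a minimal sketch of a pre-flight check before attempting the load; the helper name `looks_like_hf_model_dir` and the Drive path are my own illustrations, not from the notebook, and the heavy model load itself is shown only in comments since it requires the gated Llama 3 weights.

    ```python
    from pathlib import Path

    def looks_like_hf_model_dir(path: str) -> bool:
        """Rough check that a folder has the files from_pretrained() expects:
        a config.json plus at least one weights file (.safetensors or .bin)."""
        p = Path(path)
        has_weights = any(p.glob("*.safetensors")) or any(p.glob("*.bin"))
        return (p / "config.json").is_file() and has_weights

    # Hypothetical usage in Colab (path is an assumption, not from the video):
    #   local_path = "/content/drive/MyDrive/llama3"
    #   if looks_like_hf_model_dir(local_path):
    #       from transformers import AutoModelForCausalLM, AutoTokenizer
    #       tokenizer = AutoTokenizer.from_pretrained(local_path)
    #       llama3 = AutoModelForCausalLM.from_pretrained(local_path)
    ```

    As for the missing-module error: it usually indicates that the installed transformers release predates the architecture being imported, and editing requirements.txt alone does not update an already-running Colab environment; you typically need to `pip install -U transformers` in the notebook and then restart the runtime.
    
    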

  • @amitsingha1637 · 5 months ago

    Thanks

  • @ashwinkumar5223 · 7 months ago

    Superb.. Thanks