Prompt Engineering using Llama-2 model

  • Published: 22 Oct 2024

Comments • 8

  • @AlainFavre-n4k • 5 months ago

    Very instructive and easy to follow

  • @abutareqrony4513 • 6 months ago

    Great explanation, bro. Thanks!

  • @dr_einsteinn • 3 months ago

    Does the code work on app platforms like Streamlit?

  • @brijeshtanwar4392 • 8 months ago

    This is a really helpful video for us. Can you make a video explaining all the parameters in transformers.pipeline(), like repetition_penalty, beam_search, etc.? And do these vary for other open-source LLMs like Mistral 7B and 13B?
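
    A minimal sketch answering the question above (not from the video): these are standard decoding parameters that transformers.pipeline() forwards to model.generate(), and they work the same way for other open-source LLMs such as Mistral, since decoding is model-agnostic. The values below are illustrative, not recommendations.

    ```python
    # Common text-generation parameters forwarded by transformers.pipeline()
    # to model.generate(). All values here are illustrative examples.
    generation_kwargs = {
        "max_new_tokens": 256,      # cap on the number of generated tokens
        "do_sample": True,          # sample from the distribution instead of greedy decoding
        "temperature": 0.7,         # <1.0 sharpens, >1.0 flattens the token distribution
        "top_p": 0.9,               # nucleus sampling: keep the smallest set of tokens with cumulative prob 0.9
        "repetition_penalty": 1.2,  # values > 1.0 discourage repeating earlier tokens
        "num_beams": 1,             # > 1 enables beam search (typically with do_sample=False)
    }

    # Usage sketch (requires transformers installed and model access; shown as comments):
    #   from transformers import pipeline
    #   pipe = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
    #   out = pipe("Explain prompt engineering in one line.", **generation_kwargs)
    ```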

  • @taijingshen4503 • 4 months ago

    Do we need to request access from the repo owner for the Llama-2-7b-chat-hf model used here?

  • @GangJiang-n6n • 9 months ago • +1

    Cool!!!

  • @Abhishekkumar-wn9do • 9 months ago

    If we turn off the internet, will the model still load from the cache files?

    • @ycopie1126 • 9 months ago • +2

      Internet is only needed the first time, to download the model. From then on it loads from your local cache without needing a connection.
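
The caching behaviour described in the reply above can be made explicit. A hedged sketch (not from the video, and assuming the gated meta-llama/Llama-2-7b-chat-hf repo): after the first online run populates the Hugging Face cache (by default under ~/.cache/huggingface/hub), later runs can be forced to load from disk only.

```python
# Force Hugging Face libraries to use the local cache and never hit the network.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # option 1: blocks all Hub downloads process-wide

# Option 2: per-call flag (shown as comments; requires transformers installed
# and the model already cached from a first online run):
#
#   from transformers import pipeline
#   pipe = pipeline(
#       "text-generation",
#       model="meta-llama/Llama-2-7b-chat-hf",  # gated repo: access must be granted first
#       model_kwargs={"local_files_only": True},
#   )

print(os.environ["HF_HUB_OFFLINE"])  # → 1
```

Either mechanism makes an offline run fail fast with a clear error if a file is missing from the cache, rather than silently trying to download.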