Code Llama 34B model with Inference and HuggingChat | Local Setup Guide (VM) and Live Demo

  • Published: 7 Aug 2024
  • In this video you'll learn how to run Meta's new Code Llama 34B parameter instruct model locally on a GCP VM through a Text Generation Inference server from Hugging Face, and how to connect it to the HuggingChat UI.
    Chapters in this video:
    0:00 - Intro and Explanation
    01:13 - Demo HuggingChat locally
    01:31 - Remote SSH into GCP VM
    02:35 - Code Llama writes Python app
    03:41 - GPU requirements and GCP machine type
    04:23 - Request access from Meta AI
    04:46 - HuggingFace requirements
    04:54 - Text Generation Inference
    05:37 - Code Llama setup in HuggingChat
    06:59 - Outro
    Video related links:
    - ChatGPT - but Open Sourced | Running HuggingChat locally (VM): • ChatGPT - but Open Sou...
    - Running HuggingChat locally (VM): www.blueantoinette.com/2023/0...
    - SSH into Remote VM with VS Code: • SSH into Remote VM wit...
    About us:
    - Homepage: www.blueantoinette.com/
    - Contact us: www.blueantoinette.com/contac...
    - Twitter: / blueantoinette_
    - Consulting Hour: www.blueantoinette.com/produc...
    Hashtags:
    #codellama #llama2 #inference #huggingchat #ai #metaai #huggingface
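
    A minimal sketch of the server side of the setup described in the video, assuming a GPU VM with Docker and the NVIDIA container toolkit already installed (the image tag, port, and token placeholder are illustrative, and the model requires prior access approval on Hugging Face):

```shell
# Launch Hugging Face Text Generation Inference serving Code Llama 34B Instruct.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -e HUGGING_FACE_HUB_TOKEN=<your-token> \
  -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id codellama/CodeLlama-34b-Instruct-hf

# Quick smoke test once the server is up:
curl 127.0.0.1:8080/generate \
  -X POST -H 'Content-Type: application/json' \
  -d '{"inputs":"def fibonacci(n):","parameters":{"max_new_tokens":64}}'
```

    HuggingChat can then be pointed at this local endpoint through its model configuration, as shown in the "Code Llama setup in HuggingChat" chapter.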

Comments • 11

  • @nunoalexandre6408
    @nunoalexandre6408 11 months ago +1

    Love it!!!!!!!!!!!!!!!

  • @kimnoel489
    @kimnoel489 11 months ago +1

    Hello Robert, thanks again for this good tutorial :). I tried to create such a VM in the region you mention and in many other regions, but every time I get an error saying it's currently unavailable (GPU shortage). I also encounter shortages with the Nvidia T4. Did you find resources easily? Or is it because you are a GCP partner that you get priority access to resources?

    • @BlueAntoinette
      @BlueAntoinette  11 months ago

      Hi Kim, good to hear from you again :). Well, I did not encounter GPU shortages, but rather unavailability of the required "a2-highgpu-2g" machine type. What I did was reach out to Google on Twitter, and the very next day it worked for me: x.com/robertschmidpmp/status/1696870241584775368?s=46&t=5SAiC-TXlqIYFkhMf8DAMg
      Not sure if it was by accident; however, feel free to respond to my tweet. Alternatively, you can provide feedback to Google from the Google Cloud Console. Or you can send me an email with your account details and I will reach out to my partner manager at Google directly.
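
      For reference, requesting such a VM from the CLI looks roughly like this (instance name, zone, image, and disk size are illustrative; a2-highgpu-2g comes with 2x NVIDIA A100 40GB attached, and availability varies by region):

```shell
# Hypothetical example: create an a2-highgpu-2g instance on GCP.
gcloud compute instances create codellama-vm \
  --zone=us-central1-a \
  --machine-type=a2-highgpu-2g \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --boot-disk-size=200GB \
  --maintenance-policy=TERMINATE
# NVIDIA drivers still have to be installed on the instance afterwards.
```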

  • @caedencode
    @caedencode 8 months ago

    Would it be able to just run Llama?

  • @finnsteur5639
    @finnsteur5639 11 months ago

    I'm trying to create 100,000 reliable tutorials for a hundred complex software packages like Photoshop, Blender, DaVinci Resolve, etc. Llama and GPT don't give reliable answers, unfortunately. Do you think fine-tuning Llama 7B would be enough (compared to 70B)? Do you know how much time/data that would take?
    I also heard about embeddings but couldn't get them to work on a large dataset. Would that be a better option? We have at least 40,000 pages of documentation and I don't know what the better approach is.

    • @BlueAntoinette
      @BlueAntoinette  11 months ago

      Check out HuggingFaceEmbeddings (SentenceTransformers) together with a vector store like Chroma.
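
      The idea behind that suggestion is embedding-based retrieval: embed every documentation page, embed the question, and return the most similar pages as context. A minimal pure-Python sketch of that pipeline, using a bag-of-words vector as a stand-in for a real SentenceTransformers embedding and a plain list as a stand-in for a Chroma collection, so it runs without extra dependencies:

```python
# Sketch of embedding-based retrieval over documentation pages.
# The toy embed() below stands in for a real embedding model;
# in practice you'd use sentence-transformers and store vectors in Chroma.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, pages: list[str], k: int = 2) -> list[str]:
    """Return the k pages most similar to the query."""
    q = embed(query)
    return sorted(pages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

pages = [
    "Use the crop tool to trim the canvas to a selection",
    "Blender modifiers let you deform a mesh non destructively",
    "Color grading in DaVinci Resolve starts on the color page",
]
best = retrieve("how do I crop an image to a selection", pages, k=1)
```

      The retrieved pages are then placed in the model's prompt, which is why this often works without fine-tuning.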

    • @finnsteur5639
      @finnsteur5639 11 months ago

      @@BlueAntoinette So for you, embeddings are enough to answer complex questions that rely on multiple parts of an 800-page technical documentation? We don't have to fine-tune?

    • @BlueAntoinette
      @BlueAntoinette  11 months ago

      @@finnsteur5639 Personally I would check that out first and test it with available open source models. If you don't get relevant results, you can still try to fine-tune.

    • @BlueAntoinette
      @BlueAntoinette  9 months ago

      @@finnsteur5639 I've now created a new solution that may help you in this regard. Learn more in my latest video: ruclips.net/video/n63SDeQzwHc/видео.html