Install Llama 3.2 1B Instruct Locally - Multilingual On-Device AI Model

  • Published: 29 Sep 2024
  • This video shows how to locally install the Meta Llama 3.2 1B Instruct LLM and test it on various benchmarks. It's great for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
    🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
    🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
    bit.ly/fahd-mirza
    Coupon code: FahdMirza
    ▶ Become a Patron 🔥 - / fahdmirza
    #llama32 #llama1b #llama3b #llama11b #llama90b
    PLEASE FOLLOW ME:
    ▶ LinkedIn: / fahdmirza
    ▶ RUclips: / @fahdmirza
    ▶ Blog: www.fahdmirza.com
    RELATED VIDEOS:
    ▶ Resource www.llama.com/
    All rights reserved © Fahd Mirza
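The install steps themselves aren't transcribed in this description. As a minimal sketch of one common way to run this model locally (assuming Ollama is installed and `llama3.2:1b` is the tag in the Ollama model library):

```shell
# Pull the quantized Llama 3.2 1B Instruct weights
ollama pull llama3.2:1b

# One-shot prompt to sanity-check the install
ollama run llama3.2:1b "Summarize in one sentence: Llama 3.2 is a multilingual on-device model."
```

The video itself may use a different runtime (e.g. Hugging Face Transformers); this is just one lightweight option for local testing.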

Comments • 17

  • @fahdmirza
@fahdmirza  4 days ago

    🔥Enters Llama 3.2 with Text and Vision in 1B, 3B, 11B, and 90B, ruclips.net/video/SfjQCHsZ6Ec/видео.html
    🔥Install Llama 3.2 1B Instruct Locally - Multilingual On-Device AI Model, ruclips.net/video/aKEUjAjJY7Q/видео.html
    🔥Llama 3.2 3B Instruct - Small Yet Powerful Meta Model - Install Locally, ruclips.net/video/xTgyrC-HZ7o/видео.html

  • @johnkintree763
@johnkintree763 4 days ago +3

    Instead of asking a language model to count the letters in a word, it would make more sense for the language model to understand the input, call the appropriate function for the task, and give the value returned from the function in its answer. A language model by itself is stupid.
    A language model that can extract knowledge and sentiment from conversations and other sources of text, and then merge the extracted entities and relationships into a graph representation can form a synergy with graph databases. This could be a path to collective human and digital intelligence.
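The first point above can be sketched in code: route a "count the letters" request to a deterministic function instead of letting the model guess. The router here is a hypothetical stub standing in for the model's tool-call decision; a real system would have the LLM emit a structured tool call.

```python
import re

def count_letter(word: str, letter: str) -> int:
    """Deterministic tool the model should call instead of guessing."""
    return word.lower().count(letter.lower())

def answer(question: str) -> str:
    # Stub "router": pattern-match the request and dispatch to the tool.
    # In a real agent, the LLM itself would choose the tool and arguments.
    m = re.match(r"how many '(\w)' in '(\w+)'\?", question, re.IGNORECASE)
    if m:
        letter, word = m.group(1), m.group(2)
        return f"There are {count_letter(word, letter)} '{letter}' in '{word}'."
    return "No matching tool; pass the question to the language model."

print(answer("How many 'r' in 'strawberry'?"))
# → There are 3 'r' in 'strawberry'.
```

The function's return value is grounded arithmetic, so the final answer no longer depends on the model's tokenization quirks.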

  • @Ayushsingh019
@Ayushsingh019 4 days ago

In the Hindi case, the model's translation is partially wrong. The correct one is "Main tumhe pyar krta hoon". Llama 3.2 needs to watch more Bollywood movies 😅

  • @slc388
@slc388 1 day ago

    Please make a video on LLM unsupervised fine-tuning.

    • @fahdmirza
@fahdmirza  14 hours ago

There are already plenty on the channel, please search.

  • @adriangpuiu
@adriangpuiu 3 days ago

Finally someone who doesn't push videos about the lobotomised Llama vision models ...

  • @felixserra2399
@felixserra2399 2 days ago

Can I use it for camera positioning?

  • @prince-sonawane
@prince-sonawane 4 days ago

Will this work on an AMD GPU with 16 GB VRAM?

    • @fahdmirza
@fahdmirza  4 days ago

      Unlikely

    • @brulsmurf
@brulsmurf 4 days ago

You can use Ollama if you have a recent AMD GPU (7800 XT, 6800 XT, etc.). 16 GB is plenty for these kinds of models.

    • @KonstantinOrlov
@KonstantinOrlov 4 days ago

Works great (1B and 3B) on a 5700 XT (8 GB); ran it in LM Studio.

    • @RaidenZ4
@RaidenZ4 2 days ago

I ran this model on a laptop with 4 GB RAM and no GPU.