Install Llama 3.2 1B Instruct Locally - Multilingual On-Device AI Model
- Published: 29 Sep 2024
- This video shows how to install Meta Llama 3.2 1B Instruct LLM locally and test it on various benchmarks. It's great for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
bit.ly/fahd-mirza
Coupon code: FahdMirza
▶ Become a Patron 🔥 - / fahdmirza
#llama32 #llama1b #llama3b #llama11b #llama90b
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ RUclips: / @fahdmirza
▶ Blog: www.fahdmirza.com
RELATED VIDEOS:
▶ Resource www.llama.com/
All rights reserved © Fahd Mirza
🔥Enters Llama 3.2 with Text and Vision in 1B, 3B, 11B, and 90B, ruclips.net/video/SfjQCHsZ6Ec/видео.html
🔥Install Llama 3.2 1B Instruct Locally - Multilingual On-Device AI Model, ruclips.net/video/aKEUjAjJY7Q/видео.html
🔥Llama 3.2 3B Instruct - Small Yet Powerful Meta Model - Install Locally, ruclips.net/video/xTgyrC-HZ7o/видео.html
Instead of asking a language model to count the letters in a word, it would make more sense for the model to understand the input, call an appropriate function for the task, and include the function's return value in its answer. A language model by itself is stupid.
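A minimal sketch of that tool-calling pattern. The `count_letters` helper and the dispatch table are hypothetical illustrations, not part of any real Llama API; in a real agent loop the model would emit the call and the runtime would feed the result back into its answer.

```python
def count_letters(word: str, letter: str) -> int:
    """Deterministic tool the model can call instead of counting itself."""
    return word.lower().count(letter.lower())

# Registry mapping tool names to callables (hypothetical runtime glue).
TOOLS = {"count_letters": count_letters}

def handle_tool_call(name: str, **kwargs):
    """Execute a tool call the model requested and return the value."""
    return TOOLS[name](**kwargs)

print(handle_tool_call("count_letters", word="strawberry", letter="r"))  # 3
```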
A language model that can extract knowledge and sentiment from conversations and other sources of text, and then merge the extracted entities and relationships into a graph representation can form a synergy with graph databases. This could be a path to collective human and digital intelligence.
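A toy sketch of the merge step described above: (subject, relation, object) triples, as an LLM extractor might emit them, folded into a dict-of-dicts graph. All names here are made up; a real pipeline would write into a graph database such as Neo4j instead.

```python
from collections import defaultdict

def merge_triples(graph, triples):
    """Merge (subject, relation, object) triples into a dict-of-dicts graph,
    deduplicating objects per relation via sets."""
    for subj, rel, obj in triples:
        graph[subj][rel].add(obj)
    return graph

# Hypothetical triples extracted from a conversation:
extracted = [
    ("Alice", "works_at", "Acme"),
    ("Alice", "likes", "Llama 3.2"),
    ("Acme", "uses", "Llama 3.2"),
]

graph = defaultdict(lambda: defaultdict(set))
merge_triples(graph, extracted)
print(sorted(graph["Alice"]["likes"]))  # ['Llama 3.2']
```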
good comment, thanks.
In this case the Hindi translation from the model is partially wrong. The correct one is "Main tumhe pyar krta hoon". Llama 3.2 needs to watch more Bollywood movies 😅
lol, thanks for that.
Please make a video on LLM unsupervised fine-tuning.
There are already plenty on the channel, please search.
Finally someone who doesn't push videos about the lobotomised Llama vision models ...
cheers
Can I use it for camera position?
yes
Will this work on an AMD GPU with 16GB VRAM?
Unlikely
You can use Ollama if you have a recent AMD GPU (7800 XT, 6800 XT, etc.). 16GB is plenty for these kinds of models.
Works great (1B and 3B) on a 5700 XT (8GB); ran it in LM Studio.
I ran this model on a 4GB RAM laptop with no GPU.
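A rough back-of-envelope for why these small models fit on modest hardware: weight memory is roughly parameter count times bytes per parameter. The ~1.24B parameter figure for Llama 3.2 1B and the 4-bit quantization assumption are approximate, and KV cache plus runtime overhead add more on top.

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Weight-only memory footprint in GB (1e9 params * bytes / 1e9 bytes-per-GB)."""
    return params_billion * bytes_per_param

# Llama 3.2 1B (~1.24B parameters):
fp16 = weights_gb(1.24, 2)    # ~2.5 GB at 16-bit precision
q4 = weights_gb(1.24, 0.5)    # ~0.6 GB at 4-bit quantization
print(round(fp16, 2), round(q4, 2))
```

At 4-bit quantization the weights fit comfortably in 8GB of VRAM or even 4GB of system RAM, which matches the experiences reported above.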