HuggingFace Fundamentals with LLMs such as TinyLlama and Mistral 7B
- Published: 7 Feb 2025
- Chris looks under the hood of HuggingFace models such as TinyLlama and Mistral 7B. In the video Chris presents a high-level reference model of large language models and uses it to show how tokenization and the AutoTokenizer module from the HuggingFace Transformers library work, linking them back to the HuggingFace repository. In addition we look at the tokenizer config, and Chris shows how Mistral and Llama-2 both use the same tokenizer and embedding architecture (albeit with different vocabularies). Finally, Chris shows you how to look at the model configuration and model architecture of HuggingFace models.
As we start to build towards our own large language model, understanding these fundamentals is critical, no matter whether you are a builder or a consumer of AI.
Google Colab:
colab.research...
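To make the steps described above concrete, here is a minimal sketch (not taken from the video or the Colab) of inspecting a HuggingFace checkpoint with the Transformers library: the tokenizer, its vocabulary, the model config, and the printed architecture. The checkpoint name is an assumption; any public TinyLlama or Mistral checkpoint should work the same way.

```python
# Minimal sketch: inspecting a HuggingFace model the way the description outlines.
# The model id below is an assumption; swap in e.g. "mistralai/Mistral-7B-v0.1".
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint name

# Tokenization: text -> token ids, and the underlying sub-word pieces
tokenizer = AutoTokenizer.from_pretrained(model_id)
ids = tokenizer.encode("Ada Lovelace was a mathematician")
print(ids)                                    # token ids
print(tokenizer.convert_ids_to_tokens(ids))   # sub-word pieces
print(tokenizer.vocab_size)                   # vocabulary size from the tokenizer config

# Model configuration: hidden size, number of layers/heads, vocab size, etc.
config = AutoConfig.from_pretrained(model_id)
print(config)

# Model architecture: printing the model shows the embedding layer and the decoder stack
model = AutoModelForCausalLM.from_pretrained(model_id)
print(model)
```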
The best video I've watched on RUclips about LLMs so far. You explain complex topics in accessible language, clearly and understandably. You are doing a very good job. I'm eagerly waiting for the next videos :)
same here
Wow, thanks! This one actually took a long time to get right; glad you liked it
For the very first time, I finally get it, thanks to you. Thank you for your service to the community.
Thanks for another great video, Chris. I've been through some LLM courses on Udemy, but your channel is helping me to clear up many doubts I have about the whole thing. I'm glad I found your channel. It's really the best on this subject. Congratulations. Marcelo.
Very kind, my rule is to try and always go one level below. It means that my vids are never short, glad the content is useful
Amazing way to get people comfortable with the model architecture. Thank you so much for sharing your knowledge.
Glad it was useful
Excellent explanation. Although I don't have a use case to fine-tune a model currently, I presume I will eventually, so it'll be great to have what you've shared in my back pocket. Thanks a bunch.
Awesome, glad it was useful
Thanks for being so detailed. That was really a refresher for me. Glad someone like you is doing such good work.
thank you, very much appreciate that
Great video. Just the right amount of detail. Thanks.
Glad it was helpful!
Great video! Looking forward to your next videos…
Yeah, next ones in series will be fun, glad you’re enjoying it
Insanely valuable video. Thank you!
Glad it’s useful
Thank you! This video brings light into the black box of LLM magic)
more to come, the next set of videos reveal a bunch more
Looking really forward to the next video.
next one dropped
Excellent tutorial to get started with LLMs.
Glad you liked it!
very well explained and useful
So glad to hear that, thank you
great video - many thanks!
Glad you liked it!
Always awesome content ❤
Super glad it’s useful, thank you
Thanks a lot Chris.
glad it was useful
How does the tokenizer decode sub-word embeddings? Specifically, how do you determine which sequence is concatenated into a word vs. standing on its own? As shown, the answer would be decoded with spaces between the embeddings, which wouldn't make "Lovelace" into a word.
Certain tokens will have spaces and others won't, so _lace would be a different token from lace. I have a deep dive on the tiktoken tokenizer where I spend a lot of time on this. I am planning to do a "building a tokenizer" vid soon as part of this series
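As a quick illustration of that answer, here is a small sketch (assuming a Llama-2/Mistral-style SentencePiece tokenizer, using the TinyLlama checkpoint as an example) showing how sub-word pieces carry a leading-space marker, so decode() knows which pieces start a new word and which attach to the previous one. The exact pieces depend on the vocabulary.

```python
# Sketch: sub-word pieces with and without the leading-space marker ("▁").
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # assumed checkpoint

ids = tokenizer.encode("Ada Lovelace", add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))
# e.g. ['▁Ada', '▁Love', 'lace'] -- 'lace' has no marker, so it glues onto 'Love'

print(tokenizer.decode(ids))  # "Ada Lovelace": spaces appear only where the marker does
```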
how the tokenizer for gpt-4 (tiktoken) works and why it can't reverse strings
ruclips.net/video/NMoHHSWf1Mo/видео.html
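For reference, a minimal sketch with the tiktoken library illustrating the point in that video: GPT-4's tokenizer produces multi-character tokens rather than individual letters, which is part of why character-level tasks like reversing a string are hard. The example string is an assumption for illustration.

```python
# Sketch: GPT-4's tokenizer (tiktoken) sees chunks, not characters.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
ids = enc.encode("Lovelace")
print(ids)
print([enc.decode([i]) for i in ids])  # multi-character chunks, not single letters
```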
@@chrishayuk Thanks, I'll check out the other video, and I'm looking forward to the next one.
great video. Thx!
thank you
Great video.
thank you
Great job, sir. One video request: how to build Llama APIs? I want to use my own trained model in my website.
That’s where we are working up to, but you can check out my existing fine-tuning Llama-2 video
Bro, just turn on the Big Thank so I can donate to you
lol, not gonna happen but appreciate the gesture and glad you like the videos