Thank you Niels for this short video on training and deploying LLMs. Really enjoyed. Keep making such videos. :)
Fantastic breakdown, thank you Niels
Awesome video. Simple and insightful.
All I can say is Thank you!
If we meet some time somehow, I would be more than happy to give you a treat.
4:33, 6:05 How to evaluate the output of LLMs: Hugging Face's Open LLM Leaderboard or LMSys's Chatbot Arena
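(For benchmarks like those behind the Open LLM Leaderboard, you can also reproduce scores on your own machine with EleutherAI's lm-evaluation-harness. A minimal sketch, assuming lm-eval is installed; the checkpoint and task names are only examples:

    # Minimal sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
    # The model checkpoint and tasks below are examples, not recommendations.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=mistralai/Mistral-7B-Instruct-v0.2",
        tasks=["hellaswag", "arc_challenge"],
    )
    print(results["results"])  # per-task accuracy and other metrics

The leaderboard aggregates exactly this kind of per-task score; Chatbot Arena instead ranks models by human preference votes.)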
Thanks for the video. I learned so much
Thanks for this comprehensive work.
I was eager to see all these building blocks consolidated the way you did.
I am not a developer, but I am now certain to have a plan for my own small project; at least
to prove it can be useful to people, enhancing their knowledge with pleasure!
Thanks again.
Best wishes for 2024
Michel from France
Insightful and very straightforward! Awesome video
What a great breakdown of exactly what we saw in 2023
Thank you very much for this video. You've sincerely been so helpful to me.
That was a really nice talk, thank you!
This was a great one
Interesting that we're rapidly following the history of mainframe-based computing down to local and mobile computing. We'll always have some form of API-based LLM, but running something like a Mistral 7B on mobile, or perhaps a Mixtral and beyond, may become commonplace in just a few years' time.
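(Running a 7B model locally is already practical on an ordinary laptop with 4-bit quantization. A minimal sketch with llama-cpp-python, assuming you have downloaded a quantized GGUF file of Mistral 7B Instruct; the file path is a placeholder:

    # Sketch: local inference with llama-cpp-python (pip install llama-cpp-python).
    # The GGUF path is a placeholder; quantized files are published on Hugging Face.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize why local LLMs matter."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])

The same quantized formats are what mobile runtimes build on, which is why the on-device scenario looks plausible.)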
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
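(One common approach for a corpus of PDFs, rather than training a model from scratch, is retrieval-augmented generation: extract the text, embed it with a multilingual model that covers Persian, and retrieve relevant chunks at question time. A rough sketch, assuming pypdf and sentence-transformers; the file name and naive chunking are placeholders:

    # Hedged sketch: PDF text extraction plus multilingual embeddings for retrieval.
    # pip install pypdf sentence-transformers
    from pypdf import PdfReader
    from sentence_transformers import SentenceTransformer

    reader = PdfReader("document_fa.pdf")  # placeholder filename
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]  # naive chunking

    # This embedding model is multilingual; Persian is among its listed languages.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    embeddings = model.encode(chunks)
    print(embeddings.shape)  # (num_chunks, embedding_dim)

From there you index the embeddings, retrieve the closest chunks for a question, and pass them to an LLM as context.)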
If we train an LLM on our own data and deploy it on our own server, so everything is ours, will there still be token limits then, like response output token limits? I want my model to generate around 25k output tokens. Is that possible if it's deployed only on our server, without using any big organisation's API LLM?
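(With a self-hosted model there is no provider-imposed cap: you set the output length yourself, and the real limit is the model's context window, prompt plus output, so ~25k output tokens needs a long-context model. A sketch with Hugging Face transformers, using a 32k-context model as an example:

    # Sketch: on your own server the output cap is whatever you pass as
    # max_new_tokens; the practical ceiling is the model's context window.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "mistralai/Mistral-7B-Instruct-v0.2"  # example 32k-context model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

    inputs = tok("Write a very long story.", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=25_000)  # your choice, not an API's
    print(tok.decode(out[0], skip_special_tokens=True))

Whether the model stays coherent over tens of thousands of tokens is a separate question; the window allows it, but quality over such long outputs varies a lot by model.)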