Training and deploying open-source large language models

  • Published: 29 Dec 2024
  • Science

Comments • 15

  • @aayushgarg5437 • 11 months ago +1

    Thank you Niels for this short video on training and deploying LLMs. Really enjoyed it. Keep making such videos. :)

  • @RezaZaheri-ow3qm • 7 months ago

    Fantastic breakdown, thank you Niels

  • @thenewexeptor • 11 months ago

    Awesome video. Simple and insightful.

  • @RajaSekharaReddyKaluri • 11 months ago

    All I can say is Thank you!
    If we ever meet somehow, I would be more than happy to give you a treat.

  • @giofou711 • 11 months ago

    4:33, 6:05 How to evaluate the output of LLMs: Hugging Face's Open LLM Leaderboard or LMSYS's Chatbot Arena
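    (Chatbot Arena ranks models from pairwise human preference votes aggregated into Elo-style ratings. A minimal sketch of that idea, with an illustrative K-factor and hypothetical model names, not Arena's actual parameters:)

    ```python
    # Minimal Elo-style rating update, as used in spirit by pairwise
    # model comparisons such as Chatbot Arena.

    def expected_score(r_a: float, r_b: float) -> float:
        """Probability that A beats B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
        """Shift both ratings toward the observed pairwise outcome."""
        e_w = expected_score(ratings[winner], ratings[loser])
        ratings[winner] += k * (1.0 - e_w)
        ratings[loser] -= k * (1.0 - e_w)

    ratings = {"model-a": 1000.0, "model-b": 1000.0}
    update(ratings, winner="model-a", loser="model-b")
    print(ratings)  # model-a is now rated above model-b
    ```

    (With many votes across many model pairs, the ratings converge to a leaderboard ordering.)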

  • @onangarodney7746 • 11 months ago

    Thanks for the video. I learned so much.

  • @michelguenard8391 • 11 months ago +1

    Thanks for this comprehensive work.
    I was eager to get all these bricks consolidated as you did.
    I am not a developer, but I am now certain I have a plan for my own small project; at least
    to prove it can be useful to people, enhancing their knowledge with pleasure!
    Thanks again.
    Best wishes for 2024.
    Michel from France

  • @Yocoda24 • 11 months ago

    Insightful and very straightforward! Awesome video.

  • @Hypersniper05 • 11 months ago

    What a great breakdown of exactly what we saw in 2023

  • @codingthefunway9852 • 11 months ago

    Thank you very much for this video. You've sincerely been so helpful to me.

  • @giviz • 11 months ago

    That was a really nice talk, thank you!

  • @DailyProg • 11 months ago

    This was a great one

  • @thegrumpydeveloper • 11 months ago

    Interesting that we’re rapidly retracing the history of computing, from mainframes down to local and mobile. We’ll always have some form of API-based LLM, but running something like a Mistral 7B on mobile, or perhaps a Mixtral and beyond, may become commonplace in just a few years’ time.

  • @mohsenghafari7652 • 9 months ago

    Hi. Please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.

  • @akshatkant1423 • 9 months ago

    If we train an LLM with our own data and deploy it on our own server, so that everything is ours, will there still be token limits, e.g. on response output tokens? I want my model to generate around 25k output tokens. Is that possible if it's deployed only on our server and not using any big organisation's LLM API?
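
    (When you host the model yourself there is no provider-imposed quota; the practical limit is the model's context window, which the prompt and the generated output share. A minimal sketch of the budget check, with hypothetical numbers — substitute your model's real window size:)

    ```python
    # Checking whether a desired output length fits a model's context window.
    # Prompt tokens and generated tokens share the same window.

    def max_output_tokens(context_window: int, prompt_tokens: int) -> int:
        """Tokens left for generation after the prompt fills part of the window."""
        return max(context_window - prompt_tokens, 0)

    # e.g. a hypothetical model with a 32k-token window and a 2k-token prompt:
    budget = max_output_tokens(context_window=32_768, prompt_tokens=2_048)
    print(budget)            # 30720
    print(budget >= 25_000)  # True: a 25k-token response would fit
    ```

    (With Hugging Face transformers you would then cap generation via the `max_new_tokens` argument of `generate`; a model whose window is smaller than ~27k tokens could not produce the full 25k-token response in one pass.)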