Getting Started with Ollama - the Docker of AI!!!

  • Published: Jan 28, 2024
  • Chris explores how Ollama could be the Docker of AI. In this video he gives a tutorial on how to get started with Ollama and run models locally, such as mistral-7b and llama-2-7b. He looks at how Ollama operates and how it works very similarly to Docker, including the concept of the model library. Chris also shows how you can create customized models, how to interact with the built-in FastAPI server, and how to use the JavaScript Ollama library to interact with the models using Node.js and Bun. By the end of this tutorial you'll have a solid understanding of Ollama and its importance in AI engineering.
  • Science
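The Docker parallel in the description extends to customized models: much like a Dockerfile, Ollama builds a new model from a Modelfile. A minimal sketch (the base model, parameter value, and system prompt here are illustrative, not from the video):

```
# Modelfile: derive a customized model from a base model in the library
FROM mistral

# Sampling temperature (higher values give more varied output)
PARAMETER temperature 0.7

# System prompt baked into the customized model
SYSTEM You are a concise assistant that answers in one short paragraph.
```

A model defined this way would be built with `ollama create my-model -f Modelfile` and started with `ollama run my-model`, mirroring the `docker build` / `docker run` workflow.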

Comments • 20

  • @bharatarora9036 6 months ago +3

    Thank you @Chris for sharing this. Very informative.

    • @chrishayuk 6 months ago +1

      Glad it was helpful!

  • @sbudaland 5 months ago +2

    You are a great teacher, and you talk tech so well that it encourages one to watch the whole video.

    • @chrishayuk 5 months ago

      Thank you so much 🙂

  • @sollywelch 6 months ago +3

    Great video, really enjoyed this! Thanks Chris

    • @chrishayuk 6 months ago +2

      Thank you, it wasn’t the video I intended to record that day. Glad it worked well and you enjoyed it.

  • @NicolaDeCoppi 6 months ago +5

    Great video Chris! You're one of the smartest people I know!!!

    • @chrishayuk 6 months ago +2

      Too kind and right back atcha

  • @mechwarrior83 5 months ago +1

    What a great little underrated channel. I love how you present information in such a clear manner. Instant subscribe!

    • @chrishayuk 5 months ago +1

      Thank you, glad you enjoyed it. Underrated is perfectly fine with me; the channel is really about organising my thoughts. I just feel lucky other people find it useful.

  • @crabbypaddy5549 4 months ago

    I installed llama2:70b. Wow, it is super good, but it is heavy on my machine: it uses up 50 GB of RAM, runs my 5090x at 70 percent, and still nearly uses up all of my 3090 GPU. It is a bit slower than the 7b, but the answers are so much more complex and nuanced. I'm blown away.

  • @zscoder 5 months ago +1

    Curious how we could set up a use case for project-context prompts?
    Thanks for this awesome video, subbed 🙌

  • @iamdaddy962 5 months ago +5

    Really wish your channel got more attention compared to the L4 "influencers"... seems like YouTube "programmers" prefer entry-level sensationalist memelords )):

    • @chrishayuk 5 months ago +3

      I’m okay with the level of attention it gets; the channel is my tech therapy. I just feel very lucky that other people don’t mind watching my therapy sessions.

    • @iamdaddy962 5 months ago +3

      @chrishayuk I appreciate all the REAL senior-level wisdom you've bestowed on the internet!! Thinking about how the techlead still gets hundreds of thousands of views sometimes makes me have an aneurysm haha

    • @chrishayuk 5 months ago +2

      Very very kind of you

  • @jocool7370 1 month ago

    Thanks for making this video. I've just tried Ollama. It gave wrong answers to 3 of my first 4 (and only) prompts. Uninstalled it.

    • @chrishayuk 10 days ago

      These things depend on the query and the model.

    • @jocool7370 10 days ago

      @chrishayuk Then they're useless?

    • @chrishayuk 10 days ago

      @jocool7370 Nope, you’ve learned something that model can’t do. That’s the true path to knowledge and understanding. Now go learn more things it can and can’t do, and compare with other models.