Using Ollama for Local Large Language Models

  • Published: 11 Dec 2024

Comments • 4

  • @claudiofernandes2015
    10 months ago +1

    It has been a while, welcome back! I used to really like your content; you used to have the most interesting and advanced teaching.

  • @pexx99
    10 months ago

    You get very good performance, and yet it is not using much of its memory. Does it use the GPU regardless? In general, does the GPU memory size need to match the model's memory consumption? What specs do you have in your server? (See the sketch after the comments.)

  • @MatallicA_one
    10 months ago

    Somewhere out there, an AI that transcribes videos into text is struggling to keep up with Phil's typing prowess!
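
To probe @pexx99's question about GPU offload, here is a minimal sketch. It assumes a local Ollama server on its default port (11434) and queries the /api/ps endpoint from Ollama's published REST API, which reports each loaded model's total size and how much of it is resident in GPU memory; the field names (size, size_vram) follow that API and may change between versions.

```python
# Minimal sketch: ask a local Ollama server which models are loaded and
# how much of each model sits in GPU memory (VRAM) versus system RAM.
import json
import urllib.request

OLLAMA_PS_URL = "http://localhost:11434/api/ps"  # default Ollama endpoint


def report_loaded_models(url: str = OLLAMA_PS_URL) -> None:
    """Print the GPU/CPU memory placement of every loaded model."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    for m in data.get("models", []):
        size = m.get("size", 0)       # total bytes the model occupies
        vram = m.get("size_vram", 0)  # bytes resident in GPU memory
        pct = 100 * vram / size if size else 0
        print(f"{m['name']}: {size / 1e9:.1f} GB total, "
              f"{vram / 1e9:.1f} GB in VRAM ({pct:.0f}% on GPU)")


if __name__ == "__main__":
    report_loaded_models()
```

When size_vram is smaller than size, Ollama has offloaded only part of the model's layers to the GPU and runs the rest on the CPU, so GPU memory does not strictly need to match the model's full footprint; the CPU-resident layers just make inference slower.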