Using Local Large Language Models in Semantic Kernel

  • Published: 27 Nov 2024

Comments • 12

  • @justinyuen1807 · 4 months ago · +4

    Would love to see how this works as well with the Ollama Embeddings API + Semantic Kernel Memory. ❤
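
A minimal sketch of how that could look, for anyone curious: it calls Ollama's documented /api/embeddings REST endpoint directly, and uses a tiny in-memory cosine-similarity lookup as a stand-in for Semantic Kernel's memory abstractions (the "memory" below is illustrative, not SK's actual API; the embedding model name is an assumption).

```python
# Sketch: embeddings from a local Ollama instance + nearest-neighbour lookup.
# The /api/embeddings endpoint and its {"model", "prompt"} payload are from
# Ollama's REST API; the in-memory "store" stands in for SK Memory.
import math
import requests

OLLAMA_URL = "http://localhost:11434/api/embeddings"
MODEL = "nomic-embed-text"  # assumes an embedding model pulled into Ollama

def embed(text: str) -> list[float]:
    resp = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "Memory": a plain list of (text, vector) pairs.
memory = [(t, embed(t)) for t in [
    "Semantic Kernel is an SDK for integrating LLMs into apps.",
    "Ollama runs large language models locally.",
]]

query = embed("How do I run models on my own machine?")
best_text, _ = max(memory, key=lambda item: cosine(query, item[1]))
print(best_text)
```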

  • @skaus123 · 4 months ago · +1

    Do you think Ollama is better than LM Studio performance-wise? LM Studio has a nice UI, but it seems further from the metal.

  • @TheDemoded · 4 months ago

    What hardware did you use?

  • @CecilPhillip · 4 months ago · +2

    Curious to hear if anyone has been able to get local models working with automatic function calling. (A sketch follows at the end of this thread.)

    • @florimmaxhuni4718 · 4 months ago · +1

      Same, would like to see function calling with local LLMs.

    • @vivekkaushik9508 · 4 months ago · +1

      Ayyy it's Cecil from Microsoft. Didn't expect you here. What a small world.

    • @CecilPhillip · 4 months ago · +1

      @vivekkaushik9508 Big fan of the channel. Also left Microsoft a while ago 🙂

    • @vivekkaushik9508 · 4 months ago

      @CecilPhillip 😲 Sorry, I didn't know. I hope everything is well.

    • @CecilPhillip · 4 months ago

      @vivekkaushik9508 Nothing to be sorry about. It's all good. Still a supporter of a lot of the work going on there.
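
On the automatic function calling question above: one route that is commonly cited is Ollama's OpenAI-compatible endpoint with a tool-capable model, since Semantic Kernel's automatic invocation (FunctionChoiceBehavior.Auto in recent versions) builds on the same tool-call protocol. Below is a minimal sketch using the plain OpenAI Python client against a local Ollama server; the get_weather function and the llama3.1 model choice are assumptions for illustration.

```python
# Sketch: tool calling against a local model via Ollama's
# OpenAI-compatible endpoint. Assumes a tool-capable model
# (e.g. llama3.1) is pulled; get_weather is a hypothetical stub.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def get_weather(city: str) -> str:
    return f"Sunny and 22C in {city}"  # stub in place of a real lookup

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
response = client.chat.completions.create(
    model="llama3.1", messages=messages, tools=tools)
msg = response.choices[0].message

# If the model requested a tool, run it and send the result back.
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool",
                         "tool_call_id": call.id,
                         "content": get_weather(**args)})
    response = client.chat.completions.create(
        model="llama3.1", messages=messages, tools=tools)

print(response.choices[0].message.content)
```

The manual loop here is essentially what an "automatic" mode automates for you: detect tool_calls in the response, execute the matching local function, and feed the result back so the model can produce its final answer.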