Fine-Tune Any LLM, Convert to GGUF, And Deploy Using Ollama

  • Published: 27 Oct 2024

Comments • 4

  • @chukypedro818
    @chukypedro818 25 days ago

    insightful, thanks for this video

  • @BoeroBoy
    @BoeroBoy 1 month ago

    Love this. If you add the -ngl option to llama-server it will offload GGML layers to GPU though - as run here you're only using CPU for your test.

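The GPU-offload tip above can be sketched as a llama.cpp invocation. This is a minimal illustration, assuming a llama.cpp build of `llama-server`; the model filename and port are hypothetical placeholders:

```shell
# Serve a GGUF model with llama-server, offloading layers to the GPU.
# -ngl N (--n-gpu-layers N) offloads up to N model layers to the GPU;
# without this flag, inference runs entirely on the CPU.
# Model path and port below are illustrative, not from the video.
llama-server -m ./model-q4_k_m.gguf -ngl 99 --port 8080
```

Setting `-ngl` higher than the model's layer count is harmless; llama.cpp offloads as many layers as exist (and as fit in VRAM).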
  • @Baxate
    @Baxate 4 months ago

    Very detailed work, great work Ishan and the Brev Team!

  • @GabFelix-kv3fq
    @GabFelix-kv3fq 1 month ago

    Hello! Great tutorial! I can't seem to find this launchable/notebook on GGUF and Ollama on the Brev console website — is it still available, or was it taken down? I'd like to do some tinkering on this as well.