Ollama: Running Hugging Face GGUF models just got easier!

  • Published: 27 Jan 2025

Comments •

  • @sanketss84
    @sanketss84 3 months ago

    Love your videos, packed with lots of to-the-point information that gets the task done exactly as it's supposed to. Thanks a lot.

    • @learndatawithmark
      @learndatawithmark  3 months ago +1

      Thanks, glad you like the style! I try to keep it as information dense as I can :)

    • @sanketss84
      @sanketss84 3 months ago

      @learndatawithmark this format is great.

  • @pleabargain
    @pleabargain 3 months ago +1

    Short. To the point. Great!

  • @jason77nhri
    @jason77nhri 2 months ago

    Thank you for the guidance!
    I have a question about the difference between these two commands:
    The first command pulls the model's entire repository, e.g., bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST.
    The second command runs a specific GGUF file within that repository.
    However, I noticed something strange: when I visit the same author's model page on Hugging Face, the "Use this model" dropdown only shows options like llama.cpp, LM Studio, Jan, and vLLM, but there's no option for Ollama. Why is that?
    Thanks!
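
    (For reference, the two command forms being compared presumably look like this. The hf.co/{user}/{repo} syntax is how Ollama pulls Hugging Face GGUF repos; the Q8_0 tag below is an illustrative quantization label, not necessarily one this particular repo offers.)

        # Pull and run the repo's default quantization
        ollama run hf.co/bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST

        # Run one specific GGUF quantization from the same repo
        ollama run hf.co/bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST:Q8_0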

  • @StudyWithMe-mh6pi
    @StudyWithMe-mh6pi 3 months ago

    👋👋👋