Love your videos, packed with lots of to-the-point information that gets the task done exactly as it's supposed to. Thanks a lot.
Thanks, glad you like the style! I try to keep it as information dense as I can :)
@learndatawithmark this format is great.
Short. To the point. Great!
That's what I try to do! Glad you liked it :D
Thank you for the guidance!
I have a question about the difference between these two commands:
The first command directly pulls the entire project repository of the model, e.g., bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST.
The second command runs a specific GGUF file under the project repository of the model.
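To make the comparison concrete, the two commands I mean look roughly like this (using Ollama's `hf.co/` syntax; the `Q4_K_M` tag is just an example quantization, not necessarily one this repo ships):

```shell
# First command: pull/run the repo, letting Ollama pick the default GGUF
ollama run hf.co/bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST

# Second command: run a specific GGUF quantization from the same repo,
# selected via the :tag suffix
ollama run hf.co/bartowski/Ministral-8B-Instruct-2410-HF-GGUF-TEST:Q4_K_M
```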
However, I noticed something strange. When I visit the same author's model page on Hugging Face, under the "Use this model" dropdown, it only shows options like llama.cpp, LM Studio, Jan, and vLLM, but there's no option for Ollama. Why is that?
Thanks!
👋👋👋