What are the different types of models - The Ollama Course

  • Published: 19 Sep 2024

Comments • 16

  • @claudioguendelman
    @claudioguendelman 1 day ago +2

    Excellent, thanks from Chile.

  • @solyarisoftware
    @solyarisoftware 1 day ago

    Thanks, Matt. Your Ollama course is great because it's easy to follow and addresses problems from your unique point of view. Always upvoted!
    Regarding naming, what you call "source" models are also referred to as "foundation" or "pretrained" models, as far as I know. It's a good distinction between chat-fine-tuned models (sometimes called chat-completion models) and instruct-fine-tuned models (sometimes called text-completion models); a small API sketch of that difference follows below.
    In general, custom fine-tuning a model means taking a source model and refining it with custom data. That is not supported in the current version of Ollama, even though you've rightly dedicated a video or two to creating a custom fine-tuned model by training an original source model.
    Regarding multimodal models: as you mentioned, Ollama includes some vision LLMs (image input) like LLaVA and, I believe, others. You correctly pointed out that multimodal could also involve audio input (and output), which seems feasible at the moment (I'll need to double-check with the newly released Mistral Pixtral once it's available on Ollama). BTW, I think video processing with Ollama is also of great interest, so it might be worth exploring in future videos.
    Just my two cents. Thanks again!
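
    A minimal sketch of how those two styles map onto Ollama's REST API: chat-tuned models take role-tagged messages via /api/chat, while completion-style use sends a bare prompt to /api/generate. This assumes a local Ollama server on the default port; "llama3" is only a placeholder tag, not necessarily the model from the video.

        import json
        import urllib.request

        OLLAMA = "http://localhost:11434"

        def post(path: str, payload: dict) -> dict:
            # Small helper: POST JSON to the local Ollama server and parse the reply.
            req = urllib.request.Request(
                OLLAMA + path,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

        # Chat-fine-tuned usage: a list of role-tagged messages.
        chat = post("/api/chat", {
            "model": "llama3",  # placeholder tag; use whatever you have pulled
            "messages": [{"role": "user", "content": "Why is the sky blue?"}],
            "stream": False,
        })
        print(chat["message"]["content"])

        # Completion-style usage: a bare prompt the model continues.
        completion = post("/api/generate", {
            "model": "llama3",
            "prompt": "The sky is blue because",
            "stream": False,
        })
        print(completion["response"])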

  • @NLPprompter
    @NLPprompter 1 day ago

    Thank you for continuing to care about us poor learners. Thanks, Matt.

  • @therobotocracy
    @therobotocracy 1 day ago

    Nice, well done.

  • @tonyhartmann7630
    @tonyhartmann7630 1 day ago

    Thanks for the explanation 😊

  • @Alex-os5co
    @Alex-os5co 1 day ago

    Awesome course, thank you! My only request would be to have mentioned the quantization suffixes (Q4, K_M, etc.).

  • @bernieoconnor9350
    @bernieoconnor9350 18 hours ago

    Thank you, Matt. Great info.

  • @jimlynch9390
    @jimlynch9390 22 hours ago

    Once more, I learned something. I've asked that question before but never gotten a satisfactory answer. Thanks, Matt.

  • @ISK_VAGR
    @ISK_VAGR 18 hours ago

    Thanks, Matt, great video and series! Why don't LLMs always produce good embeddings? And why do embedding models sometimes underperform in RAG applications? I've tested many models, but only five have consistently provided accurate embeddings for paper abstracts, verified by clustering and ground truth.
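
    A rough way to run the clustering sanity check described above: embed a few abstracts and confirm that related ones score higher cosine similarity than unrelated ones. This is a minimal sketch assuming a local Ollama server; "nomic-embed-text" is one example embedding tag, and the abstracts are invented placeholders.

        import json
        import math
        import urllib.request

        def embed(model: str, text: str) -> list:
            # Request a single embedding vector from the local Ollama server.
            req = urllib.request.Request(
                "http://localhost:11434/api/embeddings",
                data=json.dumps({"model": model, "prompt": text}).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["embedding"]

        def cosine(a, b):
            # Cosine similarity: dot product over the product of vector norms.
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)

        abstracts = {
            "cnn_a": "We propose a convolutional network for image classification.",
            "cnn_b": "A deep CNN architecture improves object recognition accuracy.",
            "bio": "We sequence the genome of a deep-sea bacterium.",
        }
        vecs = {k: embed("nomic-embed-text", v) for k, v in abstracts.items()}

        # The two related abstracts should score closer than the unrelated pair.
        print("cnn_a vs cnn_b:", cosine(vecs["cnn_a"], vecs["cnn_b"]))
        print("cnn_a vs bio:  ", cosine(vecs["cnn_a"], vecs["bio"]))

    If a model fails this check on your own abstracts, it is unlikely to cluster or retrieve them well in a RAG pipeline, whatever its benchmark scores say.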

  • @deepanshusinghal955
    @deepanshusinghal955 9 hours ago

    How can I get fast and accurate answers at the same time using Ollama?
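
    There is no single switch, but a few knobs trade speed against quality: pick a smaller or more heavily quantized tag, keep the model loaded between requests, and cap the response length. A minimal sketch, assuming a local Ollama server; "llama3" is a placeholder tag.

        import json
        import urllib.request

        payload = {
            "model": "llama3",       # smaller/more quantized tags respond faster
            "prompt": "Summarize the water cycle in two sentences.",
            "stream": False,
            "keep_alive": "10m",     # keep the model in memory to skip reload latency
            "options": {
                "temperature": 0,    # deterministic output, often better for factual Q&A
                "num_predict": 128,  # cap output tokens so answers come back quickly
            },
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])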

  • @bustabob08
    @bustabob08 18 hours ago

    Thanks!

  • @muraliytm3316
    @muraliytm3316 5 hours ago

    Hi sir, your videos are great and very informative and I really like them, but could you please explain some of the concepts sitting in front of a PC and show them practically? I am really confused about which model to download: the benchmarks show good results, but when I actually use the models they are worse. Also, there are different quantizations like q4, q6, q8, fp16, K_S, K_M, etc., which are difficult to understand. Thanks for reading the comment.

    • @technovangelist
      @technovangelist 4 hours ago

      There is another video in the course that covers the quants.

  • @tecnopadre
    @tecnopadre 1 day ago

    I always wonder why there isn't a model that just talks and can be trained on your own information (like an FAQ help desk or a company-internal bot), and of course a small one that answers properly without hallucinations.

    • @azrajiel
      @azrajiel 18 hours ago

      @tecnopadre There are so-called noun-phrase collisions, which seem to be a big part of hallucinations, even in RAG systems. Basically the problem is not inaccurate data but reference nouns that are ambiguous. There are some very interesting articles to Google, and also some work on eliminating them. In practice it can be corrected with the right prompting.
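
      A small sketch of the kind of grounded prompting the reply describes: give the model explicit context and force it to use full entity names instead of ambiguous reference nouns. It assumes a local Ollama server; "llama3" is a placeholder tag and the FAQ lines are invented for illustration.

        import json
        import urllib.request

        # Invented FAQ snippets standing in for a company knowledge base.
        context = (
            "Acme VPN Client v2 requires macOS 13 or later.\n"
            "Acme VPN Server v5 requires Ubuntu 22.04 or later.\n"
        )
        question = "What OS does the client need?"

        # Ground the answer in the context and ban ambiguous reference nouns.
        prompt = (
            "Answer using ONLY the context below. Refer to products by their "
            "full names (e.g. 'Acme VPN Client v2'), never just 'it' or 'the "
            "product'. If the context does not contain the answer, say so.\n\n"
            f"Context:\n{context}\nQuestion: {question}"
        )

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": "llama3", "prompt": prompt,
                             "stream": False}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])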