Installing Ollama to Customize My Own LLM

  • Published: 12 Jan 2025

Comments • 168

  • @proterotype
    @proterotype 11 months ago +20

    God, every once in a while you stumble across the perfect YouTube channel for what you want. This is that channel. Props to you for making difficult things seem easy

    • @decoder-sh
      @decoder-sh  10 months ago +4

      Thanks for the kind words, I'm looking forward to making more videos! Stick around, "I was gonna make espresso" 😂

  • @hashmetric
    @hashmetric 11 months ago +54

    Perfect. Thank you. Great format. Don’t change a thing. Please don’t become another channel that exists only to tell us “this changes everything,” or about earning any amount of dollars as a YouTuber, or about using GPT to create mass amounts of crap that will also make us money, or a channel that tells us about a new model or paper every day. We don’t need any more of that. Congrats on the first video. More please.

    • @decoder-sh
      @decoder-sh  11 months ago +22

      Not trying to monetize my channel nor lure people in with clickbait titles that the video doesn't deliver on 👍 I'm new to content creation so I do intend to explore and experiment with a few things, but please hold me accountable if I ever jump the shark

    • @hashmetric
      @hashmetric 11 months ago

      @@decoder-sh but not through Twitter 🤗

  • @rs832
    @rs832 10 months ago +10

    It's helpful videos like this that make an instant subscribe and a plunge down the rabbit hole of your content an immediate no-brainer.
    Clear. ✅
    Concise. ✅
    Complete. ✅
    Thanks for providing quality content & for not skipping over the details.

    • @decoder-sh
      @decoder-sh  10 months ago +2

      It's my absolute pleasure to make these videos, thank you for watching!

  • @ChrisBrogan
    @ChrisBrogan 11 months ago +2

    Really grateful for this. I just downloaded ollama 20 minutes ago, and your 9 minutes has made me a lot smarter. I haven't touched a command line in about a decade.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching, I'm heartened to hear you had a good experience! Welcome back to the matrix 😎

  • @RetiredVet
    @RetiredVet 11 months ago +2

    In 9 minutes, you gave the best introduction to ollama I have seen. The other videos I have watched were helpful, but you show features such as inspecting and creating models in a short, clearly understood way that not only tells me how to use ollama, but also gives me useful info about LLMs I never knew.
    I am retired and looking into AI for fun. In the 60s, my science fair project was a neural network. My father, an engineer, was fascinated with AI and introduced me to the concept. Unfortunately, Marvin Minsky and Seymour Papert wrote Perceptrons and the field slowed down, and I moved on.
    You have a gift for explaining technical concepts. I've enjoyed all three of the current ones and look forward to the next.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thank you for your kind words. I wonder what it must’ve been like to study neural networks in the 60s, only a couple of decades after Von Neumann first conceived of feasible computers. You must’ve been breathing rarefied air, as even today most people don’t know what a neural network is.
      I read Minsky’s Society of Mind and use it as the basis for my own model of consciousness.
      Thanks again for your comment, and I look forward to making more videos for you soon.

  • @elcio-dalosto
    @elcio-dalosto 11 months ago +2

    Just commenting to raise the engagement of your channel. What great content in such a short video. Thank you! I'm playing with ollama and loving it.

  • @fontenbleau
    @fontenbleau 11 months ago +6

    The algorithm lifted you up in my recommendation waves, congratulations.

  • @RealEstate3D
    @RealEstate3D 5 months ago +1

    That was an interesting one. I saw some of your videos and subscribed instantly. You deserve more attention. Hope to see more from you in the future.

  • @HopsGuy
    @HopsGuy 11 months ago +4

    Good format and style! Very clear. Looking forward to deeper dives!

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Plenty more to come, thanks for watching!

  • @sebastianarias9790
    @sebastianarias9790 9 months ago +1

    Great educational content! The simplicity of your process and your explanation makes your channel stand out. Stay true!

    • @decoder-sh
      @decoder-sh  9 months ago +1

      I will! ✊ Thanks for tuning in

  • @noormohammedshikalgar
    @noormohammedshikalgar 6 days ago +1

    Very simple and informative video, thanks man. Good work.

  • @vpd825
    @vpd825 11 months ago +2

    Thank you for not wasting my time 🙏🏼 I feel I've gotten so much more value per minute watching this than from a lot of those other popular channels that started out the same but degraded in content quality and initial principles as time went by.

    • @decoder-sh
      @decoder-sh  11 months ago +1

      I appreciate you watching, please continue to keep me honest!

  • @build.aiagents
    @build.aiagents 11 months ago +3

    Wow, you are the only person I have seen cover anything remotely close to this: how to actually use ollama beyond the obvious step of downloading models. You actually open the hood, thank you!!

    • @decoder-sh
      @decoder-sh  11 months ago

      Glad you found it useful!

  • @proterotype
    @proterotype 10 months ago

    Finally today, after building and setting up a new machine, it was time for me to get off the sidelines and download Ollama and my first model. I had curated some videos from different creators into a playlist. When I went to choose one to guide me through the Ollama setup, yours was the easy choice. For what it’s worth.

    • @decoder-sh
      @decoder-sh  10 months ago +1

      It's worth a whole lot, I'm happy to hear that you find my videos helpful 🙏

  • @AIVisionaryLab
    @AIVisionaryLab 5 months ago

    Keep it up, brother! I love the way you teach with live demos
    It's really effective and easy to understand

    • @decoder-sh
      @decoder-sh  5 months ago

      Thank you for the support, I'm resuming filming soon!

  • @MrOktony
    @MrOktony 6 months ago

    Probably one of the best beginner tutorials out there!

  • @JaySeeSix
    @JaySeeSix 11 months ago

    Logical, clean, appropriately thorough, and not annoying like so many others. A+. Thank you. Subscribed :)

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for subscribing! Plenty more coming soon 🫡

  • @TheColdharbour
    @TheColdharbour 11 months ago

    Super!! Total beginner here & really enjoyed following this, and it all worked because of your careful explanation! Looking forward to working through the next ones!

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Thanks for watching! I look forward to sharing more videos soon

  • @RustemYeleussinov
    @RustemYeleussinov 11 months ago +1

    Thank you for the awesome video! I wish you'd go deeper into "fine-tuning" models while keeping it simple for non-technical folks, as you do in all your videos. I've seen other videos where people explain how to "fine-tune" a model using a custom dataset in Python, but then no one talks about how to use such a model in Ollama. I wish you could make a video showing the process end-to-end.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching! I do plan on making a video on proper fine-tuning, but in the meantime, please watch this other video of mine on how to use outside models in Ollama! Hugging Face is a great source of fine-tuned models. ruclips.net/video/fnvZJU5Fj3Q/видео.html

  • @Arif-r3p5r
    @Arif-r3p5r 9 months ago

    This is so clean... Great idea and very nice presentation. Funny thing is that my friend and I were talking about creating this a week ago. Lol.

  • @jimlynch9390
    @jimlynch9390 11 months ago

    Very good for your first! I don't have a GPU, so I keep trying various things to see if I can find something I can use. This has helped, thanks.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching! There are a good number of smaller LLMs like Phi (and even smaller), which should be able to run inference on just a CPU. Good luck!

  • @szebike
    @szebike 4 months ago

    Awesome structure and explanation!

  • @sh0ndy
    @sh0ndy 9 months ago

    No way this is your 1st video?? Nice mate, this was awesome. I'm subscribing.

    • @decoder-sh
      @decoder-sh  9 months ago +1

      Thanks for subscribing! Many more on the way :)

  • @republicofamerica1229
    @republicofamerica1229 3 months ago +1

    Amazing explanation. Thanks

  • @kenchang3456
    @kenchang3456 11 months ago

    Congrats, great first video.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thank you! Looking forward to making plenty more

  • @justpassingbylearning
    @justpassingbylearning 10 months ago

    Easily the best channel. Thank you for your time and input.

    • @decoder-sh
      @decoder-sh  10 months ago +1

      Thank you for watching!

    • @justpassingbylearning
      @justpassingbylearning 10 months ago

      Of course! I'll be there for whatever you put out next! I was just telling someone how I found someone who teaches this so easily and articulates it in such an understandable way.

  • @MarkSze
    @MarkSze 11 months ago +1

    Easy to follow and succinct, thanks!

  • @grahaml6072
    @grahaml6072 11 months ago

    Great job on your first video. Very clear and succinct.

    • @decoder-sh
      @decoder-sh  11 months ago

      Glad you enjoyed it!

  • @mjackstewart
    @mjackstewart 9 months ago

    Great job, hoss! I’ve always wanted to know more about Ollama, and you gave me enough information to be dangerous! Thankya, matey!

    • @decoder-sh
      @decoder-sh  9 months ago +1

      Thank you kindly, be sure to use the power responsibly!

  • @ipv6tf2
    @ipv6tf2 8 months ago +1

    missed opportunity to name it `phi-rate`
    love this tutorial! thank you

    • @decoder-sh
      @decoder-sh  8 months ago +1

      Oh man you’re so right!

  • @bernard2735
    @bernard2735 11 months ago

    Thank you. I enjoyed your tutorial - well presented and paced and helpful content. Liked and subscribed and looking forward to seeing more.

  • @brunogaliati3999
    @brunogaliati3999 11 months ago

    Very cool and simple tutorial. Keep making videos!

  • @mernik5599
    @mernik5599 8 months ago +1

    Is it possible to enable internet access for ollama models? After following your tutorials I was able to do the ollama and web UI setup very easily! Just wondering if there are solutions already developed that allow function calling and internet access when interacting with models through the web UI.

    • @decoder-sh
      @decoder-sh  8 months ago

      This would be achieved through tools and function-calling! I plan to do a video on exactly this very soon, but in the meantime, here are some docs you could look at: python.langchain.com/docs/modules/model_io/chat/function_calling/

  • @randomrouting
    @randomrouting 11 months ago

    This was great, clear and to the point. Thanks!

    • @decoder-sh
      @decoder-sh  11 months ago

      Glad you enjoyed it!

  • @baheth3elmy16
    @baheth3elmy16 10 months ago

    I am glad I found your channel; I continually search for quality AI channels and don't find a lot around. Thanks for the video, and I hope your channel picks up fast. Great content! As for Ollama, I am just not seeing what the hype is about. I mean, how and why is it different?

    • @decoder-sh
      @decoder-sh  10 months ago

      Thanks for watching all of my videos (so far)! Who are some of your favorite creators in the space?
      As a service, ollama runs LLMs. I agree it's not very differentiated. But it's easy to install, easy to use, and it's got a cute mascot. What's not to like?

    • @baheth3elmy16
      @baheth3elmy16 10 months ago

      @@decoder-sh Nothing not to like about it. I guess I like more cosmetic GUIs. For example, everyone praises Comfy, and I just find it intimidating compared to A1111. I hate spiders and their webs, and Comfy is a spider web.

  • @yuedeng-wu2231
    @yuedeng-wu2231 a year ago

    Amazing tutorial. Very clear and helpful. Thank you!

  • @Bearistotle_
    @Bearistotle_ a year ago

    Great tutorial! Saved for future reference.

  • @alexaimlllm
    @alexaimlllm 11 months ago

    Perfect. Simple, crisp on topics. Thanks

    • @decoder-sh
      @decoder-sh  10 months ago

      Thanks for watching!

  • @AntoninKral
    @AntoninKral 11 months ago +1

    I would recommend changing FROM to point to the name, not the hash (like FROM phi). It makes your life way easier when pulling new versions.

    • @decoder-sh
      @decoder-sh  11 months ago

      Hi there, could you tell me more about this? If "phi" points to the hash and not the name, then what name should be used? I would like to make my life easier 🙏

    • @AntoninKral
      @AntoninKral 11 months ago

      @@decoder-sh Let's assume that you fetch the "phi" model with hash hash1. You create your derived model using hash1. Later on, you fetch an updated "phi" with hash2. Your derived model will still be using the old weights from hash1.
      Furthermore, if you use names in your modelfiles, they will be portable. If you take a closer look at your modelfile, it points to an actual file on disk. So if you send the modelfile to someone else / upload it to another computer, it will not work. Whereas if you use something like 'FROM phi:latest', ollama will happily fetch the underlying model for you.
      Same stuff as container images.
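      For example, a portable modelfile along those lines might look like this (just a sketch; the system prompt and parameter are illustrative):
      FROM phi:latest
      SYSTEM """You are a concise assistant."""
      PARAMETER temperature 0.7
      Then `ollama create my-phi -f Modelfile` pulls phi:latest if it isn't already on disk, instead of depending on a local blob hash.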

  • @AIFuzz59
    @AIFuzz59 9 months ago

    Is it possible to create a model from scratch? I mean, have a blank model and train it on text we provide?

  • @JimLloyd1
    @JimLloyd1 11 months ago

    Good first vid. In case this gives you any ideas for future videos, I am currently trying to build something that is probably fairly simple, but awkward for me because my front-end experience is weak. I want to make a basic RAG system with a clean chat interface as a front end for ollama. I would prefer Svelte but could switch to another framework. As a first step, I just want to store every request/response exchange (user request, assistant response) in ChromaDB. I plan to ingest documents into the DB, but the first goal is just to do something like automatically pruning the conversation history to the top N most semantically relevant exchanges. The simple use case here is that I want to be able to carry on one long conversation over various topics. When I change the topic back to something discussed before, it should be able to automatically bring the prior conversations into the context.

    • @decoder-sh
      @decoder-sh  11 months ago

      This sounds like a really cool project! How far have you gotten so far? I plan to do several videos on increasingly complex RAG techniques, which will include conversation history and embedding / retrieval. In the meantime, you might consider a low-code UI tool like Streamlit: llm-examples.streamlit.app/

  • @marinetradeapp
    @marinetradeapp 7 months ago +1

    Great video. Arrr! How can we pull data into an agent from a webhook, have the agent do a simple task, and then send the result back out via a webhook? This would make a great video.

  • @philiptwayne
    @philiptwayne 11 months ago

    Nice video. In a future video, setting the seed programmatically would be helpful. I'm finding that smaller models lose track when using seed 0, and it seems to me `create` is the only way of changing it atm. Cheers and well done 👍

    • @decoder-sh
      @decoder-sh  10 months ago

      Good call, setting a temperature of 0 should make smaller models more reliable!

  • @lsmpascal
    @lsmpascal 10 months ago

    I was waiting for this kind of video.
    Thank you so much.
    So, if I understand correctly, we can create Assistants with every model this way, no?

    • @decoder-sh
      @decoder-sh  10 months ago

      Yes, you could use different system prompts to tell models to "specialize" in different things! Another common technique is to use an entirely different model that was trained on specialized data as different assistants. For example, some models are trained to specialize in math, others in medicine, others in function calling - you could route a task to a different model based on their specialty.

  • @computerscientist9980
    @computerscientist9980 11 months ago

    Keep Making Videos! SUBSCRIBEDDD!!!

  • @originialSAVAGEmind
    @originialSAVAGEmind 9 months ago +1

    @decoder I followed your tutorial exactly. I am on Windows, which I know is new. However, when I try to create the new model from the modelfile, I get "Error: no FROM line for the model was specified". Any thoughts on how to fix this?? I edited the modelfile in Notepad in case this is the issue.

  • @jagadeeshk6652
    @jagadeeshk6652 11 months ago

    Great video, thanks for sharing 🎉

  • @danielallison3540
    @danielallison3540 7 months ago +1

    How far can you go with the modelfile? If I wanted to take an existing model and make it an expert in some documents I have, would piping those docs into the SYSTEM prompt in the modelfile be the way to go?

    • @decoder-sh
      @decoder-sh  7 months ago

      Depending on how large your model's context window is, and how many documents you have, that is one way to do it! If all of your documents can fit into the context window, then you don't need a whole RAG pipeline.
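      A rough sketch of that approach (assuming a notes.txt you want baked in, and phi as the base model):
      cat > Modelfile <<EOF
      FROM phi
      SYSTEM """Answer questions using only the following documents:
      $(cat notes.txt)
      """
      EOF
      ollama create doc-expert -f Modelfile
      The unquoted heredoc lets the shell expand $(cat notes.txt), so the document text ends up inside the system prompt.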

  • @bigRat4335
    @bigRat4335 4 months ago

    Can't run the command at 8:01, permission denied?😕

  • @Ucodia
    @Ucodia 11 months ago

    Great video, thank you! I used it to customize dolphin-mixtral for my coding needs and combined it with Ollama WebUI, which I highly recommend. What I am still wondering is how I can augment the existing dataset with my own code dataset; I could not figure this out so far.

    • @decoder-sh
      @decoder-sh  11 months ago +2

      Thanks for sharing! In a future video I intend to talk about fine tuning, which sounds relevant to what you’re looking for

  • @dusk2dawn2
    @dusk2dawn2 9 months ago +1

    Nice! Is it possible to use these huge models from an external hard disk?

    • @decoder-sh
      @decoder-sh  9 months ago +1

      It is, but you’ll pay the price every time they’re loaded into memory.

  • @kaimuller3990
    @kaimuller3990 20 days ago

    What software are you using for the terminal and accessing ollama? I've managed to install ollama + an llm but I find the standard shell view a bit confusing. Thank you for the video!

    • @decoder-sh
      @decoder-sh  19 days ago

      Thanks for watching! I use iTerm, but any terminal should work. You can find Ollama's CLI reference here (github.com/ollama/ollama?tab=readme-ov-file#cli-reference). Note that you may need to restart your terminal app after installing ollama to be able to interact with it. Good luck!

  • @nikwymyslonynapoczekaniu123
    @nikwymyslonynapoczekaniu123 6 months ago +1

    C:\Users\mibe> ollama create arr-phi --file arr-modelfile
    Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"
    Can anyone help?

  • @statikk666
    @statikk666 9 months ago +1

    Thanks mate, subbed

  • @pkuioouurrsq-yb8ku
    @pkuioouurrsq-yb8ku a month ago

    How can we train the model with our custom data so that it produces results based on the given data?

  • @robertdolovcak9860
    @robertdolovcak9860 11 months ago

    Thank you. I enjoyed your tutorial. One question: is there a way to see Ollama's speed of inference (tokens/sec)?

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching. Yes, you can use the `--verbose` flag in the terminal to see inference speed, e.g. `ollama run --verbose phi`.

  • @pkuioouurrsq-yb8ku
    @pkuioouurrsq-yb8ku a month ago

    How do we structure the response as JSON content?

  • @kachunchau4945
    @kachunchau4945 11 months ago +1

    Hi, your work will be helpful for my experiment: a classification task with a model in ollama. But I found two different APIs when I wrote requests. One is /api/generate, and the other is /api/chat. Could you tell me the difference? And how do I set up the "role" in the modelfile? Thanks in advance

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Hi, that's a great question! The difference is subtle; both the generate and chat endpoints are telling the LLM to predict the next series of tokens under the hood.
      The generate endpoint accepts one prompt and gives one response, so any context needs to be provided within that prompt. The chat endpoint accepts a series of messages as well as a prompt - but what's really happening is that ollama concatenates these messages into one big string and then passes that whole chat history string as context to the model. So to summarize, the chat endpoint does exactly the same thing as the generate endpoint, it just does the work of passing a message history as context into your prompt for you.
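      To make the difference concrete, here is a minimal sketch of both endpoints (assuming the phi model and Ollama's default local port):
      curl http://localhost:11434/api/generate -d '{
        "model": "phi",
        "prompt": "Why is the sky blue?"
      }'
      curl http://localhost:11434/api/chat -d '{
        "model": "phi",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}]
      }'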
      For your last question, ollama only recognizes three "roles" for messages: system, user, and assistant. System comes from your modelfile system prompt. User is anything you type. Assistant is anything your model responds with.
      Do you think it's worth me doing a video to expand on this?

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Here are the relevant code snippets btw - check them out if you read Go, or have your LLM give you a tldr :)
      Concatenate chat messages into a single prompt:
      github.com/ollama/ollama/blob/a643823f86ebe1d2af39d85581670737508efb48/server/images.go#L147
      In the chat endpoint handler, pass the aforementioned prompt to the llm predict method:
      github.com/ollama/ollama/blob/a643823f86ebe1d2af39d85581670737508efb48/server/routes.go#L1122

    • @kachunchau4945
      @kachunchau4945 11 months ago

      @@decoder-sh Thank you very much for your detailed answer. When I was reading the development documentation for ChatGPT, I saw it has a similar role setup, which helped me understand the same thing in Ollama very well, though ChatGPT's equivalent of /api/generate is already marked LEGACY. As for the difference between the two APIs, I've watched a lot of videos online and they all lack answers and examples for this.
      1. For /api/generate, my understanding is that it's like a single request, but I'm curious how to make the response controllable, for example constrained to a certain set of labels (classification questions). Is that set through the TEMPLATE of the modelfile? How would that be written?
      2. For /api/chat, according to your explanation, do messages need to append the previous questions and answers before this prompt? If so, should I set up a loop to keep appending questions and answers from the previous messages?
      3. Since I'm not a YouTuber, I don't have the intuition to judge whether it's worth making another video or not. But as far as I can see, no one on YT has explained in depth how templates are written in the modelfile, just the SYSTEM section, without explaining its impact or effect. And of course there's the difference between the two APIs I talked about earlier and how the chat API is used. I think it would be helpful for developers who want to build servers in the cloud!

    • @decoder-sh
      @decoder-sh  11 months ago

      @@kachunchau4945 Yes, you're correct, you would use the system prompt to instruct the model how to respond to you. I recommend also giving it an example exchange so it understands the format. I wrote a system prompt for a simple classification task which you can adapt to your use case. I quickly tested this and it works even with small models.
      """
      You are a professional classifier, your job is to be given names and classify them as one of the following categories: Male, Female, Unknown. If you are unsure, respond with "Unknown". Respond only with the classification and nothing else.
      Here is an example exchange:
      user: Mark
      assistant: Male
      user: Jessica
      assistant: Female
      user: Xorbi
      assistant: Unknown
      """
      The above is your system prompt, and your user prompt would be the thing you want to classify.
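      If you want this baked into a reusable model, a modelfile sketch around it (assuming phi as the base) would be:
      FROM phi
      SYSTEM """<the classification prompt above>"""
      Then `ollama create classifier -f Modelfile` and `ollama run classifier` let you send just the names to classify.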

    • @kachunchau4945
      @kachunchau4945 11 months ago

      @@decoder-sh Thank you so much, that is very helpful for me. I will try it later. But in addition to SYSTEM, do I need to write a TEMPLATE?

  • @MacProUser99876
    @MacProUser99876 11 months ago

    Can you please show multimodal models like LLaVA?

    • @decoder-sh
      @decoder-sh  11 months ago

      I'd love to! What would you like to see about them?

  • @theubiquitousanomaly5112
    @theubiquitousanomaly5112 10 months ago

    Dude you’re the best.

    • @decoder-sh
      @decoder-sh  10 months ago

      Thanks for watching, dude 🤙🏻

  • @CodingerdaLogician
    @CodingerdaLogician 5 months ago

    Same as adding a system prompt in curl, right? Without having to create a new model.

  • @eointolster
    @eointolster 11 months ago

    Well done man

  • @Chrosam
    @Chrosam 11 months ago

    If you ask it a follow-up question, it has already forgotten what you're talking about.
    How do we keep context?

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching!
      It could be a number of things:
      - small models sometimes lose track of what they’re talking about, big models usually do better
      - some models are optimized for chatting, others are not
      - you may have history disabled in ollama (though I don’t think that’s the default). From the ollama cli, type “/set history”

  • @gokudomatic
    @gokudomatic 11 months ago

    Very nice!
    But how do you do that using Docker instead of a direct local install of ollama?

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Assuming you already have the ollama docker image installed and running (hub.docker.com/r/ollama/ollama)...
      Then you can just attach to the container's shell with `docker exec -it container_name bash`.
      From here, use (and install if necessary) an editor like vim or nano to create and edit your custom Modelfile, then use ollama to create the model as usual.
      Ollama will move your modelfile into the attached volume so that it will be persisted between restarts 👍
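      End to end, that's roughly (a sketch using the image's documented run command; names are illustrative):
      docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
      docker exec -it ollama bash
      # inside the container:
      ollama pull phi
      ollama create my-phi -f Modelfile
      The -v volume is what persists your models and modelfiles across restarts.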

  • @android69_
    @android69_ 9 months ago +1

    How do you load your own model, not one from the website?

    • @decoder-sh
      @decoder-sh  9 months ago

      I've got the answer right here :) ruclips.net/video/fnvZJU5Fj3Q/видео.html

  • @kebman
    @kebman 2 months ago

    I'm not a software engineer, and I have over three decades of experience not being a software engineer. I know how to program though. I've done it for like a while. Also I've been teaching programming, so there's that. In other words.

  • @PiotrMarkiewicz
    @PiotrMarkiewicz 11 months ago

    Is there any way to add information to a model? Like a training update?

    • @decoder-sh
      @decoder-sh  11 months ago

      There is! I plan on doing several videos on different ways to add information to models - the two main ways to do this are with fine tuning, and retrieval augmented generation (RAG)

  • @ArunJayapal
    @ArunJayapal 11 months ago

    Good work. 👍
    About the phi model: can it run on a laptop inside a VirtualBox VM? The host machine has 2 CPUs and 6 GB RAM.

    • @decoder-sh
      @decoder-sh  11 months ago

      Thanks for watching! It will probably be a little slow if it only has access to cpu, but I think it should at least run. Try it and report back 🫡

    • @ArunJayapal
      @ArunJayapal 11 months ago

      @@decoder-sh it does run. But out of curiosity, what configuration did you use for the video?

    • @decoder-sh
      @decoder-sh  11 months ago

      @@ArunJayapal I'm running it on an M1 MacBook Pro, which has no issues with small models. I don't know what the largest model I can run is, but I know it's at least 34B.

  • @AI-PhotographyGeek
    @AI-PhotographyGeek 11 months ago

    Great, easy to understand! 😊 Please continue making such videos, otherwise I may Unsubscribe.😅 😜

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Don't worry, I intend to! Thanks for watching

  • @GeorgeDonnelly
    @GeorgeDonnelly 11 months ago

    Subscribed! Thanks!

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Thank you! More videos coming soon

  • @johnefan
    @johnefan 11 months ago

    Great video, love the format. Is there a way to contact you?

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Hey thanks! I’m still setting up my domain and contact stuff (content is king), but for the time being you can send me a DM on Twitter if that works for you x.com/decoder_sh

    • @johnefan
      @johnefan 11 months ago

      @@decoder-sh Great, thanks. Started following you on Twitter, looks like your DMs are not open

    • @decoder-sh
      @decoder-sh  11 months ago

      Hey I wanted to follow up and let you know I created a quick site and contact form! decoder.sh/ (https coming as soon as DNS propagates, sorry)

  • @lsmpascal
    @lsmpascal 10 months ago

    Can I suggest a video which I think would be useful for a lot of people: how to optimize a server to run a model using ollama.
    I'm currently trying to do so. The goal is to have Mistral running on a Vultr instance at < 300€/month.
    But I'm failing. Ollama is there, Mistral too, but the performance is terrible.
    I guess I'm not the only one searching for this kind of thing.

    • @decoder-sh
      @decoder-sh  10 months ago

      Ollama is not designed to handle multiple users (I'm guessing that's your use case for a $450/mo server?), for that I would look into something like vLLM, LMDeploy, or HF's text-generation-inference. With that said, I plan to do a video on cloud deploys to support multiple concurrent requests in the future!

    • @lsmpascal
      @lsmpascal 10 months ago

      I'm looking forward to watching that one, because I'm currently totally lost.
      Ah, one last thing: I love the way your videos are made. A clean but unobtrusive style and interesting content. Keep it this way!
      Thank you very much. @@decoder-sh

  • @OptaIgin
    @OptaIgin 9 months ago

    What's the difference between copying a model and creating from a model?

    • @decoder-sh
      @decoder-sh  9 months ago

      Interesting question... It seems that in both cases (`ollama cp baseModel modelCopy` and `ollama create myModel -f modelfile` where modelfile uses "FROM baseModel:latest"), a new manifest file is created, but no new model blobs are created. This means that both actions are storage-efficient. You can verify this yourself by using `du` to print the directory size of `~/.ollama/models` before and after each of those actions.
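      For example, you can verify it like this (paths assume the default install location):
      du -sh ~/.ollama/models
      ollama cp phi phi-copy
      du -sh ~/.ollama/models    # nearly unchanged: only a small manifest was written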

  • @deepjyotibaishya7576
    @deepjyotibaishya7576 7 months ago

    How do you train it with your own dataset?

  • @nicolawirz7938
    @nicolawirz7938 7 months ago

    Why does your terminal look like this on Mac?

  • @harshith24
    @harshith24 9 months ago

    If I run the command `ollama run phi`, will the phi model get installed on my C drive?

    • @decoder-sh
      @decoder-sh  9 months ago

      It will! Ollama pulls a hash of the latest version of the model. If you don't have that model downloaded, or if you have an older version downloaded, ollama will download the latest model and save it to your disk.

  • @harishraju4321
    @harishraju4321 9 months ago

    Is this considered 'fine-tuning' an LLM?

    • @decoder-sh
      @decoder-sh  9 months ago

      Definitely not! This is basically just using a system prompt to steer the behavior of the model. Fine tuning involves retraining part of the model on new data - I intend to do a video about that soon though :)

  • @kamleshpaul414
    @kamleshpaul414 11 months ago

    Can we use ollama to pull our own model from Hugging Face?

    • @decoder-sh
      @decoder-sh  11 months ago +2

      Yes in fact one of my upcoming videos will walk through how to do that!

    • @kamleshpaul414
      @kamleshpaul414 11 months ago +1

      @@decoder-sh Thank you so much

    • @decoder-sh
      @decoder-sh  11 months ago +1

      This one's for you! ruclips.net/video/fnvZJU5Fj3Q/видео.html

  • @stoicnash
    @stoicnash 8 months ago

    Thank you!

  • @OlleApläpp
    @OlleApläpp 11 months ago

    Great, thanks.

  • @federicoloffredo1656
    @federicoloffredo1656 11 months ago

    Hi, what about Windows users?

    • @decoder-sh
      @decoder-sh  11 months ago

      Unfortunately Windows is not supported natively, but you can still install ollama on Linux (in Windows) via WSL. Probably suboptimal though.

    • @decoder-sh
      @decoder-sh  11 months ago

      Looks like it’s coming soon! x.com/alexreibman/status/1757333894804975847?s=46

  • @daveys
    @daveys 10 months ago

    Phi is too hallucinatory for my liking, but unfortunately Mixtral is too large and intense for my crappy old laptop. One thing is for certain: LLMs are power-hungry beasts!

    • @decoder-sh
      @decoder-sh  10 months ago

      That’s fair. I’ve found starling-lm to be a strong light model, and some flavor of mistral (e.g. dolphin-mistral) at 7B.

    • @daveys
      @daveys 10 months ago

      @@decoder-sh - Mixtral ground my old laptop (4th-gen 4-core i5 with onboard graphics and 8GB RAM) to a halt… it still ran, but one word every 1-2 minutes wasn’t a great user experience. Phi was quicker, but like talking to a maths professor on acid.

    • @decoder-sh
      @decoder-sh  10 months ago +1

      @@daveys I mean honestly, that sounds like a fun way to spend a Sunday afternoon. Yeah I wouldn't expect mixtral to do well on consumer hardware, especially integrated graphics. I'd experiment with a 7b model first and see if it behaves more like a literature professor on mushrooms, then maybe try a 34b model if you still get reasonable wpm.

    • @daveys
      @daveys 10 months ago

      @@decoder-sh - Enjoyable if you were the professor, but not while waiting for the LLM to answer a question!! I knew local AI would be bad on that machine; to be honest I was surprised it ran at all. But I’ll stick to ChatGPT at the moment and wait until I upgrade my laptop before I start messing with any more LLM stuff.

  • @prashlovessamosa
    @prashlovessamosa 11 months ago

    thanks man

  • @lucasbarroso2776
    @lucasbarroso2776 7 months ago

    I would love to see a video on modelfiles, specifically how to train a model to do a specialized task. I am trying to use Llama 2 to consolidate facts in articles:
    "Do these facts mean the same thing?
    Fact 1: Starbucks's stock went down by 13%
    Fact 2: Starbucks has a new boba tea flavour"
    Response: {isSame:false}
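    A system prompt along the lines of the classification example in an earlier reply might work here (just a sketch; the JSON shape follows my example above):
    You are a fact-comparison assistant. Given two facts, respond only with {"isSame": true} or {"isSame": false} and nothing else.
    Here is an example exchange:
    user: Fact 1: Starbucks's stock went down by 13%. Fact 2: Starbucks has a new boba tea flavour.
    assistant: {"isSame": false}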

  • @marsrocket
    @marsrocket 10 months ago

    Excellent video, although I think you could raise the lower end of the skill level you’re targeting. Nobody who is going to install and use Ollama on their own doesn’t know what > means.

    • @decoder-sh
      @decoder-sh  10 months ago

      I’m getting that impression, too! I’m going to try to make future videos a bit faster and more focused on doing the thing than explaining the language. Will probably continue explaining tools and logic.

  • @VertegrezNox
    @VertegrezNox 11 months ago

    Nothing about this involved customization. Clickbait channel

    • @decoder-sh
      @decoder-sh  10 months ago +1

      Full fine tuning video coming in a couple weeks, this is a video for beginners 🫡

  • @user-jk9zr3sc5h
    @user-jk9zr3sc5h 11 months ago

    You edited the system prompt....

    • @decoder-sh
      @decoder-sh  11 months ago +1

      Yes and fine tuning is coming too! Thanks for watching

  • @OgeIloanusi
    @OgeIloanusi 4 months ago

    Thank you!