EASILY Train Llama 3 and Upload to Ollama.com (Must Know)

  • Published: 9 Oct 2024

Comments • 101

  • @manuelbradovent3562
    @manuelbradovent3562 1 month ago +3

    Thanks Mervin. Just did my first fine-tuning!! Colab stopped much earlier than expected, and Win11 didn't work, but I ran everything again today in WSL2 on my laptop, which worked like a charm.

  • @brons_n
    @brons_n 2 months ago +10

    Best finetuning tutorial

  • @francosbenitez
    @francosbenitez 2 months ago +2

    Man, you explained everything so so well!

  • @danielhanchen
    @danielhanchen 2 months ago +1

    Fantastic detailed tutorial Mervin! Absolutely love this!

  • @carlosdanielolivera9787
    @carlosdanielolivera9787 22 days ago +1

    Great tutorial mate!

  • @lemonsqueeezey
    @lemonsqueeezey 2 months ago

    Thanks for this tutorial! I usually use Unsloth but their Ollama notebook was more advanced so having the video is very helpful.

  • @Hex0dus
    @Hex0dus 2 months ago +28

    It seems we've got different definitions of the word easy.

    • @unclecode
      @unclecode 2 months ago +5

      Hahaha, trust me, this is considered very easy in the realm of coding fine-tuning!

    • @pensiveintrovert4318
      @pensiveintrovert4318 2 months ago +1

      He meant that he spent 20x the time, and it was easy to edit it in post to appear effortless.

    • @yvessomda4547
      @yvessomda4547 1 month ago

      😂😂😂😂

  • @EM-yc8tv
    @EM-yc8tv 2 months ago

    I watched one of your Florence-2 videos a couple weeks ago and was very impressed by your workflows. Now with Llama 3.1, you can get even better vision (at least for the 8B parameter model). The model I came across was Llama-3.1-Unhinged-Vision-8B by FiditeNemini. It pairs very nicely with mradermacher's Dark Idol 3.1 Instruct models, surely it would work with several other finetunes. Perhaps someone might have done or will do vision projector models for the Llama-3.1 70B and 405B models.

  • @MrMroliversmith
    @MrMroliversmith 2 months ago +3

    OOOOO! SO CLOSE! Great video :) This ALMOST worked ... but failed with the error "xFormers wasn't built with CUDA support / your GPU has capability (7, 5) (too old)". I'm running this on an AWS EC2 g4dn.xlarge (16GB VRAM). Gonna try again with TorchTune instead. Wish me luck!

  • @JungHeeyun-t3x
    @JungHeeyun-t3x 2 months ago

    It is super clear to understand and apply to my use case. Thank you so much!!

  • @GurkaATR
    @GurkaATR 1 month ago +2

    Nice! Can we do it without uploading to Ollama or Hugging Face, i.e. fully offline fine-tuning?

  • @Derick99
    @Derick99 2 months ago +6

    Maybe I'm off here, but is there a way to just use Llama 3.1 and upload your files to it somehow, or do you have to go through this whole process? Plus I don't want my private data on Hugging Face.

  • @rodrimora
    @rodrimora 2 months ago +3

    Is fine-tuning the best way to give data to a model? If the information is updated frequently, like documentation, I don't think fine-tuning is the best approach; that would be RAG, now that long context is available for Llama 3.1.
    I have always thought of fine-tuning a model as a way to change "behaviour" or provide static knowledge, like teaching other languages or uncensoring, and RAG as a way to give it my own data.

    • @j0hnc0nn0r-sec
      @j0hnc0nn0r-sec 2 months ago +1

      Do both

    • @MervinPraison
      @MervinPraison  2 months ago +2

      @@j0hnc0nn0r-sec Yes, agreed. Try doing both for a better response: fine-tuning + RAG.

  • @gr8tbigtreehugger
    @gr8tbigtreehugger 2 months ago

    Super awesome tutorial! Many thanks, Mervin!

  • @Dr.UldenWascht
    @Dr.UldenWascht 2 months ago

    Brother, you are becoming the guy with the coolest nickname among me and my friends, like, "Hey did you watch The Amazing Guy's new video?"

  • @Mrroot-nr8xk
    @Mrroot-nr8xk 2 months ago

    Hi! Awesome video. I didn't understand the input format: what's the difference between "instruction" and "input"? Thanks for your time!

    • @AbhishekKumar-rl1pj
      @AbhishekKumar-rl1pj 2 months ago +1

      The instruction is what you want the model to do. For example, in a medical chatbot an instruction might be: "Please look at my report and tell me what I am suffering from." The input carries the context for the instruction; in that case, the input would contain the report itself (see the sketch below).
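
      For reference, a minimal sketch of the standard Alpaca prompt template that such instruction/input/output fields are formatted into; the field values here are hypothetical examples:

      # Standard Alpaca prompt template with three slots.
      alpaca_prompt = (
          "Below is an instruction that describes a task, paired with an input "
          "that provides further context. Write a response that appropriately "
          "completes the request.\n\n"
          "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
      )

      example = {
          "instruction": "Please look at my report and tell me what I am suffering from.",  # the task
          "input": "Blood test: haemoglobin 9.1 g/dL, ferritin 8 ng/mL ...",                # the context
          "output": "The low haemoglobin and ferritin suggest iron-deficiency anaemia.",    # expected answer
      }

      # Render one training example as a full prompt string.
      print(alpaca_prompt.format(example["instruction"], example["input"], example["output"]))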

  • @steeple001
    @steeple001 1 month ago

    Excellent, thank you so much!

  • @returncode0000
    @returncode0000 2 months ago

    Where the heck did you get those 4 A6000s? I only have 1 RTX 4090 😃 What I've heard is that 24GB of VRAM isn't enough, right? How long did the training run and what were the costs? Anyway, great video, thanks!

  • @pavankumarreddy9871
    @pavankumarreddy9871 1 month ago +1

    My system configuration is an i5 processor and 8GB of RAM. Is that sufficient? It is lagging.

  • @nikoG2000
    @nikoG2000 2 months ago +1

    Do you have 4x A6000 on your local machine? I have an RTX 4090. I use it for fine-tuning computer vision models, and I have fine-tuned and run some smaller LLMs.

    • @MervinPraison
      @MervinPraison  2 months ago +2

      Yes, I have 4x A6000 in the cloud.
      I bought them here, at Massed Compute: bit.ly/mervin-praison
      Coupon: MervinPraison (50% discount)

  • @faaf42
    @faaf42 24 days ago

    Could you maybe make your face a bit smaller when the code is shown? Right now it's covering the code (showing your face is fine!).

  • @joshuatorres3342
    @joshuatorres3342 2 months ago

    Really cool, thank you.

  • @TGajanan
    @TGajanan 2 months ago +4

    Can you please tell us how this can keep company data secure? We are saving our model on Ollama to get the end results.

    • @Leto2ndAtreides
      @Leto2ndAtreides 2 months ago +3

      Training a local model means that it's as secure as the regular corporate network it's on.
      Unless you end up making it accessible through the internet to other parties, it should not be accessible by them.

    • @unclecode
      @unclecode 2 months ago +1

      That's simple: don't save it on Ollama! Keep it private on HF.

    • @Pregidth
      @Pregidth 2 months ago +1

      @@unclecode Keeping it private on HF does not mean the data is not on their servers... This needs to run completely locally if possible. Any ideas? Thanks

  • @AbhinavKumar-tx5er
    @AbhinavKumar-tx5er 2 months ago

    A silly question, maybe: what if I have to upgrade the model? Can I push the model again with the same name? And how do I define the parameters?

  • @free_thinker4958
    @free_thinker4958 2 months ago

    Is it possible to do unsupervised learning by first giving the model a large corpus of data from a specific domain to make it context-aware, and then apply supervised fine-tuning?

  • @batajoonp
    @batajoonp 21 days ago

    Is it possible to fine-tune Llama 3.1 on a dataset of online news articles in a regional language so that it responds in that regional language?

  • @IAMTHEMUSK
    @IAMTHEMUSK 25 days ago

    How do you choose between fine-tuning and RAG?

  • @anoopgupta9285
    @anoopgupta9285 1 month ago

    What if I'm using the same prompt for the same type of data generation across the whole dataset? Will that affect training, or will it fine-tune nicely? Also, I have 2,000 data rows; how many epochs should I run?

  • @KleiAliaj
    @KleiAliaj 2 months ago

    Great video, Mervin.
    I have one simple question: can I change the Alpaca prompt language to something besides English, say French, if I am going to use a French dataset? Does it work like that?

  • @SarahH-t3u
    @SarahH-t3u 2 months ago

    Hi, I have a question if you don't mind. If I plan to use my fine-tuned model with Ollama but keep it private at the same time (not publicly available in the Ollama models list), is that possible? I want to integrate it, so running it locally won't work for me.

  • @AntonyPraveenkumar
    @AntonyPraveenkumar 2 months ago

    I have fine-tuned Llama 3.1 more than 10 times with the Alpaca format using Unsloth. When it comes to deployment and testing, Unsloth models are really bad; they don't have any standard documentation for deployment. My personal suggestion: go with a standard chat format for fine-tuning instead of the Alpaca format.

    • @MervinPraison
      @MervinPraison  2 months ago

      Can you please provide more details on my Discord?
      I would just like to analyse the results and see why it is not performing better.

    • @AntonyPraveenkumar
      @AntonyPraveenkumar 1 month ago

      @@MervinPraison They have since updated the script; please check the way data is passed into the Llama 3.1 model in Unsloth.

  • @enesgucuk
    @enesgucuk 2 months ago

    Can I use this code on my local machine, or is it just for cloud computing?

  • @deepadharshinipalrajan8849
    @deepadharshinipalrajan8849 2 months ago

    Are we able to fine-tune a model which is already available in Ollama?

  • @mikemomos1099
    @mikemomos1099 1 month ago

    In this video you showed how to train using the terminal. Can we train it on Google Colab and upload it?

    • @nguyentran7068
      @nguyentran7068 1 month ago

      You can. You can find it immediately by googling "google colab unsloth fine tuning"; the answer is at the top.

  • @zareefbeyg
    @zareefbeyg 2 months ago

    How do I add Llama 3.1 to a Laravel PHP website?
    Please create a video on this topic 🙏🙏🙏

  • @pigreatlor
    @pigreatlor 1 month ago

    good tutorial

  • @yinghaohu8784
    @yinghaohu8784 2 months ago

    Nice video. Can I ask a question? If I just want to save it locally and merged, how do I do it?
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
    Is that correct?
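
    For context, a minimal sketch of a fully local merge-and-save flow, based on Unsloth's documented save helpers; the base model name and output directories are hypothetical:

    from unsloth import FastLanguageModel

    # Load a 4-bit base model (assumed model name; pick whichever base you fine-tuned).
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # ... attach LoRA adapters and fine-tune with your trainer here ...

    # Merge the LoRA adapters into the base weights and save locally in 16-bit.
    model.save_pretrained_merged("merged_model", tokenizer, save_method="merged_16bit")

    # Optionally export a GGUF file for Ollama/llama.cpp, still fully local.
    model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")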

  • @Ajith-i2t
    @Ajith-i2t 2 months ago

    Hello sir, can you tell me how to fine-tune and deploy Llama 3 models on Amazon SageMaker using notebooks?

  • @sgtnik4871
    @sgtnik4871 2 months ago

    How small can a custom dataset be? Is there an automated way to create a dataset, e.g. using an existing LLM to understand the dynamic input and generate the dataset's data based on that?

    • @nguyentran7068
      @nguyentran7068 1 month ago

      It can be as small as 1 row if you like, but look into PEFT techniques, because fine-tuning on a small dataset can lead to overfitting (see the sketch below).
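
      A minimal LoRA sketch using Hugging Face's peft library, one common PEFT technique; the base model name and the rank/alpha values are hypothetical starting points, not tuned recommendations:

      from peft import LoraConfig, get_peft_model
      from transformers import AutoModelForCausalLM

      # Assumed base model; gated models may require authentication.
      model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

      lora_config = LoraConfig(
          r=16,                # low-rank dimension; smaller means fewer trainable parameters
          lora_alpha=16,       # scaling factor for the LoRA updates
          lora_dropout=0.05,   # dropout on the LoRA layers helps on tiny datasets
          target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
          task_type="CAUSAL_LM",
      )

      # Wrap the base model so only the small adapter matrices are trained.
      model = get_peft_model(model, lora_config)
      model.print_trainable_parameters()  # typically well under 1% of the total weights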

  • @wohorexy6913
    @wohorexy6913 2 months ago

    Hello Mervin, I find that Llama 3.1 8B is not great at calculation. Can I fine-tune it?

  • @michaelmueller5211
    @michaelmueller5211 1 month ago

    I'm new to training LLMs. Can I use my own data for training, i.e. scraped data? If so, how, and what should I research?

    • @nguyentran7068
      @nguyentran7068 1 month ago

      You can use any data. Just make sure you format it as a CSV or JSON file with input/output columns like you see in the video. Load it into your code with pandas, or upload it directly to a Hugging Face repo, and start training with the datasets library (see the sketch below).
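
      A minimal sketch of loading a local CSV with the Hugging Face datasets library; the file name and column names are hypothetical:

      from datasets import load_dataset

      # Expects columns such as "input" and "output" in the CSV header.
      dataset = load_dataset("csv", data_files="my_scraped_data.csv")
      print(dataset["train"][0])

      # Alternatively, go through pandas first if the raw data needs cleaning.
      import pandas as pd
      from datasets import Dataset

      df = pd.read_csv("my_scraped_data.csv").dropna(subset=["input", "output"])
      dataset = Dataset.from_pandas(df)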

  • @YTber1
    @YTber1 2 months ago

    Hey, I already have a dataset and tokenizer in JSON format for the Georgian language. I tried to fine-tune Mistral, but the model failed to produce reasonable text. I was training it on Paperspace but did not like their service that much. So now I want to know: what's the best 8B or 7B small model that can learn a foreign language like Georgian on one GPU? Also, what are the easy ways to do this task? I know it's actually a very hard task, but I want some advice.

    • @MervinPraison
      @MervinPraison  2 months ago

      Generally, not all LLMs support every language; it depends on the tokenizer they use.
      Gemma is one model which supports many languages, but not all. Try fine-tuning Gemma on Georgian (you can sanity-check tokenizer coverage with the sketch below).
      Hopefully in the near future there will be models that support all languages. Also try Llama 3.1.
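
      A quick sketch for checking how a tokenizer handles a given language: fewer tokens per character generally indicates better coverage. The model names are assumptions, and gated models may require authentication:

      from transformers import AutoTokenizer

      text = "გამარჯობა, როგორ ხარ?"  # "Hello, how are you?" in Georgian
      for name in ["meta-llama/Meta-Llama-3.1-8B", "google/gemma-7b"]:
          tok = AutoTokenizer.from_pretrained(name)
          ids = tok.encode(text)
          print(f"{name}: {len(ids)} tokens for {len(text)} characters")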

  • @kannansingaravelu
    @kannansingaravelu 2 months ago

    Hi Mervin. I am trying to sign up for Massed Compute, but the coupon code is not recognised. I'm getting this message: "Coupon code is not valid for this GPU Type and/or Quantity." Could you tell me where the code can be applied?

    • @MervinPraison
      @MervinPraison  2 months ago +1

      I will check and get back to you soon

    • @MervinPraison
      @MervinPraison  2 months ago

      @kannansingaravelu Please try the A6000 or A5000 GPUs.
      Those are the ones with the 50% discount for now.

  • @chien67
    @chien67 2 months ago

    Thanks for sharing

  • @One.manuel
    @One.manuel 2 months ago

    Why use all of those Alpaca questions and answers if you want to train your model in a different way?

  • @coenkuijpers1800
    @coenkuijpers1800 2 months ago

    You are testing the fine-tuned model with the data used for training the model. That does not show that the model is working; you don't even need a model for that, as you already have the data.

  • @gemini22581
    @gemini22581 2 months ago

    Why don't you use RAG?

  • @artur50
    @artur50 2 months ago

    Does all the data stay local? And how long did it take you?

    • @Leto2ndAtreides
      @Leto2ndAtreides 2 months ago

      Where else can it go when you're using local models?

    • @artur50
      @artur50 2 months ago

      @@Leto2ndAtreides Hugging Face, for instance…

    • @MervinPraison
      @MervinPraison  2 months ago

      It took approx. 15 minutes for me, but it varies based on the computer spec, the model, the dataset, and the training configuration you are using.

  • @mountshasta2002
    @mountshasta2002 2 months ago

    Can't load the code link for the life of me: 502 Bad Gateway.

  • @timmcgirl5588
    @timmcgirl5588 2 months ago

    Open Interpreter + Groq + Llama 3.1 + n8n + Gorilla AI = a lightning-fast, 100% autonomous agent that automates all workflows with a simple prompt, all open source and free, with access to over 1,600 APIs.

  • @taloot123
    @taloot123 2 months ago

    I want to train it on specific hardware documentation, let's say Arduino/ESP32. Will this help it generate better code for that platform?

    • @MC-hc7wx
      @MC-hc7wx 1 month ago

      You can also use RAG for this type of task. Put your docs in a vector database and let your model query from it; then you can be more confident it won't hallucinate, and you can keep adding the most up-to-date documentation without tampering with the model's training (see the sketch below).
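
      A minimal RAG sketch using chromadb and the ollama Python client; the collection name, documents, and model tag are hypothetical:

      import chromadb
      import ollama

      client = chromadb.Client()
      docs = client.create_collection("esp32_docs")

      # Index a few documentation snippets (embedded by Chroma's default embedder).
      docs.add(
          ids=["gpio-1", "wifi-1"],
          documents=[
              "ESP32 GPIO pins 34-39 are input-only and have no internal pull-ups.",
              "Use WiFi.begin(ssid, password) and wait for WL_CONNECTED before sending data.",
          ],
      )

      # Retrieve the most relevant snippet for a question and hand it to the model.
      question = "Which ESP32 pins are input-only?"
      hits = docs.query(query_texts=[question], n_results=1)
      context = hits["documents"][0][0]

      reply = ollama.chat(
          model="llama3.1",
          messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
      )
      print(reply["message"]["content"])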

  • @intelectualoides8429
    @intelectualoides8429 2 months ago

    how can I do this in the cloud?

  • @deepadharshinipalrajan8849
    @deepadharshinipalrajan8849 2 months ago

    Does Unsloth only work with a GPU?

    • @deepadharshinipalrajan8849
      @deepadharshinipalrajan8849 2 months ago

      Why are we not using the model that is already available in our Ollama? Why do we take the base model from Hugging Face instead?

  • @lemon268
    @lemon268 2 months ago +1

    Is it possible to run this on a MacBook M2 Air?

    • @benoitcorvol7482
      @benoitcorvol7482 2 months ago +1

      I will try on an M3 Air with 16GB and let you know; otherwise, use a VM.

    • @unclecode
      @unclecode 2 months ago

      Absolutely you can, especially 8B models, using Ollama.

    • @MervinPraison
      @MervinPraison  2 months ago +2

      Yes you can. Try MLX: ruclips.net/video/sI1uKhagm7c/видео.html

  • @Mangini037
    @Mangini037 2 months ago +3

    52 Easy Steps

  • @davideallocca2063
    @davideallocca2063 2 months ago

    Is a 4090 enough to train like you did?

    • @kabab-case
      @kabab-case 1 month ago

      That's more than enough.

    • @shivpawar135
      @shivpawar135 29 days ago

      Stop flexing, bro, I know you are being sarcastic 😂😊

  • @rajshah7033
    @rajshah7033 2 months ago

    What Python app is this?

  • @justfatherblog
    @justfatherblog 1 month ago

    How do I fix "Error: no slots available after 10 retries"?

  • @john_blues
    @john_blues 2 months ago

    This was good, but I feel like you ran through everything too fast.

  • @mr_pip_
    @mr_pip_ 2 months ago

    LOL .. did you just say "as simple as that" ?? ^^

  • @kabab-case
    @kabab-case 1 month ago

    There is a problem with all of your videos: you never say why!

    • @MervinPraison
      @MervinPraison  1 month ago

      Do you want me to explain the "why" of fine-tuning?

    • @kabab-case
      @kabab-case 1 month ago

      @@MervinPraison No, I want you to explain why we should include all of the libraries and other code; we don't know what they are doing or why we should use them.

  • @krzysztof5776
    @krzysztof5776 2 months ago

    Unsloth doesn't support Mac. Thank you, goodbye.

    • @IAMTHEMUSK
      @IAMTHEMUSK 25 days ago

      I struggled like hell to run it on Windows. Are they using Linux?

  • @RubbinRobbin
    @RubbinRobbin 1 month ago

    Can we email you?