Generative AI Fine Tuning LLM Models Crash Course

  • Published: 4 Jun 2024
  • This video is a crash course on how fine-tuning of LLM models can be performed using QLoRA, LoRA, and quantization, with Llama 2, Gradient, and the Google Gemma model. The crash course covers both the theoretical and the practical intuition you need to perform fine-tuning; a minimal QLoRA setup sketch follows the GitHub link below.
    Timestamp:
    00:00:00 Introduction
    00:01:20 Quantization Intuition
    00:33:44 LoRA And QLoRA In-depth Intuition
    00:56:07 Fine-tuning With Llama 2
    01:20:16 1-bit LLM In-depth Intuition
    01:37:14 Fine-tuning With Google Gemma Models
    01:59:26 Building LLM Pipelines With No Code
    02:20:14 Fine-tuning With Own Custom Data
    Code Github: github.com/krishnaik06/Finetu...
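    A minimal sketch of the QLoRA setup the course describes (the model id, LoRA rank, and target modules below are illustrative assumptions, not taken from the video's notebook):

    # Minimal QLoRA-style setup sketch (model id and hyperparameters are assumptions)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    base_model = "NousResearch/Llama-2-7b-chat-hf"  # illustrative base model

    # 4-bit NF4 quantization (the "Q" in QLoRA): weights stored in 4 bits, compute in fp16
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(
        base_model, quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    # LoRA adapters: small trainable low-rank matrices on top of the frozen 4-bit weights
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # which projections to adapt is an assumption
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the adapter weights are trainable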
    -------------------------------------------------------------------------------------------------
    Support me by joining the membership so that I can upload these kinds of videos
    / @krishnaik06
    -----------------------------------------------------------------------------------
    ►Generative AI On AWS: • Starting Generative AI...
    ►Fresh Langchain Playlist: • Fresh And Updated Lang...
    ►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
    ►AWS Bedrock Playlist: • Generative AI In AWS-A...
    ►LlamaIndex Playlist: • Announcing LlamaIndex ...
    ►Google Gemini Playlist: • Google Is On Another L...
    ►Langchain Playlist: • Amazing Langchain Seri...
    ►Data Science Projects:
    • Now you Can Crack Any ...
    ►Learn In One Tutorials
    Statistics in 6 hours: • Complete Statistics Fo...
    Machine Learning In 6 Hours: • Complete Machine Learn...
    Deep Learning 5 hours : • Deep Learning Indepth ...
    ►Learn In a Week Playlist
    Statistics: • Live Day 1- Introducti...
    Machine Learning : • Announcing 7 Days Live...
    Deep Learning: • 5 Days Live Deep Learn...
    NLP : • Announcing NLP Live co...
    ---------------------------------------------------------------------------------------------------
    My Recording Gear
    Laptop: amzn.to/4886inY
    Office Desk : amzn.to/48nAWcO
    Camera: amzn.to/3vcEIHS
    Writing Pad: amzn.to/3OuXq41
    Monitor: amzn.to/3vcEIHS
    Audio Accessories: amzn.to/48nbgxD
    Audio Mic: amzn.to/48nbgxD

Comments • 48

  • @yogeshmagar452
    @yogeshmagar452 29 days ago +17

    Krish Naik respect Button❤

  • @BabaAndBaby11
    @BabaAndBaby11 29 days ago +1

    Thank you very much Krish for uploading this.

  • @souvikchandra
    @souvikchandra 3 days ago

    Thank you so much for such a comprehensive tutorial. I really love your teaching style. Could you also recommend some books on LLM fine-tuning?

  • @prekshamishra9750
    @prekshamishra9750 29 days ago +2

    Krish... yet again!! I was just looking for your fine-tuning video here and you uploaded this. I can't thank you enough... really 👍😀

    • @sohampatil8681
      @sohampatil8681 29 days ago

      Can we connect, brother? I am new to generative AI and want to learn the basics.

  • @Mohama589
    @Mohama589 27 days ago +1

    Full respect bro, from Morocco (MA).

  • @senthilkumarradhakrishnan744
    @senthilkumarradhakrishnan744 28 days ago

    Just getting your video at the right time!! Kudos, brother.

  • @lalaniwerake881
    @lalaniwerake881 6 days ago

    Amazing content, big fan of yours :) Much love from Hawaii

  • @abhishekvarma4449
    @abhishekvarma4449 6 days ago

    Thanks Krish, it's very helpful.

  • @tejasahirrao1103
    @tejasahirrao1103 29 days ago +3

    Thank you Krish

  • @rutujasurve4172
    @rutujasurve4172 28 days ago

    Brilliant brilliant 🙌

  • @shalabhchaturvedi6290
    @shalabhchaturvedi6290 29 days ago

    Big salute!

  • @virkumardevtale9671
    @virkumardevtale9671 28 days ago

    Thank you very much, sir 🎉🎉🎉

  • @shakilkhan4306
    @shakilkhan4306 28 days ago

    Thanks man!

  • @sadiazaman7903
    @sadiazaman7903 22 days ago

    Thank you for an amazing course, as always. Can we please get these notes as well? They are really good for quick revision.

  • @foysalmamun5106
    @foysalmamun5106 29 days ago

    Hi @krishnaik06,
    Thank you again for another crash course.
    May I know which tools/software you are using for the presentation?

  • @AshokKumar-mg1wx
    @AshokKumar-mg1wx 29 days ago

    Krish bro ❤

  • @tejasahirrao1103
    @tejasahirrao1103 29 days ago +1

    Please make a complete playlist on how to secure a job in the field of AI.

  • @JunaidKhan80121
    @JunaidKhan80121 19 days ago +1

    Can anybody tell me how to fine-tune an LLM for multiple tasks?

  • @Jeganbaskaran
    @Jeganbaskaran 28 days ago

    Krish, most fine-tuning is done with existing datasets from HF. However, converting a dataset into the required format is challenging for any domain-specific dataset. How can we train on our own data to fine-tune the model so that the accuracy will be even better? Any thoughts?
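    One rough way to do that (a sketch, not the exact code from the video; the CSV file and column names are assumptions) is to map your own question/answer pairs into the instruction-style text field that these fine-tuning notebooks expect:

    # Sketch: convert a domain CSV with "question" and "answer" columns (assumed names)
    # into a Hugging Face Dataset with a single formatted "text" field
    import pandas as pd
    from datasets import Dataset

    df = pd.read_csv("my_domain_qa.csv")  # hypothetical file with your own data

    def to_prompt(row):
        return f"### Instruction:\n{row['question']}\n\n### Response:\n{row['answer']}"

    df["text"] = df.apply(to_prompt, axis=1)
    train_ds = Dataset.from_pandas(df[["text"]])
    print(train_ds[0]["text"])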

  • @yashshukla7025
    @yashshukla7025 29 days ago +1

    Can you make a good video on how to decide hyperparameters when training GPT-3.5?

  • @AntonyPraveenkumar
    @AntonyPraveenkumar 16 days ago

    Hi Krish, the video is really good and easy to understand, but I have one question: how did you choose the right dataset, and why? Why are you using that format_func function to format the dataset into that particular format? If you have any tutorial or blog, please share the link.

  • @EkNidhi
    @EkNidhi 25 days ago

    We want more videos on fine-tuning projects.

  • @rotimiakanni8436
    @rotimiakanni8436 14 days ago

    Hi Krish. What device do you use to write on... like a board?

  • @anupampandey1235
    @anupampandey1235 29 days ago

    Hello Krish sir, thank you for the amazing lecture. Can you please share the notes of the session?

  • @rebhuroy3713
    @rebhuroy3713 24 days ago

    Hi Krish, I have seen the entire video and I am confused about two things. Sometimes you say it's possible to train with my own data (own data meaning data from a URL, PDFs, simple text, etc.), but when you actually train the LLM you give the inputs in a certain format like ### question: answer.
    Now, in a real-life scenario I don't have my data in this instruction format, so what should I do? It isn't straightforward to transform my raw text into that format. Is fine-tuning only possible with that specific format, or can I train on raw text? I know a process where I convert my text into chunks and then pass them to the LLM. These points are really confusing; could you clarify them?
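    If the data is only raw text (PDF extracts, scraped pages), one option is to skip the instruction template entirely and fine-tune the causal LM on plain text chunks. A rough sketch, with the chunk size and overlap as arbitrary assumptions:

    # Sketch: turn raw documents into fixed-size text chunks for plain causal-LM fine-tuning,
    # with no "### question: answer" template required
    from datasets import Dataset

    raw_docs = [
        "...full text extracted from a PDF...",
        "...text scraped from a URL...",
    ]

    def chunk(text, size=1000, overlap=100):
        # simple character-based sliding window
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

    chunks = [c for doc in raw_docs for c in chunk(doc)]
    train_ds = Dataset.from_dict({"text": chunks})
    print(len(train_ds), "chunks ready for a trainer that reads the 'text' field")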

  • @nitinjain4519
    @nitinjain4519 28 days ago

    Can anyone suggest how to analyze audio for soft skills in speech using Python and LLM models?

  • @sibims653
    @sibims653 14 days ago

    What documentation did you refer to in this video?

  • @stalinjohn721
    @stalinjohn721 8 hours ago

    How can we fine-tune and quantize the Phi-3 mini model?

  • @rakeshpanigrahy7000
    @rakeshpanigrahy7000 21 days ago

    Hi sir, I tried running your Llama fine-tuning notebook on Colab with the free T4 GPU, but it is throwing an OOM error. Could you please guide me?

  • @adityavipradas3252
    @adityavipradas3252 21 days ago

    RAG or fine-tuning? How should one decide?

  • @abhaypatel2585
    @abhaypatel2585 2 days ago

    Actually sir, I can't run this step:
    !pip install -q datasets
    !huggingface-cli login
    Because of this the dataset can't be loaded and I get errors in the other steps.
    Is there any solution for this?
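    One workaround that often helps in Colab (the token string below is a placeholder; generate your own at huggingface.co/settings/tokens) is to log in programmatically instead of through the interactive CLI prompt:

    # Sketch: non-interactive Hugging Face login in Colab (token value is a placeholder)
    from huggingface_hub import login

    login(token="hf_xxx_your_token_here")  # replace with your own access token

    from datasets import load_dataset
    ds = load_dataset("mlabonne/guanaco-llama2-1k")  # example dataset id; any public set works once logged in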

  • @arunkrishna1036
    @arunkrishna1036 23 days ago

    After the fine-tuning process in this video, isn't it the same old model that is used to test the queries? We should have tested the queries with the "new_model", shouldn't we?

  • @rahulmanocha4533
    @rahulmanocha4533 28 days ago

    If I would like to join the data science community group, where can I get access? Please let me know.

  • @hassanahmad1483
    @hassanahmad1483 29 days ago

    How to deploy these?...I have seen deployment of custom LLM models...how to do this?
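    One rough path (a sketch under assumptions, not the video's exact steps; repo names and paths are placeholders) is to merge the trained LoRA adapter into the base model and push the merged weights to the Hugging Face Hub, then serve them like any other model:

    # Sketch: merge a LoRA adapter into its base model and publish it (names are placeholders)
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf", torch_dtype="auto")
    model = PeftModel.from_pretrained(base, "path/to/new_model_adapter")  # adapter dir saved after fine-tuning

    merged = model.merge_and_unload()  # bake the LoRA weights into the base model
    merged.push_to_hub("your-username/llama2-finetuned-demo")
    tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
    tokenizer.push_to_hub("your-username/llama2-finetuned-demo")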

  • @SheiphanJoseph
    @SheiphanJoseph 28 days ago

    Please also provide the sources: the research papers/blogs you might have referred to for this video.

  • @charithk2160
    @charithk2160 20 days ago

    Hey Krish, can you by any chance share the notes used in the video? It would be really helpful. Thanks!!

  • @rishiraj2548
    @rishiraj2548 28 days ago

    🙏💯💯

  • @avdsuresh
    @avdsuresh 28 days ago

    Hello Krishna sir,
    Please make a playlist for GenAI and LangChain.

    • @krishnaik06
      @krishnaik06 28 days ago

      Already made, please check.

    • @avdsuresh
      @avdsuresh 28 days ago

      @krishnaik06 Thank you for replying to me.

  • @nitishbyahut25
    @nitishbyahut25 29 days ago

    Pre-requisites?

  • @salihedneer8975
    @salihedneer8975 29 days ago

    Prerequisites?

  • @deepaksingh-qd7xm
    @deepaksingh-qd7xm 29 days ago

    I don't know why, but I feel training a whole model from scratch is much easier for me than fine-tuning one...

    • @kartik1409
      @kartik1409 2 days ago

      Yeah, training a model from scratch on your dataset might look better and more optimal, but the energy used in training from scratch is too high, so fine-tuning a pretrained model is considered a better option than training a model for specific data every time...

  • @shivam_shahi
    @shivam_shahi 29 days ago

    There's only one heart.
    How many times will you win it, sir?

  • @RensLive
    @RensLive 23 days ago

    I understand this video just like your hair: sometimes nothing, sometimes something ❤🫠