Generative AI Fine Tuning LLM Models Crash Course
- Published: 4 Jun 2024
- This video is a crash course on how fine-tuning of LLMs can be performed using QLoRA, LoRA, and quantization, with Llama 2, Gradient, and the Google Gemma model. The crash course covers both the theoretical intuition and the practical, hands-on side of fine-tuning.
Timestamps:
00:00:00 Introduction
00:01:20 Quantization Intuition
00:33:44 LoRA and QLoRA In-Depth Intuition
00:56:07 Fine-Tuning with Llama 2
01:20:16 1-Bit LLM In-Depth Intuition
01:37:14 Fine-Tuning with Google Gemma Models
01:59:26 Building LLM Pipelines with No Code
02:20:14 Fine-Tuning with Your Own Custom Data
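The quantization segment's core idea (mapping float weights into a small integer range and back, scaled by the largest magnitude) can be sketched in a few lines of Python. This is a minimal absmax int8 quantize/dequantize demo for intuition only, not the code used in the video:

```python
def absmax_quantize(weights):
    """Symmetric 8-bit quantization: map the largest |w| to 127."""
    scale = 127 / max(abs(w) for w in weights)
    quantized = [round(w * scale) for w in weights]  # ints in [-128, 127]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8-range values."""
    return [q / scale for q in quantized]

weights = [0.5, -1.2, 0.03, 2.4]
q, s = absmax_quantize(weights)
restored = dequantize(q, s)
# Rounding error per weight is bounded by half a quantization step (0.5 / scale)
error = max(abs(a - b) for a, b in zip(weights, restored))
print(q, error)
```

The same principle underlies the 4-bit NF4 quantization used in QLoRA, just with 16 levels instead of 256 and a data-aware spacing of those levels.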
Code Github: github.com/krishnaik06/Finetu...
-------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload more videos like these
/ @krishnaik06
-----------------------------------------------------------------------------------
►Generative AI On AWS: • Starting Generative AI...
►Fresh Langchain Playlist: • Fresh And Updated Lang...
►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
►AWS Bedrock Playlist: • Generative AI In AWS-A...
►Llamindex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning 5 hours : • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk : amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD
Krish Naik respect Button❤
Thank you very much Krish for uploading this.
Thank you so much for such a comprehensive tutorial. Really love your teaching style. Could you also recommend some books on LLM fine-tuning?
Krish... yet again!! I was just looking for your fine-tuning video here and you uploaded this. I can't thank you enough, really 👍😀
Can we connect, brother? I am new to generative AI and want to learn the basics.
Full respect bro, from Morocco (MA).
Just getting your video at the right time!! Kudos brother
Amazing content, big fan of you :) Much love from Hawaii
Thanks Krish it's very helpful
Thank you krish
Brilliant brilliant 🙌
Big salute!
Thank you very much sir🎉🎉🎉
Thanks man!
Thank you for an amazing course, as always. Can we please get these notes as well? They are really good for quick revision.
Hi @krishnaik06,
Thank you again for another crash course.
May I know which tools/software you are using for the presentation?
Krish bro ❤
Please make a complete playlist to secure a job in the field of AI
Can anybody tell me how to fine-tune an LLM for multiple tasks?
Krish, most of the fine-tuning is done with existing datasets from HF; however, converting a dataset into the required format is challenging for any domain dataset. How can we fine-tune the model on our own data so that the accuracy will be even better? Any thoughts?
Can you make a good video on how to decide hyperparameters when training GPT-3.5?
Hi Krish, the video is really good and easy to understand, but I have one question: how did you choose the right dataset, and why? And why are you using that format_func function to format the dataset into that particular format? If you have any tutorial or blog, please share the link.
We want more videos on fine-tuning projects
Hi Krish. What device do you use to write on... like a board?
Hello Krish sir, thank you for the amazing lecture. Can you please share the notes of the session?
Hi Krish, I have seen the entire video. I am confused about two things. Sometimes you said it's possible to train with my own data (own data meaning a URL, PDFs, simple text, etc.), but when you actually train the LLM, you give the inputs in a certain format, like ### Question: Answer.
Now, if I want to fine-tune my LLM in a real-life scenario, I don't have my data in that instruction format; in that case, what should I do? Is fine-tuning only possible in a specific format, or can I train on raw text? I know a process where I convert my text into chunks and then pass it to the LLM. These things are really confusing; can you clear them up?
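To the formatting question above: raw question/answer pairs can be converted into the "### Question / ### Answer" style prompt with a small helper before fine-tuning. A minimal sketch, assuming an Alpaca-style template and a hypothetical format_example helper (not the exact format_func from the video):

```python
def format_example(question: str, answer: str) -> str:
    """Wrap one raw Q/A pair in an instruction-style training prompt."""
    return (
        "### Question:\n"
        f"{question}\n\n"
        "### Answer:\n"
        f"{answer}"
    )

# Raw data scraped from PDFs, URLs, plain text, etc., reduced to Q/A pairs
raw_pairs = [
    ("What is LoRA?",
     "LoRA adds small trainable low-rank matrices to a frozen base model."),
]

dataset = [format_example(q, a) for q, a in raw_pairs]
print(dataset[0])
```

For raw text with no natural Q/A structure, the usual alternatives are plain causal-language-modeling fine-tuning on the chunks themselves, or generating synthetic Q/A pairs from the chunks first.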
Can anyone suggest how to analyze audio for soft skills in speech using Python and LLM models?
What documentation did you refer to in this video?
How do I fine-tune and quantize the Phi-3 Mini model?
Hi sir, I have tried running your Llama fine-tuning notebook on Colab with the free T4 GPU, but it is throwing an OOM error. Could you please guide me?
RAG or fine-tuning? How should one decide?
Actually sir, I am not able to run this step:
!pip install -q datasets
!huggingface-cli login
Because of this, the dataset can't be loaded and I am getting errors in later steps. Is there any solution for this?
After the fine-tuning process in this video, isn't it the same old model that is used to test the queries? We should have tested the queries with the "new_model", shouldn't we?
If I want to join the data science community group, please let me know where I can get access.
How do we deploy these? I have seen deployments of custom LLM models... how do we do this?
Please also provide the sources: the research paper/blog you might have referred to for this video.
Hey Krish, can you by any chance share the notes used in the video? It would be really helpful. Thanks!!
🙏💯💯
Hello Krishna sir ,
Please make a playlist for GenAI and LangChain
Already made, please check
@@krishnaik06 Thank you for replying to me
Pre-requisites?
Prerequisites?
I don't know why, but I feel training a whole model from scratch is much easier for me than fine-tuning it..............
Yeah, training a model from scratch on your dataset might look better and more optimal, but the energy used to train a model from scratch is too high, so fine-tuning a pretrained model is considered a better option than training a model for specific data every time....
I HAVE ONLY ONE HEART.
HOW MANY TIMES WILL YOU WIN IT, SIR?
I understand this video just like your hair: sometimes nothing, sometimes something ❤🫠