Part 1 - Road To Learn Finetuning LLM With Custom Data - Quantization, LoRA, QLoRA Indepth Intuition
- Published: 28 Nov 2024
- Quantization is a common technique used to reduce the model size, though it can sometimes result in reduced accuracy.
Quantization-aware training is a method that allows practitioners to apply quantization techniques without sacrificing accuracy. It is done in the model training process rather than after the fact. The model size can typically be reduced by two to four times, and sometimes even more.
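For intuition, the basic quantize/dequantize round trip can be sketched in a few lines. This is a minimal per-tensor symmetric int8 sketch of my own, not the video's code; real toolkits (PyTorch, TensorRT, bitsandbytes) add per-channel scales, zero-points, and calibration:

```python
# Minimal sketch of symmetric int8 quantization of a weight list.
# One scale maps the float range onto the int8 range [-127, 127].

def quantize_int8(weights):
    """Map float weights to int8 values with a single shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.732, -1.5, 0.01, 1.2]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Each recovered value is close to, but not exactly, the original.
# That rounding gap is the accuracy loss quantization can introduce.
```

Storing int8 instead of float32 is what gives the roughly 4x size reduction mentioned above; the small rounding error in `recovered` is what quantization-aware training lets the model adapt to.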
Fine Tuning Playlist: • Steps By Step Tutorial...
-------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload these kinds of videos
/ @krishnaik06
-----------------------------------------------------------------------------------
►AWS Bedrock Playlist: • Generative AI In AWS-A...
►Llamindex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning 5 hours : • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk : amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD
Your DS-ML videos helped me crack the Big4 3 years ago, and after so many years I am coming back to my favourite teacher to learn about GenAI. Understanding these complicated subjects and then teaching them to us requires another level of dedication. Thanks a lot, Krish, for your constant hard work. You are a boon to all the students out there.
Founder of ineuron and still consistent on RUclips and teaching 😮. Great
Yesterday only I requested this video, and here it is today.
Thank you so, so much Krish!! Can't express enough gratitude.
Thanks for explaining quantization. Please also explain more topics such as PEFT, LoRA, QLoRA, AI3, MoE; you are the only person who can explain these topics with intuition and mathematical concepts. Keep it up. ❤
Lots of love and respect from Lahore, Pakistan. I have a very clear understanding of artificial intelligence from watching your RUclips videos.
I am a GenAI Engineer and this is the most in-depth video on quantization I have ever seen.
Thank you very much for your hard work; I don't know what I'd do if you didn't post this video. I think you're the only creator in this entire RUclips community who has described this topic so well. I am waiting for your LoRA & QLoRA video.
Thanks.
You are really awesome, Krish sir; most of my skills are a gift from you... But I'm still not getting a job, need help.
I was really struggling to understand fine-tuning and couldn't find proper tutorials for it, but from this video I am positive that by the end of this playlist I will understand fine-tuning properly.
Krish, you are making us also EVOLVE along with you :-) Thanks a lot for this beautiful explanation.
Really awesome content, and as always no words for your teaching skill. Thanks for sharing the knowledge.
We are waiting for part 2
Can you please upload in-depth videos on how different prompting techniques like chain-of-thought, self-consistency, knowledge generation, etc. are practically used to improve model outputs for different use cases?
I have been requesting this for a long time; please make videos on these topics.
Sir! Make a whole playlist on Data Science end to end.
Sir, make the world's best Data Science course on YouTube, covering all topics.
We are waiting🥱🥱
Sir, please make a video on the model pruning technique for LLMs.
Please upload the fine-tuning video as fast as possible, sir.
Hey Krish, you mentioned that you learnt generative AI theory 2-3 months ago. Could you please share the resources you used during your learning journey? Any guidance would be greatly appreciated!
Please make a video on LLMs for robotics, with practicals, please.
Very nice session, learned so much about quantization.
Hi, can you tell whether training custom embeddings improves performance?
Great
Waiting for part 2 sir❤
Thank you sir
Krish, I have a doubt: if I have fine-tuned a model with my custom data, and my data keeps growing every week, do I need to fine-tune again and again every week?
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
Hi sir, good morning. I have one doubt, I hope you can help me. I am practicing a cotton disease project. When I run prediction on non-cotton images, I want it to show "there is no image".
Sir, this was a better explanation of quantization. But why is this not covered in your paid Mastering LLM course on Ineuron? In that course, only a high-level overview is given, without the mathematical intuition.
❤❤❤ thanks
Krish, can you please make a video on Kubernetes, like the Docker one?
Excellent series
Sir, I actually had a question: what are the prerequisites to understand every bit of fine-tuning these large language models? Can you point me to some of your resources?
What is the name of the software on your M2 that you are using for the writing, sir?
Hello sir, I am searching for an internship related to data science, data analytics, machine learning...
If there is any platform or organization, then please let me know.
It will be very helpful for me.
Awesome 💯
Hello Krish sir, good evening.
Why is the ineuron support team not responding?
I have been trying since last Friday regarding Neuro Lab, and for the last three days regarding assessment evaluation, but no one is responding.
PTQ and QAT are not clear.
PTQ: Train, then shrink (fine-tune first, then quantize).
QAT: Train with quantization simulated in the loop (combine fine-tuning and quantization).
You're right. In both cases quantization happens, which leads to loss of info. For QAT he mentioned that there is no loss of info; I couldn't understand how.
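The confusion in this thread can be illustrated with a toy example (my own sketch, not from the video). QAT does not avoid rounding; it puts rounding inside the training loop, so multiple weights can compensate for each other's rounding error, which after-the-fact PTQ rounding cannot do:

```python
# Toy model y = w1*x1 + w2*x2, quantized to a very coarse 0.25 grid.
STEP = 0.25
q = lambda w: round(w / STEP) * STEP   # fake-quantize to the grid

x1, x2, target = 1.0, 0.5, 1.1

# PTQ: train in float first (gradient descent from zero converges to the
# minimum-norm least-squares solution), THEN round each weight.
w1_f = target * x1 / (x1**2 + x2**2)   # 0.88
w2_f = target * x2 / (x1**2 + x2**2)   # 0.44
err_ptq = abs(q(w1_f) * x1 + q(w2_f) * x2 - target)

# QAT: rounding happens during "training", so search directly over
# grid-representable weight pairs (brute force stands in for training).
grid = [i * STEP for i in range(-8, 9)]
err_qat = min(abs(w1 * x1 + w2 * x2 - target)
              for w1 in grid for w2 in grid)

# err_qat < err_ptq: the QAT weights are chosen knowing they will be
# rounded, so the pair jointly lands much closer to the target.
```

So info is still lost per weight in QAT; the point is the network learns weights that work well *despite* that loss, which is why accuracy usually survives.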
Hi Krishnaik, can you please create a series on securing LLM responses and guardrails, as it is a burning topic nowadays. Sincere request.
Hi, sir...There is a mistake when you explain what exponent and mantissa are.
The exponent is not the integer value; it is how many times you multiply by 10. In this case, the number would be stored as 10 * 0.732.
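The commenter's correction is right in spirit: the exponent scales the mantissa, it does not store the integer part. Strictly, float32 uses base 2 rather than base 10, which can be checked directly with a small stdlib-only sketch (my own illustration):

```python
import struct

def fp32_fields(x):
    """Split a float32 into its IEEE-754 sign, exponent, and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1 sign bit
    exponent = (bits >> 23) & 0xFF    # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 mantissa (fraction) bits
    return sign, exponent, mantissa

sign, exp, man = fp32_fields(7.32)
# value = (-1)^sign * (1 + mantissa/2^23) * 2^(exponent - 127)
reconstructed = (-1) ** sign * (1 + man / 2**23) * 2 ** (exp - 127)
```

For 7.32 the unbiased exponent is 2 (since 4 <= 7.32 < 8), and the mantissa holds the significant digits scaled into [1, 2); reducing those mantissa bits is exactly what going from FP32 to FP16/INT8 quantization gives up.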
Amazing many thanks 🙏
good explanation
Question: can I watch this playlist with basic knowledge of LangChain and RAG, or do I need to watch Krish Naik's 'updated langchain' playlist first to understand this playlist?
May I know what app you are using to perform the writing and drawing on the screen?
Do the remaining videos about QLoRA, LoRA, PEFT ASAP please, please, please 🙏
Can you please add that first video to this same playlist?
How do we handle hallucination when fine-tuning is done?
Is QAT not applicable to a pre-trained model??
Hey everyone,
I'm Ani,
and I'm seriously confused about a few things and need help from anyone, please.
I want to create good AI models, so I started learning AI/ML. I have some coding experience from before; I was learning DSA, but for the last 1.5 months I've been stuck in it. Today when I was watching this video I got my interest back, but the thing is I don't have any practical experience, so I don't really understand it properly. Can you please clarify what I should do: shall I continue learning this, or must I first do Machine Learning and then come back to this?
Can anyone tell me quickly how I can get only the response from Llama 2? It gives first the input and then the response, but I want only the response.
What is the name of your notepad software?
PDFs of such notes would be highly appreciated.
I feel like the exponent and mantissa are wrong in the explanation.
Why are you teaching this, guru ji, what's the benefit of it? That's why we will watch.
So many ads in your video,
Great