WOW, even I, who knows nothing about SQL or how to train an LLM, learned so much from your presentation. Thank you!
You're very welcome!
Thanks for the straightforward explanation. 👍🏻 BTW, how do I evaluate the fine-tuned model's results? Is it good enough to use?
Amazing, great job. It clearly shows how we can fine-tune Gemma. Looking forward to the next video!
Glad you enjoyed it!
Wonderful! Thanks for the video. After the changes it performed somewhat better:

from peft import LoraConfig

lora_config = LoraConfig(
    lora_alpha=128,               # scaling factor for the LoRA updates
    lora_dropout=0.05,            # dropout applied to the LoRA layers
    r=256,                        # rank of the low-rank update matrices
    bias="none",                  # leave base-model biases untouched
    target_modules="all-linear",  # apply LoRA to every linear layer
    task_type="CAUSAL_LM",
)
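In case it helps others: a config like this is typically attached to the base model with peft's get_peft_model. This is a minimal sketch, not the video's exact setup; the model id google/gemma-2b is an assumption.

from transformers import AutoModelForCausalLM
from peft import get_peft_model

# Assumed base checkpoint; any Gemma variant works the same way.
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")

# Wrap the base model so only the LoRA parameters are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: trainable vs. total params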
Hi,
Nice video. Can you make a video on running Gemma on a local machine using PyTorch? Thank you
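Until then, here is a minimal local-inference sketch with plain transformers/PyTorch; the model id google/gemma-2b is an assumption, so swap in whichever Gemma checkpoint you have access to.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory on supported GPUs
    device_map="auto",           # place weights on GPU/CPU automatically
)

inputs = tokenizer("Hello, Gemma!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))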
Great content! I was wondering: how should I save the model and push it to the Hugging Face Hub after the fine-tuning?
+1
Hi, did you do it?
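For anyone still stuck on this: with peft the usual pattern is save_pretrained plus push_to_hub. A rough sketch, not the video's code; the repo id below is a placeholder.

# Save the LoRA adapter weights locally (assuming an SFT-style trainer).
trainer.model.save_pretrained("gemma-sql-adapter")
tokenizer.save_pretrained("gemma-sql-adapter")

# Push the adapter and tokenizer to the Hub (run `huggingface-cli login` first).
trainer.model.push_to_hub("your-username/gemma-sql-adapter")  # placeholder repo id
tokenizer.push_to_hub("your-username/gemma-sql-adapter")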
Hey, great video. I wanted to know good sources of information regarding generative AI and LLMs. I am mostly interested in training efficient LLMs. Thanks.
Sure, I'll create a video on this soon!
Thanks for the video, bro! Do you have the code on Github?
Yes, I have! Check the description section. I'm glad you liked the video.
Thank you for the video. How can I read spreadsheet data using Gemma and perform mathematical operations on it?
I'm glad you liked the video, I'll create a video on this soon!
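One common pattern in the meantime (my sketch, not from the video): do the arithmetic in pandas, then let Gemma turn the numbers into prose. The file sales.xlsx and the revenue column are made-up examples.

import pandas as pd

# Hypothetical spreadsheet; pandas does the math reliably,
# and the LLM only summarizes the result.
df = pd.read_excel("sales.xlsx")
total = df["revenue"].sum()
average = df["revenue"].mean()

prompt = f"Total revenue is {total:.2f} and the average is {average:.2f}. Summarize this."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))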
Great
Thank you!
How do I avoid repetitive responses from model.generate()?
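transformers' generate() exposes a few knobs for this; the values below are starting points to tune, not settings from the video.

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # soften the token distribution
    top_p=0.9,               # nucleus sampling
    repetition_penalty=1.2,  # penalize tokens that already appeared
    no_repeat_ngram_size=3,  # forbid repeating any 3-gram verbatim
)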
Now how do I download this AI model and use it locally?
How do I save the fine-tuned model locally? Please provide the code for that.
Sure!
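A rough sketch until then: assuming a peft-wrapped model, you can merge the adapter into the base weights and save everything to a local folder (the path is a placeholder).

# Fold the LoRA weights into the base model for standalone local use.
merged = model.merge_and_unload()
merged.save_pretrained("./gemma-sql-merged")  # placeholder path
tokenizer.save_pretrained("./gemma-sql-merged")

# Later, load it back entirely offline:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("./gemma-sql-merged", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("./gemma-sql-merged")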
Wait until you find that it appends a secret "diversity constraint" to your original query every time you search for, say, "white paint" or the like.
Thanks but no thanks.
After fine-tuning, if I ask a general question it still gives an SQL response:

text = "Quote: Our doubts are traitors,"
device = "cuda:0"

# Tokenize the prompt, move it to the GPU, and generate a short completion.
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Output:
Quote: Our doubts are traitors, and we must not doubt.
Context: CREATE TABLE table_name_7 (doubt VARCHAR,
Because you have fine-tuned the LLM for your use case! Training only on SQL examples biases the model toward SQL-style completions, even for unrelated prompts.
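If you keep the LoRA adapter separate instead of merging it, peft can temporarily switch it off so the base model answers general questions again. A minimal sketch, assuming model is a PeftModel:

# Bypass the LoRA weights and generate with the original base model.
with model.disable_adapter():
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))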