This helped me with my fine-tuning on an image dataset.
How to load your own dataset?
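In case it helps: the usual flow in LLaMA-Factory is to drop your JSON file into the data/ folder, register it in data/dataset_info.json, and then select it by name. A rough sketch of an entry (my_vl_data / my_vl_data.json are placeholders, and the exact column fields can differ between versions, so check data/README.md):

"my_vl_data": {
  "file_name": "my_vl_data.json",
  "formatting": "sharegpt",
  "columns": { "messages": "messages", "images": "images" }
}

After that you can pick my_vl_data in the web UI's dataset dropdown or set dataset: my_vl_data in a training YAML.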
Long-awaited video. Thank you!
Hope you enjoyed it!
thank you so much
You're welcome!
Does the custom dataset always have to be in ShareGPT format for fine-tuning? If not, can we use another JSON format?
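It doesn't have to be ShareGPT; LLaMA-Factory also accepts the alpaca-style JSON format. For image data, though, the bundled multimodal demo uses the ShareGPT/messages layout, where a single record looks roughly like this (file paths and wording are only illustrative):

{
  "messages": [
    { "role": "user", "content": "<image>What team is written on the board?" },
    { "role": "assistant", "content": "Manchester United." }
  ],
  "images": ["images/board.jpg"]
}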
How do we use different datasets and run inference?
How can we change the LoRA rank (r) and alpha values?
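If you're in the web UI, rank and alpha sit under the LoRA configuration section; in a YAML config they are usually the lora_rank and lora_alpha fields. A sketch (the values are only examples, not recommendations):

finetuning_type: lora
lora_rank: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target: all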
Hi, please let us know how to use a custom dataset from a local folder, or a dataset from Hugging Face, and which parameter we should change.
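Roughly speaking, where the data comes from is decided by its entry in data/dataset_info.json, and the training config then only needs the dataset (and optionally dataset_dir) parameters. A sketch with placeholder names:

"my_local_data": { "file_name": "my_local_data.json", "formatting": "sharegpt" }
"my_hub_data":   { "hf_hub_url": "your-username/your-dataset", "formatting": "sharegpt" }

Then in the training YAML:
dataset: my_local_data
dataset_dir: data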
Can a multimodal LLM be fine-tuned for sentiment analysis?
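It can, at least in the sense that you phrase the labels as the assistant's reply. A sentiment-style record in the same messages layout might look like this (purely illustrative):

{
  "messages": [
    { "role": "user", "content": "<image>What is the sentiment of this image?" },
    { "role": "assistant", "content": "negative" }
  ],
  "images": ["images/sample_001.jpg"]
}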
Can you change your camera settings to mirror your video so that we can see the text correctly in the background (Manchester United and the other drawings on the board)?
Sure, will do that.
Awesome video. I am the first one to comment. 🎉🎉
Thanks
LLaMA-Factory is buggy. You can NOT fine-tune with the vision tower unfrozen.
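For what it's worth, the common workaround is to keep the vision tower frozen and only train the language model (plus the LoRA adapters). In a training YAML that would look something like this (the parameter name may differ between LLaMA-Factory versions, so verify against your install):

freeze_vision_tower: true   # leave the ViT untouched, train only the LLM/LoRA side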
Hey, thanks for the video, you're quite a smart guy. My question is about the vision features of Qwen2-VL: can I wire up my camera so that my model can see?
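Qwen2-VL just takes images as input, so you can grab webcam frames and feed them in. A rough sketch using OpenCV and the plain transformers API (the model name, prompt, and camera index are only examples, and error handling is omitted):

import cv2
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# load the base instruct model (swap in your fine-tuned checkpoint if you have one)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

# grab a single frame from the default webcam and convert BGR -> RGB
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# build a one-image chat prompt and run it through the processor
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What do you see in front of the camera?"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# generate and decode only the newly produced tokens
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])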
I like your videos, but the flag behind you... uhhh.
Can you please let me know how I could do inference? I did the training with the Gradio UI and have the model saved in /LLaMA-Factory/saves/Qwen2VL-2B-Chat/lora/train_2024-09-13-12-36-15.
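In case it's useful: with a LoRA adapter saved like that, the quickest check is the chat command with an inference YAML pointing at both the base model and the adapter. A sketch (this assumes the base model was Qwen/Qwen2-VL-2B-Instruct and that your LLaMA-Factory version uses the qwen2_vl template name, so double-check both):

model_name_or_path: Qwen/Qwen2-VL-2B-Instruct
adapter_name_or_path: saves/Qwen2VL-2B-Chat/lora/train_2024-09-13-12-36-15
template: qwen2_vl
finetuning_type: lora

Save it as e.g. infer_qwen2vl.yaml and run llamafactory-cli chat infer_qwen2vl.yaml (or llamafactory-cli webchat infer_qwen2vl.yaml for a browser chat).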
Thank you so much!