Full text tutorial: www.mlexpert.io/prompt-engineering/stable-vicuna
Full Prompt Engineering with LangChain tutorial: www.mlexpert.io/prompt-engineering
Thanks for the video. I'd recommend not spending time on a comparative reading of the generated results.
More importantly, were any of the models you mentioned trained on or fed personal data?
If so, do any of the models in this video store or access that personal data?
Thanks for your answer and the video. It really helps. Keep up the good work!
It's a very good project. If you find it valuable, I'd like to request a video showing the process behind the trained model, for instance, the starting point for creating it. I know they started from Vicuna, but for the three-stage RLHF pipeline they describe (training the base Vicuna model with supervised fine-tuning (SFT) on a mixture of three datasets, using trlX to train a reward model, and using trlX to run Proximal Policy Optimization (PPO) reinforcement learning on the SFT model) I haven't found anyone explaining how it works.
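For intuition on the middle stage of that pipeline, here is a deliberately toy sketch of a pairwise (Bradley-Terry style) reward model fit on human preference pairs. This is illustrative only, not the actual trlX or StableVicuna code: real reward models put a scalar head on a transformer, whereas here a plain dict of scores stands in for it, and all names are made up.

```python
import math

def train_reward_model(preference_pairs, lr=0.5, steps=200):
    """Toy pairwise reward model: learn one scalar score per response
    so that, for each human-labeled (chosen, rejected) pair,
    score(chosen) ends up above score(rejected)."""
    scores = {}
    for chosen, rejected in preference_pairs:
        scores.setdefault(chosen, 0.0)
        scores.setdefault(rejected, 0.0)
    for _ in range(steps):
        for chosen, rejected in preference_pairs:
            # Pairwise logistic loss: p = sigmoid(score_chosen - score_rejected)
            p = 1.0 / (1.0 + math.exp(scores[rejected] - scores[chosen]))
            # Gradient step pushes the chosen response up, the rejected one down.
            scores[chosen] += lr * (1.0 - p)
            scores[rejected] -= lr * (1.0 - p)
    return scores

pairs = [("helpful answer", "rude answer"),
         ("helpful answer", "off-topic answer")]
reward = train_reward_model(pairs)
# The human-preferred response now scores highest; stage three (PPO)
# would optimize the SFT model against this learned reward signal.
```

The third stage then treats the SFT model as the PPO policy and the reward model's score as the reward, usually with a KL penalty to keep the policy close to the SFT model.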
Great video. Is the fine-tuning code for StableVicuna open-sourced?
How can I get a faster response time?
Very helpful!
I love your Russian accent 🎉
Sorry, it's more of a Bulgarian accent 😅
Error loading at 33% 😞
Traceback (most recent call last):
  File "C:\oobabooga_windows\text-generation-webui\server.py", line 59, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\oobabooga_windows\text-generation-webui\modules\models.py", line 219, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 3123, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 664, in _load_state_dict_into_meta_model
    param = param.to(dtype)
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 283115520 bytes.
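That RuntimeError is the system running out of RAM while the weights are being loaded onto the CPU. Some back-of-the-envelope arithmetic (illustrative numbers, assuming a roughly 13B-parameter checkpoint like StableVicuna) shows why:

```python
# Rough RAM needed just to hold the model weights in memory.
PARAMS_13B = 13_000_000_000  # approximate parameter count

def weights_gib(n_params, bytes_per_param):
    """RAM in GiB to hold n_params weights at the given precision."""
    return n_params * bytes_per_param / 2**30

fp32_gib = weights_gib(PARAMS_13B, 4)  # full precision: ~48 GiB
fp16_gib = weights_gib(PARAMS_13B, 2)  # half precision: ~24 GiB
int8_gib = weights_gib(PARAMS_13B, 1)  # 8-bit quantized: ~12 GiB

# The allocation that finally failed was comparatively small;
# RAM was already exhausted before this ~270 MiB tensor:
failed_mib = 283_115_520 / 2**20  # 270.0 MiB
```

In other words, a typical desktop simply doesn't have enough free RAM for the fp16 or fp32 weights. Loading with 8-bit or 4-bit quantization, or offloading layers to a GPU, may bring the footprint down enough to load, if your setup exposes those options.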