👋🏻I'm launching a free community for those serious about learning Data & AI soon, and you can be the first to get updates on this by subscribing here: www.datalumina.io/newsletter
Can we try fine-tuning Falcon in a future video?
Just what I was searching for. Thanks for this. Bravo!
Thanks!
Interesting! I was exploring the same thing just an hour ago on HF and ran into this video as I opened YouTube. Good content.
Thanks! 🙏🏻
Thanks, Dave
With some trials, it seems that this version of Falcon works well for short questions.
I am finding that in some cases the LLM spits out several repeated sentences, so the output may need some tweaking to clean it up.
Great alternative for certain uses.
Perfect timing, need to implement some LLM for a work project 🙌
👌🏻
In the special_tokens_map.json file of the HF repo there are some special tokens defined that differ a bit from what OpenAI and others use. Integrating those into a prompt template of the chains seemed to improve the results for me (also wrote an example in the HF comments). Three interesting ones in particular:
>>QUESTION<<, >>SUMMARY<<, >>ANSWER<<
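A minimal sketch of how those tokens might be woven into a prompt (the token names come from special_tokens_map.json in the tiiuae/falcon-7b-instruct repo; the exact template layout here is my assumption for illustration, not documented behaviour):

```python
# Sketch: wrapping a user question with Falcon-instruct's special tokens.
# >>QUESTION<< and >>ANSWER<< are listed in the repo's special_tokens_map.json;
# this particular layout is an assumption, not the video's code.
def falcon_prompt(question: str) -> str:
    """Prefix the question and leave the answer marker open for the model to complete."""
    return f">>QUESTION<<{question}\n>>ANSWER<<"

print(falcon_prompt("What is LangChain used for?"))
```

The same string can also serve as the `template` of a LangChain `PromptTemplate` if you are building chains.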
@17:30 interesting how my OpenAI output/summary is different from yours:
" This article explains how to use Flowwise AI, an open source visual UI Builder, to quickly build
large language models apps and conversational AI. It covers setting up Flowwise, connecting it to
data, and building a conversational AI, as well as how to embed the agent in a Python file and run
queries. It also shows how to use the agent to ask questions and get accurate results."
How do you run the Falcon model locally? Does providing a key run the model on a Hugging Face server?
I'm sure this is a basic question, but where is the inference running here? Is it local, or is it on Hugging Face's resources?
That is what I wanted to ask. I mean, I loaded this model into the Google Colab free tier and it took 15 GB of RAM and 14 GB of GPU memory. I can't imagine what hardware you would need to run something like this locally. Also, I can't imagine that Hugging Face would give you their resources just like that. His setup seems very strange.
Hey Dave, love the video! How did you create your website? It looks amazing, bro 👌
Hey man, love your videos. Two questions:
Q1. 11:50 - are you talking about embeddings?
Q2. From your experience/observation of the LLMs on Hugging Face, can you take a model like MosaicML MPT-7B, throw QLoRA into the mix, and train it to be like GPT-4 or even slightly better in terms of understanding/alignment? Could using Tree of Thoughts mitigate or solve a small percentage of that gap?
A1 - No, I don't use embeddings in this example. Just plain text sent to the APIs.
A2 - Not sure about that
Thanks Dave for another great video! Do you know if I can perhaps download Falcon locally and then use it privately - without the HF API?
Thanks Katarzyna! I am not sure about that.
Excellent detailed information
Really great video on this hot topic of open LLMs vs closed ones...
It will be really interesting to see how to self-host an open LLM so you don't have to go through any external inference API.
Thanks! Yes that is very interesting indeed!
Great video! I have a doubt: what are the requirements to run Falcon-7B Instruct locally? Can I use a CPU?
15 GB of GPU memory
@@fullcrum2089 Thank you so much! That's a Lot 😱
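Rough back-of-the-envelope arithmetic behind that figure (a sketch; real usage adds overhead for activations, the KV cache, and framework buffers):

```python
# A 7B-parameter model in fp16/bf16 takes 2 bytes per parameter, so the
# weights alone are ~13 GiB -- close to the ~15 GB the commenters observed
# once runtime overhead is added. Loading in 8-bit or 4-bit quantization
# roughly halves or quarters the weight memory.
params = 7_000_000_000
bytes_per_param_fp16 = 2
weights_gib = params * bytes_per_param_fp16 / 1024**3
print(f"~{weights_gib:.1f} GiB just for the weights")
```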
As always GREAT video!!!! Thanks!!!!
I feel like using a chunk size of 1000 with an overlap of 200 will improve the results.
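For context, this is the kind of splitting being suggested: fixed-size chunks with overlap, in the spirit of LangChain's text splitters with `chunk_size=1000, chunk_overlap=200` (the numbers are the commenter's suggestion, not a tested optimum; this standalone sketch is not LangChain's implementation):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks, where each chunk repeats the last
    `overlap` characters of the previous one so context isn't cut mid-thought."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 2500)
print(len(chunks), [len(c) for c in chunks])  # 3 chunks: 1000, 1000, 900 chars
```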
Do you have a video on pre-training an LLM?
Can I run this 7B model without a GPU? My system RAM is 32 GB.
Great video mate, thank you!
Imagine combining it with Obsidian, Notion, or other similar software.
But don't you get a free allowance of tokens that recharges every month with OpenAI, or not? So unless you go over that amount, you shouldn't get charged.
Nice tutorial Dave, but isn't it unfair to compare two models with different parameter counts? Falcon-7B has 7 billion, whereas text-davinci-003 has almost 175 billion parameters.
It's definitely unfair, but that's why it's interesting to see the performance of a much smaller, free to use model.
How are you running a .py file as a Jupyter notebook on the side like that? How are you sending each block of code to the interactive window next to it?
This setup looks neat.
Check out this video: ruclips.net/video/zulGMYg0v6U/видео.html
@@daveebbelaar Thanks!
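For anyone who can't watch the linked video: the setup is most likely VS Code's Python extension, which treats `# %%` comment markers in a plain `.py` file as notebook-style cells you can run one at a time in the Interactive Window. A generic illustration of the convention (an assumption, not the exact file from the video):

```python
# %%  <- VS Code renders everything until the next marker as one runnable cell
message = "first cell"
print(message)

# %%  <- "Run Cell" sends just this block to the interactive session
print(message.upper())  # state is shared between cells, like a notebook
```

The file still runs top to bottom as an ordinary script, so you get notebook-style iteration without `.ipynb` files.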
I am new to data science and want to learn more to become a pro. Please mentor me.
Subscribe and check out the other videos on my channel ;)
UNFAIR ADVANTAGE. What do you think: as a European citizen, would you have grounds to sue the EU, which hinders the progress offered by artificial intelligence and thereby causes enormous damage by leaving Europe lagging behind the whole world? Isn't the EU supposed to be a responsible institution?
I like how degraded our society is.