Leave your questions below! 😎
📚 My Free Skool Community: bit.ly/3uRIRB3
🤝 Work With Me: www.morningside.ai/
📈 My AI Agency Accelerator: bit.ly/3wxLubP
Great video! Thanks for making this wonderful content for free. AI Consulting is about to get big 😎
Thanks for watching mate
This is exactly what I was talking about in our call. A hybrid model.
Awesome video! Thanks Liam!
Great video man, cheers from rainy Auckland!
Awesome video @LiamOttley mate!
i learn so much with each video you put out.
love the Friday sessions on Discord. can't wait to see what's next 💪💪👏👏
Awesome work mate!
Thanks for the support mate 🤟🏽
I am currently writing the scraping function of an agent that leads to a very similar project as yours. Thanks. I am learning programming as I go. I used to have your energy. I wish I had it now in this exciting time to be alive. Thanks.
Will OpenAI still charge token fees if the answer is found within the knowledge base? Thanks
This becomes very interesting when Neuralink provides access to everyone's memories.
Oh, you mean in August?
@@benjaminfauchald2990 Could be, I'm not aware of a public schedule in that sense. I do know that secret services have been able to do that for over 15 years.
Dude, you are amazing. Congratulations!
Thank you!! Glad I could help
Thank you for making these amazing videos. You are a legend in my world.
Great video, thank you! How would you fine-tune the function if you need to leverage different data sources to answer one question? For example, my prompt could require combining elements from ChatGPT + Google Search (weather) + a website (how to contact customer support) + a vector DB. "I need advice on dressing for a wedding in New York next weekend" > Chat (fashion style in NYC) + Google (weather in NYC next week) + Macy's website ("to contact support...") + vector DB (containing different suit and shirt products from Macy's). Thanks for your guidance!
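A multi-source answer like the one described above usually comes down to a small router that dispatches parts of the question to different handlers. Here is a minimal, hypothetical sketch; every handler is a stand-in for a real API call (ChatGPT, Google Search, a site scraper, a vector DB), and the keyword matching stands in for a real intent classifier:

```python
# Minimal sketch of routing one question across several sources.
# All handler functions are hypothetical stubs, not real API calls.

def ask_chat(query):
    return f"[chat] style advice for: {query}"

def ask_google(query):
    return f"[google] weather result for: {query}"

def ask_website(query):
    return f"[site] support info for: {query}"

def ask_vector_db(query):
    return f"[vectordb] matching products for: {query}"

# Map simple intent keywords to the handler that should answer them.
ROUTES = {
    "weather": ask_google,
    "support": ask_website,
    "product": ask_vector_db,
}

def route(query):
    """Collect an answer from every source whose keyword appears in the
    query, falling back to the chat model when nothing else matches."""
    answers = [fn(query) for key, fn in ROUTES.items() if key in query.lower()]
    return answers or [ask_chat(query)]
```

In practice each handler would call its real backend, and an LLM or classifier would decide the routing instead of keyword matching; the final step would merge the partial answers into one reply.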
Keep going
Could you make a complete beginner-friendly tutorial on how to build it step by step, please? That video could be pure gold.
This is great and extremely powerful! Was wondering if there would be any way to print out the retrieved context "snippets" along with the LLM response?
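One common pattern for the question above is to have the retrieval step hand back its snippets alongside the generated answer. A toy sketch follows; the in-memory store and `fake_llm` are hypothetical stand-ins, and in LangChain the equivalent knob on retrieval chains is typically `return_source_documents=True`:

```python
# Sketch: return the retrieved context "snippets" together with the answer.

SNIPPETS = [
    "Pinecone stores embeddings.",
    "LangChain routes queries to tools.",
]

def retrieve(query):
    # Toy retrieval: keep snippets that share a word with the query.
    words = set(query.lower().split())
    return [s for s in SNIPPETS if words & set(s.lower().split())]

def fake_llm(query, context):
    # Stand-in for the real LLM call that would consume the context.
    return f"Answer to '{query}' using {len(context)} snippet(s)."

def answer_with_sources(query):
    context = retrieve(query)
    return {"answer": fake_llm(query, context), "sources": context}
```

Calling `answer_with_sources(...)` yields both the reply and the snippets, so the caller can print the sources under the answer.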
Hello Liam, I'm asking here since this is your latest video. I've seen your videos on LangChain, and I'm curious how to optimize inference when using LLMs in LangChain with TensorRT or ONNX Runtime. In industry it's obviously important to save both time and compute cost. With TensorRT and open-source models we have techniques like quantization and a few more. Is there any way to do this in LangChain?
What about privacy? No connection to OpenAI servers for multiple files/data sources?
I am a full-stack developer. What is the best library or project to train AI on private data? I watched the PrivateGPT video, but honestly, PGPT is very slow. 😅
Very interested in engaging you to build something very similar for me. What price range are we looking at? BTW, great stuff! Thanks
Hi Dennis, fill out the form here and we'd be happy to help: morningside.ai
How do you set the pinecone endpoint?
thanks dude
You are an amazing person, sharing your work with the world. Always stay blessed, brother. Is it possible to integrate it with a Telegram bot?
Hey Liam, I've got a question: which would be better, a hybrid bot or the chatbot with hundreds of files like in the video you mentioned?
Depends on if you have distinct data types/sources!
I can't find the link for this, though?
A question for those more knowledgeable than me in this respect: would it be more effective to train the model on this data rather than feeding it the PDF, or is the effectiveness the same?
Training is more difficult and expensive because it typically requires buying access to $200,000 computers for a few hours at LEAST.
It's the difference between making someone read a book about birds to learn and internalize all that knowledge, versus just giving them a bird guide to keep in their backpack that they can use whenever necessary. That's much less hassle than spending the time learning and internalizing. This way you can carry around tons of guides easily, and you don't need to train on all of them; you just keep them around as reference. Easy peasy. Sorry for the long, winding explanation 😂
@Seth Taddiken I appreciate it. I think I heard somewhere that there's now a cheaper way to train models on a small amount of data than there used to be. That's why I was wondering whether it would make the quality of the results better, with fewer hallucinations, or whether it wouldn't make a noticeable difference.
@@coolstoryai You can use Low-Rank Adaptation (LoRA) to train just a subset of the network weights and leave the original network intact. A LoRA is like a business suit for the model: it can put on a cowboy suit and pretend to be a cowboy, but I don't think this method is great for injecting knowledge; my guess is it's just for biasing behavior. Parsing and organizing documents/books/etc. into vectorized databases and doing semantic searches lets the AI reference the actual source text you want it to talk about, which makes sense to me for a lot of applications.
@Seth Taddiken I see. Thank you for explaining that and breaking it down
@@coolstoryai of course. I’ve heard questions like this a lot, figured I’d practice answering it 😝
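The semantic-search approach described in this thread can be sketched in a few lines. Here the "embeddings" are toy bag-of-words counts standing in for real model embeddings, but the ranking step (cosine similarity against every stored vector) is the same idea a vector DB performs at scale:

```python
# Toy semantic search over a "vector DB".
import math

DOCS = ["birds can fly", "fish can swim", "birds eat seeds"]
VOCAB = sorted({w for d in DOCS for w in d.split()})

def embed(text):
    # Fake embedding: word counts over the corpus vocabulary.
    words = text.split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

With real embeddings the vocabulary trick disappears: you embed each document once with a model, store the vectors, and rank new queries the same way.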
I've been seeing a lot of opportunities here for multiple businesses and benefits. Is that right?
Good job
I gotta learn to get this done for a discord bot with my own data. XD
Any advice on how to integrate PrivateGPT?
The same process can be used to route to local models; just change the handler functions to use your local models.
@@LiamOttley thanks a lot man! You are great!
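The handler swap mentioned above might look like this. Both backends are hypothetical stubs; the point is that the router logic stays the same and only the function you plug in changes:

```python
# Sketch of the "swap the handlers" idea: same router, different backend.

def openai_handler(prompt):
    # Stand-in for a hosted API call.
    return f"[openai] {prompt}"

def local_handler(prompt):
    # Stand-in for a locally hosted model (e.g. served via llama.cpp).
    return f"[local] {prompt}"

def make_router(handler):
    """Build a router bound to whichever backend is passed in."""
    def router(prompt):
        return handler(prompt)
    return router

cloud_bot = make_router(openai_handler)
local_bot = make_router(local_handler)
```

Switching the whole bot to local inference is then one line: pass `local_handler` instead of `openai_handler`.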
How about the size of the upload, can it go up to 5 GB each, or more? Let me know if you're aware of that, thanks
Pinecone is probably your best bet for that much data. Check out my HormoziGPT video.
Hi Liam, I'll get in touch with you through LinkedIn.
Great!!
Similar functionality also exists in LlamaIndex.
@LiamOttley Hey, could you do this one with specific LLMs instead? Like if you ask a coding question you get Replit's Hugging Face LLM, if you want a longer answer you call on an LLM built for that, and if you need to ingest data you call on one that can handle it? BTW, you can schedule a call with me for ideas ;);)
Please break it down: how do you deal with the price of every request, especially with big data and many users using the same chat from one website?
Nice
🙏🏼
where is the code ?
This is one of the very basic videos. Add some more complex stuff, like connecting a vector store and a MySQL database.
Time keeps passing, but nothing beneficial for my life has come my way yet; only some hope remains for me in this business.