I never got the point of setting up those LLM monitors before, but the step-by-step guide at the end showing how you use one & how it led to real cost reduction is gold (70% is crazy!). Will try it out, thank you!
This is the best AI content I have seen all week. Thank you for this.
Bro that's crazy, I literally just wrote down notes today on different approaches to reducing costs. I was about to test them out and then saw this video in my inbox. Damn, very timely.
Hi Jason!
Another alternative for measuring costs in your script is simply to use the chat completion information provided by the OpenAI API.
Every time you call the API, it returns the total tokens in the "usage" dictionary of the response JSON. That way, you can monitor & control your usage as well.
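For anyone who wants to turn those token counts into a running cost estimate, a minimal sketch (the per-1k prices below are illustrative placeholders, not current OpenAI pricing):

```python
# Read token counts from the "usage" dictionary of a chat completion
# response and turn them into a dollar estimate.
# NOTE: the prices below are illustrative placeholders, not real pricing.

PRICES_PER_1K = {"gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015}}

def cost_from_usage(model: str, usage: dict) -> float:
    """Estimate the cost of one API call from its 'usage' dictionary."""
    price = PRICES_PER_1K[model]
    return (usage["prompt_tokens"] / 1000 * price["prompt"]
            + usage["completion_tokens"] / 1000 * price["completion"])

# Shape of the "usage" field in the chat completion response JSON:
usage = {"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200}
print(f"${cost_from_usage('gpt-3.5-turbo', usage):.6f}")
```

Logging this per call (or per user) gives you the same visibility as a dedicated monitor, with no extra dependency.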
Exactly what I do!
Thanks!
Didn't realise the cost gap between GPT-4 & open-source models like Mixtral is so big! 200x more expensive really changes how I think about building LLM products.
Thanks for sharing! Will definitely try to optimise my LLM apps!
Thanks!
A step by step build of an agent architecture would be invaluable! Thank you for the video.
this one
🎯 Key Takeaways for quick navigation:
19:51 💡 *Analyze token consumption for cost optimization.*
20:19 💻 *Install LangSmith and set it up.*
21:01 🛠️ *Setup environment variables for connection.*
21:43 📊 *Implement tracking methods for insights.*
22:12 📚 *Utilize LangChain for research projects.*
23:06 📝 *Log project activities for monitoring.*
24:03 💰 *Analyze token costs for optimization.*
24:31 📉 *Reduce GPT-4 usage for cost savings.*
25:12 📄 *Implement content summary for efficiency.*
26:09 ✂️ *Optimize script tool for better results.*
Made with HARPA AI
We were planning to build AI-assistant-style apps but always pulled back due to the cost they incur. This is a fabulous video that has given us a new direction. Thanks a lot... looking forward to seeing other videos.
Superb content Jason, I will highly recommend your videos to everyone getting their hands dirty with LLMs. I'm gonna try some of these myself. It's a shame I didn't build it before, because something like the AI router occurred to me, but I didn't have the patience to implement it.
Thanks very much for this video. I have been having problems with the cost of my agents. I will follow the tips and clues that you gave. Thanks again.
A step-by-step build of an agent architecture would be very helpful. I am looking forward to it.
Your content is just superb as always Jason!
Love your content man. You have helped me really expand my knowledge and push my boundaries
This channel is the best school out there today.
Thanks Jason
See the groundswell paper dated Jan 29th 2024: "Towards Optimizing the Costs of LLM Usage." These Indian authors are gonna kick some serious butt regarding costs. I see the FrugalGPT paper in your video too. Thank you for sharing real-world scenarios from your personal experience. Edit: This video is a trove on frugal LLM building. Awesome job!
Thank you!
Excellent. Most of his videos are but this one was especially useful to me.
A great dive into the cost of AI models, as it is hard to find related content. Can you do a video about how much OpenAI is roughly spending on computation costs, and how this constraint will hinder the adoption of these models in the enterprise space? Great job man 👍
Excellent video great to hear real world experience from a real Dev
This is a great video! Exactly what I was looking for!
Great, Jason. You have helped me understand a lot.
Excellent video. Subbed and hope you keep the content coming!
Excellent video! I just ran into issues with memory for conversations and I really like the strategies you've offered in this. Thank you.
I had this idea for LLM routing a while back and wondered why nobody had done it. I figured there was some sort of information I didn't have that was stopping it.
Thanks Jason, your content is always on point and very insightful. Keep it up man!
I am a newbie when it comes to building AI-powered apps.
Although I don't fully understand everything you say, because I am still learning the basics, all I can say is: thank you for sharing this valuable content with us.
Yes, please do a video on multi agent methods
this was an intense, highly informative lecture. Thanks Jason, appreciate your work!
This video came at the perfect time. Thank you
ikr, grateful to Jason
Thanks for sharing your insights from your work. It's very helpful!
I'm taking all of this for my startup. This is the way and creates a moat for you assuming you hold on to the weights afterwards
When building multi agent orchestration systems, what is your preferred stack? Do you use langchain, autogen or just native APIs?
You can also use natural language processing lemmatization to convert words into their lemma, or root word, to reduce the content "weight", i.e. the token count. You don't need the extra word garbage like suffixes. LLMs do a good job of extracting meaning from lemmatized content. It's like you are cutting through the syntactic sugar of the English language, getting to the root meaning, and not wasting the LLM's time.
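To make the idea concrete, here is a toy sketch of shrinking text before sending it to a model. A real pipeline would use a proper lemmatizer (e.g. spaCy or NLTK's WordNetLemmatizer); this crude suffix-stripping stand-in only illustrates the size reduction:

```python
# Toy suffix stripper: remove common English suffixes to cut the number
# of characters (and therefore tokens) sent to the model. NOT a real
# lemmatizer -- use spaCy or NLTK in production.

SUFFIXES = ("ing", "edly", "ed", "ly", "es", "s")

def crude_lemma(word: str) -> str:
    """Strip the first matching suffix, keeping at least a 3-letter stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def shrink(text: str) -> str:
    return " ".join(crude_lemma(w) for w in text.split())

original = "running quickly towards the meetings and talking loudly"
print(shrink(original))  # noticeably shorter than the original
```

Real lemmatization preserves meaning much better than this rule-based toy ("running" should become "run", not "runn"), but the cost mechanism is the same: fewer characters in, fewer tokens billed.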
Thank you Jason for your hard work to put this together.
This is the biggest flex ever! 💪 I can only dream of being as cool an AI engineer as you. I thought building a digital agent with automatic voice that can do RAG was cool.
There are levels to this game, and Jason is in a whole different world. Thanks for posting these videos. They're educational, funny and inspirational for me.
Thanks a lot; like James Briggs and a few others, your content is outstandingly great. This is really important information that I need at work 🙏☺
Yes please for a video deepdiving into agent architecture for autogen
Sorry to say this, but almost everything you mention here comes down to bad planning and rushing things out without thinking about the after-effects.
It's not just in AI. It's always been like that whenever you chase hype.
Unless you're backed by big companies or investors, planning way ahead for costs will always be a must.
27 minutes of solid gold! Thanks Jason
Great video, thanks!
Why not store agent conversation memory as embeddings and retrieve only the parts relevant (by cosine similarity) to the current user query as context?
(Like a RAG for conversation memory)
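A minimal sketch of that RAG-for-memory idea, with toy bag-of-words vectors standing in for a real embedding model (e.g. an OpenAI text-embedding endpoint): embed each past turn, then send only the top-k most similar turns as context.

```python
# Retrieve only the conversation turns most similar to the new query,
# instead of resending the full history on every call.
# Bag-of-words Counters stand in for real embedding vectors here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def relevant_memory(history: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k past turns most similar to the current query."""
    q = embed(query)
    return sorted(history, key=lambda turn: cosine(embed(turn), q),
                  reverse=True)[:k]

history = [
    "user asked about refund policy for annual plans",
    "assistant explained the onboarding steps",
    "user reported a billing error on the annual plan invoice",
]
print(relevant_memory(history, "why was my annual plan charged twice?"))
```

With a real embedding model and a vector store, this is exactly the "RAG for conversation memory" pattern: context tokens grow with k, not with the length of the conversation.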
This channel is so underrated
I think what would also work in the agents scenario: in real life, there is a moderator for big disagreements between employees, which would be their team lead. So if a disagreement runs across multiple replies, the TL needs to step in, lay down the rules and code of conduct for the work, and make a final decision on the disagreement.
We love you Jason. Thanks a lot!
I am interested in learning about the architecture. By the way, amazing videos...
Great video! Could you please make a video about putting an LLM into production, covering parallelism, memory and GPU usage, load balancing, and effective software architecture? How do you scale up a local LLM to be accessible worldwide like GPT, with memory and resource optimization in mind? Thanks
Hey Jason. You said at around minute 9 that we should use a model like GPT-4 to get data and then use that to fine-tune, but how much data do we need so that our fine-tuned Mistral model performs as well as GPT-4?
Excellent work Jason
Excellent tutorial. Thank you!
Great vid! Keep up the good work
Really love your videos. Are there any packages or libraries for the 7 methods you discussed?
Many thanks for your useful video.
Have you evaluated Nemo from Nvidia ?
Would love a deeper dive into Ecoassistant. In a couple of weeks, we're about to look at some optimization strategies! Thank you!
"Comment if you want a video about this" your videos are so good I will click anyways ❤️
14:56 - seems like this might not work well for needle-in-a-haystack queries, right? Because if you want to ask "what departments were present at this session?", the bigger model does not have the answer in its context. You'd need some kind of vector similarity check first to assess whether the answer might even exist in the context given to the bigger model, and if not, give it the whole thing? Or at least do some RAG-style lookup and fetch? I'm not so sure how well RAG can do needle-in-a-haystack searching though. It seems highly dependent on your embedding model, and OpenAI doesn't offer GPT-4's embedding space, right?
BRO, this is gold!!!
🎉 Brilliant mate. I'm a fiend for compressing costs to the maximum, but I found that during cost compression some models (e.g. Mistral Tiny) are not able to make proper custom tool calls and cannot extract the JSON result from the tool call. As soon as I switch to an OpenAI model fine-tuned to recognise JSON schemas, tool calls work perfectly (in Flowise). Is that why you persist in using OpenAI models in your calls, as opposed to Mistral or Llama inference, so you get the right tool calling?
This is a very good video. Appreciate it.
Excellent video! Saved me lots of time trying to figure that out. Keep up the great work!
At 8:05 you made an obvious mistake with the maths; you probably meant the cheapest model, not Mistral, since it would be 50x cheaper, not 214x cheaper.
Ahh, I highlighted the wrong row; it should be Mistral 7B. Thanks for spotting this, mate!
@@AIJasonZ Hey, this video was great by the way! I am learning to make videos to showcase some of my experiments, and I hope I can produce the same quality as you!
Wow, super practical tips.
such a good video!
nice video, thanks!
man, super useful video, thanks !
We love your videos 🎉❤
I've always wanted to do this but I'm too dumb and lazy lmao, good to see someone like you doing it.
thanks for your video! :)
Fine tuning for token reduction is a key technique I’ve used
Hi Jason,
thank you for the video, impressive work!
While building the app, what do you think of using an if/else chain that reroutes to a particular LLM?
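A minimal sketch of such an if/else router, with hypothetical model names as placeholders: cheap heuristics decide the tier, and only hard requests reach the expensive model.

```python
# Rule-based LLM router: classify the request with cheap heuristics
# first, and only fall back to an expensive model for hard cases.
# The model names are placeholders, not real model identifiers.

def pick_model(prompt: str) -> str:
    words = prompt.split()
    if len(words) < 20 and "?" in prompt:
        return "small-local-model"   # short factual question: cheapest tier
    if any(kw in prompt.lower() for kw in ("summarize", "tl;dr")):
        return "mid-tier-model"      # summarization tolerates weaker models
    return "frontier-model"          # everything else goes to the big model

print(pick_model("What is the capital of France?"))
print(pick_model("Please summarize this meeting transcript: ..."))
```

Compared to a learned router, an if/else chain is trivially debuggable and free to run; the trade-off is that you have to maintain the rules yourself as traffic patterns change.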
Really helpful content
I've done that before @18:46. It works pretty well, especially when you combine it with SPR (popularized by David Shapiro).
Thank you so much for sharing your knowledge with us; it's extremely useful and inspiring (at least for me, as a dev who is catching up on AI). By the way, what do you think of MemGPT?
Thanks! MemGPT is a super interesting architecture. I haven't really run it in production though; do you know of any applications built with MemGPT?
Yeah, I think there is a lot of potential. I'm not aware of any commercial application using it though, but I'm going to test it in some projects @@AIJasonZ
For the cascade method, how will you measure the score for each new question in production?
So you inadvertently built a massive email warm-up. At least you will not be flagged as spam for a long time, haha.
PS: It would be great to see a sales agent video soon ;)
I don't know if this is a stupid question, but why doesn't ChatGPT already implement these features? Or do they already?
What other cost-friendly services have you found for deployment? You have to install VMs, containers and more.
For fine-tuning a small model from a large one, what about OpenAI's terms of service? Have they changed to allow it?
Thank you very much for the video
Would appreciate a course or even a comment on what knowledge you need and what concepts you should know to be an AI & ML Engineer
Superb 🏆
Since GPT development is rapid, I think building a fine-tuned model is risky because it is so time-consuming.
The cost won't be a big deal, as OpenAI constantly develops new models and reduces the cost of previous ones.
If you use the big ones like Azure, Bedrock, etc., they are so expensive to deploy because of the compute.
Is Portkey AI an example of an open-source LLM router? (I have not used it, but it seems to offer the capability you mentioned as a limitation of Neutrino AI.)
Prompt Engineer and LLM developer here.
GPT-4 32k is not the most powerful model; it is outclassed by gpt-4-1106-preview and now gpt-4-0125-preview, which is even better.
Not only is GPT-4 32k worse, it is also six times more expensive! ($0.06/1k tokens for GPT-4 32k vs. only $0.01/1k tokens for gpt-4-0125-preview)
Thank you Jason!
I 👍 second the motion for your EcoAssistant 📹 video!!
You really should not use AIs for multiplication; use a calculator. Find Tool AI is an important AI to save money. Button AI is another good one.
Isn't it against OpenAI's ToS to use the output as training data?
But what if I need an AI that needs to be trained with one data snapshot?
Reminds me of that scene in Silicon Valley where AI Dinesh speaks to AI Gilfoyle
ecoassistant video please!
You should change your name to jAIson
Hahah love it
How come you don't use state-of-the-art open-source LLMs? They should be strong enough, right?
The current issue with them is tool calling. Maybe Code Llama 70B could do it now.
Companies put in "fair usage" clauses to cap or throttle users. Ask your smart "sales agent" about that idea.
I managed to build the AI GF clone for free now with a local LLM.
Very interesting
It's not true that this is a 'new type of cost'. Traditional software companies have always needed to care about and watch out for API costs. Anyone who used GCloud or AWS has racked up unexpectedly high API costs one way or another. You can also set spending limits in your API settings on the OpenAI platform.
Did you get the AI girlfriend to work? You can now create an AI sales agent for your website to talk to. Hope to hear from you.
It's a real-life Silicon Valley scenario where two AIs start talking to each other, lol.
This is the core technique behind the Rabbit R1.