I never got the point of setting up those LLM monitors before, but the step-by-step guide at the end showing how you use it & how it led to real cost reduction is gold (70% is crazy!); Will try it out, thank you!
🎯 Key Takeaways for quick navigation:
19:51 💡 *Analyze token consumption for cost optimization.*
20:19 💻 *Install LangSmith and set it up.*
21:01 🛠️ *Set up environment variables for the connection.*
21:43 📊 *Implement tracking methods for insights.*
22:12 📚 *Utilize LangChain for research projects.*
23:06 📝 *Log project activities for monitoring.*
24:03 💰 *Analyze token costs for optimization.*
24:31 📉 *Reduce GPT-4 usage for cost savings.*
25:12 📄 *Implement content summary for efficiency.*
26:09 ✂️ *Optimize script tool for better results.*
Made with HARPA AI
This is the best AI content I have seen all week. Thank you for this.
Bro that's crazyyy, I literally just wrote down notes on reducing costs in different approaches today. I was about to test them out and saw this video in my inbox. damn very on time.
Hi Jason!
Another alternative to measure costs in your script is to simply use the chat completion information provided by the OpenAI API.
Every time you call the API, it returns the total tokens in the response JSON in the "usage" dictionary. That way, you can monitor & control your usage as well.
Exactly what I do!
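A minimal sketch of that approach. The per-1K-token prices below are illustrative placeholders, not current OpenAI pricing; check the pricing page before relying on the numbers:

```python
# Track spend from the "usage" dict that every Chat Completions response
# includes. Prices are placeholder values for illustration only.

PRICES_PER_1K = {
    "gpt-4": (0.03, 0.06),              # (prompt, completion) USD per 1K tokens
    "gpt-3.5-turbo": (0.0005, 0.0015),
}

def cost_of_call(model: str, usage: dict) -> float:
    """usage is the 'usage' dict from a chat completion response JSON."""
    prompt_price, completion_price = PRICES_PER_1K[model]
    return (usage["prompt_tokens"] / 1000 * prompt_price
            + usage["completion_tokens"] / 1000 * completion_price)

# Shaped like the API's response:
usage = {"prompt_tokens": 1200, "completion_tokens": 300, "total_tokens": 1500}
print(f"${cost_of_call('gpt-4', usage):.4f}")  # $0.0540
```

Summing these per call into a running total gives you a poor man's cost monitor with no extra tooling.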
Didn't realise the cost gap between GPT-4 & open-source models like Mixtral is so big! 200x more expensive really changes how I think about building LLM products;
Thanks for sharing! Will definitely try to optimise my LLM apps!
Your content is just superb as always Jason!
We were planning to build AI-assistant-type apps but always pulled back due to the cost they incur. This is a fabulous video that has given us a new direction to go ahead. Thanks a lot... looking forward to seeing other videos
Superb content Jason, I will highly recommend your videos to everyone getting their hands dirty with LLMs. I am gonna try some of these myself. It's a shame I didn't build it before, because something like the AI router occurred to me but I did not have the patience to implement it.
Thanks very much for this video. I have been having problems with the cost of my agents. I will follow the tips and clues that you gave. Thanks again.
A step by step build of an agent architecture would be invaluable! Thank you for the video.
this one
Love your content man. You have helped me really expand my knowledge and push my boundaries
Excellent video great to hear real world experience from a real Dev
Thanks!
Excellent. Most of his videos are but this one was especially useful to me.
See the groundswell paper dated Jan 29th 2024: "Towards Optimizing the Costs of LLM Usage." These Indian authors are gonna kick some serious butt regarding costs. I see the FrugalGPT paper in your video too. Thank you for offering real-world case scenarios from your personal experience. Edit: This video is a trove on frugal LLM building. Awesome job!
Thank you!
I've done that before @18:46. It works pretty well esp when you combine with SPR (popularized by David Shapiro).
I am a newbie when it comes to building AI-powered apps.
Although I don't fully understand all you say because I am still learning the basics, all I can say is thank you for sharing this valuable content with us
This channel is the best school in existence today.
Thanks Jason
Yes, please do a video on multi agent methods
This video came at the perfect time. Thank you
ikr, grateful to Jason
Excellent video. Subbed and hope you keep the content coming!
Great Jason, you have helped me understand a lot
This is a great video! Exactly what I was looking for!
When building multi agent orchestration systems, what is your preferred stack? Do you use langchain, autogen or just native APIs?
You can also use natural-language-processing lemmatization to convert words into their lemma, or root word, to reduce the content "weight" or token count. You don't need the extra word garbage like suffixes. LLMs do a good job of extracting meaning from lemmatized content. It's like you are cutting through the syntactic sugar of the English language, getting to the root meaning, and not wasting the LLM's time
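A toy sketch of the idea above. This crude suffix stripper only hints at it; a real pipeline would use a proper lemmatizer such as spaCy or NLTK's WordNetLemmatizer, which handle irregular forms correctly:

```python
# Toy illustration of shrinking token count by reducing words to root forms.
# Crude suffix stripping stands in for real lemmatization here.

SUFFIXES = ("ing", "ed", "es", "ly", "s")

def crude_lemma(word: str) -> str:
    for suf in SUFFIXES:
        # only strip when a plausible root (3+ chars) remains
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def compress(text: str) -> str:
    return " ".join(crude_lemma(w) for w in text.split())

print(compress("the agents were repeatedly calling expensive models"))
```

Whether the savings survive tokenization depends on the tokenizer, so it's worth measuring token counts before and after rather than assuming a win.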
This is the biggest flex ever! 💪 I can only dream of being as cool an AI engineer as you. I thought building a digital agent with automatic voice that can do RAG was cool.
There are levels to this game and Jason is on a whole different world. Thanks for posting these videos. It's educational, funny and inspirational for me.
Excellent video! I just ran into issues with memory for conversations and I really like the strategies you've offered in this. Thank you.
Thanks for sharing your insights from your work. It's very helpful!
I had this idea for LLM routing a while back and wondered why nobody had done it. I figured there was some sort of information I didn't have that was stopping it.
A great dive into the cost of AI models, as it is hard to find related content. Can you do a video about how much OpenAI is roughly spending on computation costs, and also how this constraint will hinder the adoption of these models in the enterprise space? Great job man 👍
Thanks!
A step-by-step build of an agent architecture would be very helpful. I am looking forward to it.
I'm taking all of this for my startup. This is the way and creates a moat for you assuming you hold on to the weights afterwards
27 minutes of solid gold! Thanks Jason
Thanks Jason, your content is always on point and very insightful. Keep it up man!
Thank you Jason for your hard work to put this together.
this was an intense, highly informative lecture. Thanks Jason, appreciate your work!
Yes please for a video deepdiving into agent architecture for autogen
14:56 - seems like this might not work well for needle-in-haystack approaches, right? Because if you want to ask "what departments were present at this session?", the bigger model does not have an answer to that in its context. You'd need some kind of vector-similarity check first to assess whether the answer might even exist in the context given to the bigger model? And if not, give the whole thing? Or at least do some RAG-style lookup and fetch? I'm not so sure how well RAG can do needle-in-haystack searching though. Seems highly dependent on your embedding model, and OpenAI doesn't offer an option to use GPT-4's embedding space, right?
I think what would also work in the agents scenario: in real life there is a moderator for big disagreements between employees, which would be their team lead. So if a disagreement runs across multiple replies, the TL needs to step in, lay down the rules and code of conduct for work, and make a final decision on the disagreement.
Sorry to say this, but almost everything you mention here comes down to bad planning and rushing things out without thinking of the after-effects.
It's not just in AI. It's always been like that, forever, if you tried to follow hype.
Unless you are backed by big companies or investors, planning way ahead on costs is always a must.
We love you Jason. Thanks a lot!
Great vid! Keep up the good work
Excellent!👍 Applying that Assistant Hierarchy to your Sales Agent would be a good video.
at 8:05 you made an obvious mistake with the maths, you probably meant the cheapest model, not Mistral, since it would be 50x cheaper, not 214x cheaper
Ahh I highlighted the wrong row, it should be Mistral 7B, thanks for spotting this mate!
@@AIJasonZ Hey this video was great by the way! I am learning to make videos to showcase some of my experiments and I am hoping I can produce as much quality as you!
This channel is so underrated
Thanks a lot; like James Briggs and some others, your content is outstandingly great. This is really important information that I need at work 🙏☺
Excellent work Jason
Great video, thanks!
Why not store agent conversation memory in embeddings and retrieve only the messages relevant (by cosine similarity) to the current user query as context?
(Like a RAG for conversation memory)
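A rough sketch of that memory-retrieval idea. The vectors here are tiny hand-made stand-ins; a real app would call an embedding model for each message:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_memories(query_vec, memory_vecs, memories, k=2):
    """Return the k stored messages most similar to the query vector."""
    scored = sorted(zip(memory_vecs, memories),
                    key=lambda mv: cosine(query_vec, mv[0]), reverse=True)
    return [text for _, text in scored[:k]]

memories = ["user likes Python", "order #123 shipped", "user asked about refunds"]
vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.2]]   # toy embeddings
print(top_k_memories([1.0, 0.1], vecs, memories, k=2))
```

Only the retrieved messages go into the prompt, so the context stays flat-sized no matter how long the conversation grows.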
Hey Jason. You said at around minute 9 that we should use a model like GPT-4 to get data and then use that to fine-tune, but how much data do we need so that our fine-tuned Mistral model will perform as well as GPT-4?
"Comment if you want a video about this" your videos are so good I will click anyways ❤️
Excellent tutorial. Thank you!
I am interested in learning about architecture. By the way, amazing videos...
This is a very good video. Appreciate it.
Excellent video! Saved me lots of time trying to figure that out. Keep up the great work!
man, super useful video, thanks !
Would appreciate a course or even a comment on what knowledge you need and what concepts you should know to be an AI & ML Engineer
Fine tuning for token reduction is a key technique I’ve used
Prompt engineer and LLM developer here.
GPT-4 32k is not the most powerful model; it is outclassed by GPT-4-preview-1106 and now GPT-4-preview-0125, which is even better.
Not only is GPT-4-32k worse, it is also six times more expensive! ($0.06/1k tokens for GPT-4 32k vs only $0.01/1k tokens for gpt-4-preview-0125)
really love your videos, are there any packages or libraries to use these 7 methods you discussed
Since GPT development is rapid, I think making a fine-tuned model is risky because it is time-consuming.
The cost won't be a big deal, as OpenAI constantly develops new models and reduces the cost of previous ones.
For the cascade method, how will you measure the score for each new question while in production?
For fine-tuning a small model on the output of a large one, what about OpenAI's terms of service? Have they changed to allow it?
We love your videos 🎉❤
Wow, super practical tips.
🎉 Brilliant mate. I'm a fiend for compressing costs to the maximum, but I found that during cost compression some models (e.g. Mistral Tiny) are not able to make proper custom tool calls and are unable to extract the JSON response result from the tool call. As soon as a switch is made to an OpenAI model fine-tuned to recognise JSON schemas, tool calls work perfectly (in Flowise). Is that why you persist in using OpenAI models in your calls, as opposed to using Mistral or Llama inference? So you can achieve the right tool calling?
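One way to keep the cheap model in the loop despite that failure mode is to validate its tool-call JSON and escalate only when validation fails. A sketch, where `call_cheap` and `call_strong` are hypothetical stand-ins for real model API calls:

```python
import json

def valid_tool_call(raw: str, required=("name", "arguments")) -> bool:
    """Accept the output only if it parses as a JSON object with the expected keys."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and all(k in obj for k in required)

def tool_call_with_fallback(prompt, call_cheap, call_strong):
    raw = call_cheap(prompt)
    if valid_tool_call(raw):
        return json.loads(raw)                # cheap model succeeded
    return json.loads(call_strong(prompt))    # escalate only on failure
```

If the cheap model succeeds most of the time, the expensive model only pays for the hard residue of requests.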
Great video! Could you please make a video about putting an LLM into production, with concerns like parallelism, memory and GPU usage, load balancing, and effective software architecture? How to scale up a local LLM to be accessible worldwide like GPT, with optimizing memory and resources in mind? Thanks
What other services have you found for deployment that are cost-friendly? You have to install VMs, containers and more
Many thanks for your useful video.
Have you evaluated Nemo from Nvidia ?
If you use the big ones like Azure, Bedrock, etc., they are so expensive to deploy with the compute
thanks for your video! :)
Thank you very much for the video
I don’t know if this is a stupid question but why doesn’t ChatGPT already implement these features for themselves? Or do they already do these?
Hi Jason,
thank you for the video, impressive work!
While building the app, what do you think of using an if/else chain that reroutes to a particular LLM?
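An if/else chain is a perfectly reasonable first router before reaching for a dedicated routing service. A minimal sketch, where both the rules and the model names are placeholders to tune for your own app:

```python
def pick_model(prompt: str) -> str:
    # Placeholder heuristics: adjust thresholds, keywords and model
    # names to match your workload and providers.
    if len(prompt) > 4000:
        return "gpt-4-turbo"        # long input: need the big context window
    if any(w in prompt.lower() for w in ("summarize", "tl;dr")):
        return "mistral-7b"         # easy task: cheapest model
    if "```" in prompt or "def " in prompt:
        return "gpt-4"              # code: strongest model
    return "gpt-3.5-turbo"          # default middle tier

print(pick_model("Summarize this meeting transcript"))  # mistral-7b
```

The upside over a learned router is that it is free, deterministic and easy to debug; the downside is that the rules need manual upkeep as your traffic changes.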
Isn't it against OpenAI's ToS to use the output as training data?
Is Portkey AI an example of an open-source LLM router? (I have not used it, but it seems to allow the capability you spoke about regarding the limitation of Neutrino AI.)
BRO, this is gold!!!
nice video, thanks!
So you inadvertently built a massive email warm-up. At least you will not be flagged as spam for a long time ahah.
PS: It would be great to see a sales agent video soon ;)
I've always wanted to do this but I'm too dumb and lazy lmao, good to see someone like you is doing it
Really helpful content
How come you don't use state-of-the-art open-source LLM models? They should be strong enough, right?
The current issue with them is calling tools. Maybe Code Llama 70B could do it now.
Superb 🏆
You really should not use AIs for multiplication; use a calculator. Find Tool Ai is an important Ai to save money. Button Ai is another good one.
Thank you so much for sharing your knowledge with us, it's extremely useful and inspiring (at least for me as a dev that is working on cashing in on AI). By the way, what do you think of MemGPT?
Thanks! MemGPT is a super interesting architecture; I haven't really run it in a product though. Do you know any applications built with MemGPT?
Yeah I think there is a lot of potential. I'm not aware of any commercial application using it though, but I'm going to test it in some projects @@AIJasonZ
such a good video!
Stay ahead in the competitive market by leveraging the unique capabilities of *Phlanx's Caption Generator* , which not only saves you valuable time but also contributes directly to revenue growth through increased customer engagement.
That's not true that it's a 'new type of cost'. Traditional software companies have always needed to care about and look out for API costs. Anyone who has used gcloud or AWS has racked up some unexpectedly high API costs one way or another. You can also set some spending limits in your API settings on the OpenAI platform.
Mixtral 8x7b*
But what if I need an AI that needs to be trained with one data snapshot?
ecoassistant video please!
companies put in "fair usage" clauses to cap or throttle users. ask your smart "sales agent" about that idea.
Can someone please explain to me how GPT-4 32k is more powerful than GPT-4 Turbo 128k? I thought GPT-4 Turbo 128k was the best OpenAI model.
it's not, idk why he says that
In my experience, GPT-4 Turbo is faster and cheaper; however, it has less stable performance & is a bit "dumber" than GPT-4 32k.
E.g. when I build agents, I found GPT-4 Turbo often ignores some instructions & forgets to do some steps, while with 32k the performance is much more stable
Would love a deeper dive into Ecoassistant. In a couple of weeks, we're about to look at some optimization strategies! Thank you!
Reminds me of that scene in Silicon Valley where AI Dinesh speaks to AI Gilfoyle
You should change your name to jAIson
Hahah love it
I managed to build the clone for AI GF for free now with local LLM.
Did you get the AI girlfriend to work? Because you can now create an AI sales agent for your website to talk to. Hope to hear from you
Low-cost LLMs will win. Opensource, low parameter count, fast inference architecture, compute distributed to regional servers.
Security as hardware appliance, ie. Pluton chip.