The REAL cost of LLM (And How to reduce 78%+ of Cost)

  • Published: 19 Dec 2024

Comments •

  • @jasonfinance
    @jasonfinance 10 months ago +58

    I never got the point of setting up those LLM monitors before, but the step-by-step guide at the end showing how you use it & how it led to real cost reduction is gold (70% is crazy!); will try it out, thank you!

  • @goutamkelam6117
    @goutamkelam6117 10 months ago +1

    🎯 Key Takeaways for quick navigation:
    19:51 💡 *Analyze token consumption for cost optimization.*
    20:19 💻 *Install LangSmith and set it up.*
    21:01 🛠️ *Setup environment variables for connection.*
    21:43 📊 *Implement tracking methods for insights.*
    22:12 📚 *Utilize LangChain for research projects.*
    23:06 📝 *Log project activities for monitoring.*
    24:03 💰 *Analyze token costs for optimization.*
    24:31 📉 *Reduce GPT-4 usage for cost savings.*
    25:12 📄 *Implement content summary for efficiency.*
    26:09 ✂️ *Optimize script tool for better results.*
    Made with HARPA AI

  • @que-tangclan
    @que-tangclan 10 months ago +18

    This is the best AI content I have seen all week. Thank you for this.

  • @kguyrampage95
    @kguyrampage95 10 months ago +12

    Bro that's crazyyy, I literally just wrote down notes today on different approaches to reducing costs. I was about to test them out and saw this video in my inbox. Damn, very timely.

  • @timothyspottering
    @timothyspottering 10 months ago +23

    Hi Jason!
    Another alternative to measure costs in your script is to simply use the chat completion information provided by the OpenAI API.
    Every time you call the API, it returns the total tokens in the response JSON in the "usage" dictionary. That way, you can monitor & control your usage as well.
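The approach this commenter describes can be sketched as follows. It assumes the standard `usage` fields (`prompt_tokens`, `completion_tokens`) that Chat Completions responses include; the prices in the table are illustrative placeholders, not current OpenAI rates:

```python
# Sketch: per-call cost from the "usage" dict in an OpenAI chat completion
# response JSON. Prices below are assumed placeholder values, not quotes.

PRICES_PER_1K = {  # (input, output) USD per 1K tokens -- illustrative only
    "gpt-4": (0.03, 0.06),
    "gpt-3.5-turbo": (0.0005, 0.0015),
}

def call_cost(model: str, usage: dict) -> float:
    """usage is the 'usage' dict returned with every chat completion."""
    inp, out = PRICES_PER_1K[model]
    return (usage["prompt_tokens"] / 1000) * inp + \
           (usage["completion_tokens"] / 1000) * out

# Shape of the dict the API returns alongside each response:
usage = {"prompt_tokens": 1200, "completion_tokens": 300, "total_tokens": 1500}
print(round(call_cost("gpt-4", usage), 4))  # → 0.054
```

Summing these per call gives a running spend without any external monitoring tool.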

  • @Joe-bp5mo
    @Joe-bp5mo 10 months ago +14

    Didn't realise the cost gap between GPT-4 & open-source models like Mixtral is so big! 200x more expensive really changes how I think about building LLM products;
    Thanks for sharing! Will definitely try to optimise my LLM apps!
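The kind of gap this commenter mentions is easy to sanity-check. Both prices here are assumed placeholders (USD per 1K input tokens), not real quotes:

```python
# Back-of-the-envelope check of the claimed ~200x cost gap.
gpt4_per_1k = 0.03        # assumed GPT-4 input price
mixtral_per_1k = 0.00015  # hypothetical hosted open-source (Mixtral) rate

ratio = gpt4_per_1k / mixtral_per_1k
print(f"GPT-4 is roughly {ratio:.0f}x more expensive per input token")
```

At these placeholder rates the ratio comes out to 200x, which is why routing even part of the traffic to a cheap model moves the bill so much.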

  • @Ke_Mis
    @Ke_Mis 10 months ago +9

    Your content is just superb as always Jason!

  • @gsolaich
    @gsolaich 10 months ago +1

    We were planning to build AI-assistant-style apps but always pulled back due to the cost they incur. This is a fabulous video that has given us a new direction to go in. Thanks a lot... looking forward to seeing other videos

  • @betun130
    @betun130 6 months ago

    Superb content Jason, I will highly recommend your videos to everyone getting their hands dirty with LLMs. I am gonna try some of these myself. It's a shame I didn't build it before, because something like the AI router had occurred to me, but I didn't have the patience to implement it.

  • @leandroimail
    @leandroimail 10 months ago +2

    Thanks very much for this video. I have been having problems with the cost of my agents. I will apply the tips and clues that you gave. Thanks again.

  • @michaelwallace4757
    @michaelwallace4757 10 months ago +9

    A step by step build of an agent architecture would be invaluable! Thank you for the video.

  • @ZacMagee
    @ZacMagee 10 months ago +4

    Love your content man. You have helped me really expand my knowledge and push my boundaries

  • @oscarcharliezulu
    @oscarcharliezulu 10 months ago +3

    Excellent video great to hear real world experience from a real Dev

  • @xugefu
    @xugefu 10 months ago +1

    Thanks!

  • @serenditymuse
    @serenditymuse 10 months ago +3

    Excellent. Most of his videos are but this one was especially useful to me.

  • @oryxchannel
    @oryxchannel 9 months ago +1

    See the groundswell paper dated Jan 29th 2024: "Towards Optimizing the Costs of LLM Usage." These Indian authors are gonna kick some serious butt regarding costs. I see the FrugalGPT paper in your video too. Thank you for offering real-world case scenarios from your personal experience. Edit: This video is a trove on frugal LLM building. Awesome job!

    • @AIJasonZ
      @AIJasonZ  9 months ago

      Thank you!

  • @matten_zero
    @matten_zero 10 months ago +1

    I've done that before @18:46. It works pretty well, especially when you combine it with SPR (popularized by David Shapiro).

  • @Beloved_Digital
    @Beloved_Digital 10 months ago +1

    I am a newbie when it comes to building AI-powered apps.
    Although I don't fully understand everything you say because I am still learning the basics, all I can say is: thank you for sharing this valuable content with us

  • @TimBnb
    @TimBnb 10 months ago +1

    This channel is the best school out there today.
    Thanks Jason

  • @misterloafer5021
    @misterloafer5021 10 months ago +4

    Yes, please do a video on multi agent methods

  • @taylorthompson4212
    @taylorthompson4212 10 months ago +4

    This video came at the perfect time. Thank you

  • @holdingW0
    @holdingW0 10 months ago +1

    Excellent video. Subbed and hope you keep the content coming!

  • @addisobi772
    @addisobi772 4 months ago

    Great, Jason. You have helped me understand a lot

  • @shervintheprodigy6402
    @shervintheprodigy6402 4 months ago

    This is a great video! Exactly what I was looking for!

  • @Max-zy2ie
    @Max-zy2ie 10 months ago +3

    When building multi agent orchestration systems, what is your preferred stack? Do you use langchain, autogen or just native APIs?

  • @momentumsoftio
    @momentumsoftio 9 months ago

    You can also use natural-language-processing lemmatization to convert words into their lemma, or root word, to reduce the content "weight", i.e. token count. You don't need the extra word garbage like suffixes. LLMs do a good job of extracting meaning from lemmatized content. It's like you are cutting through the syntactic sugar of the English language, getting to the root meaning, and not wasting the LLM's time.
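A toy sketch of the idea: map inflected words to their lemma before sending text to the LLM. A real pipeline would use a proper lemmatizer such as spaCy or NLTK; the tiny lookup table here is purely illustrative:

```python
# Toy lemmatizer: shrink token weight by collapsing inflections to lemmas.
# LEMMAS is a hypothetical mini-lexicon standing in for a real lemmatizer.
LEMMAS = {
    "running": "run", "ran": "run", "runs": "run",
    "analyses": "analysis", "cheaper": "cheap",
}

def lemmatize(text: str) -> str:
    # Lowercase each word and replace it with its lemma when known.
    return " ".join(LEMMAS.get(w.lower(), w.lower()) for w in text.split())

print(lemmatize("Running analyses is cheaper"))  # → run analysis is cheap
```

Whether this actually saves tokens depends on the tokenizer, so it is worth measuring before and after on real prompts.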

  • @matten_zero
    @matten_zero 10 months ago +17

    This is the biggest flex ever! 💪 I can only dream of being as cool an AI engineer as you. I thought building a digital agent with automatic voice that can do RAG was cool.
    There are levels to this game and Jason is on a whole different world. Thanks for posting these videos. They're educational, funny and inspirational for me.

  • @JohnByrneLSM
    @JohnByrneLSM 10 months ago

    Excellent video! I just ran into issues with memory for conversations and I really like the strategies you've offered in this. Thank you.

  • @JimMendenhall
    @JimMendenhall 10 months ago +1

    Thanks for sharing your insights from your work. It's very helpful!

  • @clamhammer2463
    @clamhammer2463 10 months ago +1

    I had this idea for LLM routing a while back and wondered why nobody had done it. I figured there was some sort of information I didn't have that was stopping it.

  • @nicechannel9720
    @nicechannel9720 10 months ago +1

    A great dive into the cost of AI models, as it is hard to find related content. Can you do a video about how much OpenAI is roughly spending on computation cost, and also how this constraint will hinder the adoption of these models in the enterprise space? Great job man 👍

  • @ursusss
    @ursusss 9 months ago

    Thanks!

  • @chengchangyu
    @chengchangyu 3 months ago

    A step-by-step build of an agent architecture would be very helpful. I am looking forward to it.

  • @matten_zero
    @matten_zero 10 months ago

    I'm taking all of this for my startup. This is the way and creates a moat for you assuming you hold on to the weights afterwards

  • @richuanglin6824
    @richuanglin6824 10 months ago

    27 minutes of solid gold! Thanks Jason

  • @gabrieleguo
    @gabrieleguo 10 months ago

    Thanks Jason, your content is always on point and very insightful. Keep it up man!

  • @MrTalhakamran2006
    @MrTalhakamran2006 10 months ago

    Thank you Jason for your hard work to put this together.

  • @the-ghost-in-the-machine1108
    @the-ghost-in-the-machine1108 10 months ago

    this was an intense, highly informative lecture. Thanks Jason, appreciate your work!

  • @nexusinfosec
    @nexusinfosec 10 months ago +1

    Yes please for a video deepdiving into agent architecture for autogen

  • @nikilragav
    @nikilragav 9 months ago

    14:56 - seems like this might not work well for needle-in-haystack approaches, right? Because if you want to ask "what departments were present at this session?", the bigger model does not have an answer to that in its context. You'd need some kind of vector-similarity check first to assess whether the answer might even exist in the context given to the bigger model? And if not, give it the whole thing? Or at least do some RAG-style lookup and fetch? I'm not so sure how well RAG can do needle-in-haystack searching though. Seems highly dependent on your embedding model, and OpenAI doesn't have an option to use GPT-4 embedding space, right?

  • @kernsanders3973
    @kernsanders3973 9 months ago

    I think something similar would also work in the agents scenario. In real life there is a moderator for big disagreements between employees, which would be their team lead. So if a disagreement drags on over multiple replies, the TL needs to step in, lay down the rules and code of conduct, and make a final decision on the disagreement.

  • @savire.ergheiz
    @savire.ergheiz 10 months ago +1

    Sorry to say this, but almost everything you mention here comes down to bad planning and rushing things out without thinking of the after-effects.
    It's not just in AI. It's always been like that, since forever, if you try to follow hype.
    Unless you're backed by big companies or investors, planning way ahead for costs is always a must.

  • @omarzidan6840
    @omarzidan6840 10 months ago

    We love you Jason. Thanks a lot!

  • @GjentiG4
    @GjentiG4 10 months ago

    Great vid! Keep up the good work

  • @mattbegley1345
    @mattbegley1345 9 months ago

    Excellent!👍 Applying that Assistant Hierarchy to your Sales Agent would be a good video.

  • @kguyrampage95
    @kguyrampage95 10 months ago +4

    At 8:05 you made an obvious mistake with the maths; you probably meant the cheapest model, not Mistral, since it would be 50x cheaper, not 214x cheaper

    • @AIJasonZ
      @AIJasonZ  10 months ago +2

      Ahh, I highlighted the wrong row; it should be Mistral 7B. Thanks for spotting this mate!

    • @kguyrampage95
      @kguyrampage95 10 months ago

      @AIJasonZ Hey, this video was great by the way! I am learning to make videos to showcase some of my experiments, and I am hoping I can produce as much quality as you!

  • @sewingsugar9892
    @sewingsugar9892 10 months ago +1

    This channel is so underrated

  • @vinception777
    @vinception777 10 months ago +1

    Thanks a lot; like James Briggs and some others, your content is outstandingly great. This is really important information that I need at work 🙏☺

  • @RichardGetzPhotography
    @RichardGetzPhotography 10 months ago

    Excellent work Jason

  • @MaximIlyin
    @MaximIlyin 10 months ago +1

    Great video, thanks!
    Why not store agent conversation memory as embeddings and retrieve only the parts relevant (by cosine similarity) to the current user query as context?
    (Like RAG for conversation memory)
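The idea in this comment can be sketched in a few lines. The `embed()` here is a toy bag-of-words stand-in; a real system would call an embedding model and likely a vector store:

```python
# Sketch: retrieve only the most similar past messages as context
# (RAG over conversation memory). embed() is a toy stand-in.
import math
from collections import Counter

def embed(text):
    # Bag-of-words "embedding" -- replace with a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = [
    "the deploy failed on friday",
    "we like pizza on fridays",
    "gpu costs doubled last month",
]

def recall(query, k=1):
    # Rank stored messages by similarity to the query; keep the top k.
    ranked = sorted(memory, key=lambda m: cosine(embed(query), embed(m)),
                    reverse=True)
    return ranked[:k]

print(recall("why did the deploy fail?"))
```

Only the recalled messages go into the prompt, so context (and token cost) stays bounded as the conversation grows.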

  • @yazanrisheh5127
    @yazanrisheh5127 10 months ago +1

    Hey Jason. You said at around minute 9 that we should use a model like GPT-4 to get data and then use that to fine-tune, but how much data do we need so that our fine-tuned Mistral model performs as well as GPT-4?

  • @hidroman1993
    @hidroman1993 10 months ago +5

    "Comment if you want a video about this" your videos are so good I will click anyways ❤️

  • @aiforsocialbenefit
    @aiforsocialbenefit 10 months ago

    Excellent tutorial. Thank you!

  • @bhaumiks.6543
    @bhaumiks.6543 8 months ago

    I am interested in learning about the architecture. By the way, amazing videos...

  • @zhubarb
    @zhubarb 10 months ago

    This is a very good video. Appreciate it.

  • @jim02377
    @jim02377 10 months ago +1

    Excellent video! Saved me lots of time trying to figure that out. Keep up the great work!

  • @ivant_true
    @ivant_true 10 months ago

    man, super useful video, thanks !

  • @tks5182
    @tks5182 10 months ago

    Would appreciate a course, or even a comment, on what knowledge you need and what concepts you should know to be an AI & ML engineer

  • @Ryan-yj4sd
    @Ryan-yj4sd 10 months ago

    Fine tuning for token reduction is a key technique I’ve used

  • @YoannGrudzien
    @YoannGrudzien 10 months ago

    Prompt engineer and LLM developer here.
    GPT-4 32k is not the most powerful model; it is outclassed by gpt-4-1106-preview and now gpt-4-0125-preview, which is even better.
    Not only is GPT-4 32k worse, it is also 6 times more expensive! ($0.06/1K tokens for GPT-4 32k, and only $0.01/1K tokens for gpt-4-0125-preview)

  • @rishi8413
    @rishi8413 10 months ago

    Really love your videos. Are there any packages or libraries to use these 7 methods you discussed?

  • @WaxN-ey6vj
    @WaxN-ey6vj 10 months ago

    Since GPT development is rapid, I think making a fine-tuned model is risky because it's time-consuming.
    The cost won't be a big deal, as OpenAI constantly develops new models and reduces the cost of previous ones.

  • @subratnayak2682
    @subratnayak2682 10 months ago

    For the cascade method, how will you measure the score for each new question while in production?

  • @headrobotics
    @headrobotics 10 months ago

    For fine-tuning a small model from a large one, what about the OpenAI terms of service? Have they changed to allow that?

  • @hackerborabora7212
    @hackerborabora7212 10 months ago

    We love your videos 🎉❤

  • @tirthb
    @tirthb 9 months ago

    Wow, super practical tips.

  • @roke4025
    @roke4025 10 months ago

    🎉 Brilliant mate. I’m a fiend for compressing costs to maximum, but I found out that during cost compression some models (eg. Mistral tiny) are not able to make proper custom tool calls and are unable to extract out the JSON response result from the tool call. As soon as a switch is made to an OpenAI model fine tuned to recognise json schemas, tool calls work perfectly (in Flowise). Is that why you persist in using OpenAI models in your calls? As opposed to using a Mistral or Llama inference? So you can achieve the right tool calling?

  • @alibahrami6810
    @alibahrami6810 10 months ago

    Great video! Could you please make a video about putting an LLM into production, covering parallelism, memory and GPU usage, load balancing, and effective software architecture? How to scale up a local LLM to be accessible worldwide like GPT, with optimizing memory and resources in mind? Thanks

  • @prestonmccauley43
    @prestonmccauley43 10 months ago

    What other services have you found for deployment that are cost-friendly? You have to install VMs, containers and more

  • @rchaumais
    @rchaumais 9 months ago

    Many thanks for your useful video.
    Have you evaluated NeMo from Nvidia?

  • @prestonmccauley43
    @prestonmccauley43 10 months ago

    If you use the big ones like Azure, Bedrock, etc., they are so expensive to deploy because of the compute

  • @SophieCheung
    @SophieCheung 10 months ago

    thanks for your video! :)

  • @RolandoLopezNieto
    @RolandoLopezNieto 10 months ago

    Thank you very much for the video

  • @noodjetpacker9502
    @noodjetpacker9502 10 months ago

    I don't know if this is a stupid question, but why doesn't ChatGPT already implement these features itself? Or does it already do this?

  • @breathandrelax4367
    @breathandrelax4367 10 months ago

    Hi Jason,
    thank you for the video, impressive work!
    While building the app, what do you think of using an if/else chain that reroutes to a particular LLM?
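The if/else chain this commenter asks about can be sketched as a hard-coded rule router that sends a query to a cheap or expensive model based on crude difficulty signals. Model names, keywords, and thresholds are all illustrative assumptions:

```python
# Sketch of an if/else LLM router. The rules and model names are
# hypothetical; a real router would tune these against its own traffic.

def route(query: str) -> str:
    q = query.lower()
    if len(q.split()) < 8 and "?" in q:
        return "small-local-model"   # short factual question -> cheapest
    if any(w in q for w in ("prove", "derive", "multi-step", "plan")):
        return "gpt-4"               # reasoning-heavy -> strongest model
    return "gpt-3.5-turbo"           # everything else -> mid-tier default

print(route("What is RAG?"))         # → small-local-model
```

This is the simplest form of the routing idea discussed in the video: no learned router, just rules, which already captures a lot of the savings when most traffic is easy.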

  • @mjkbird
    @mjkbird 10 months ago

    Isn't it against OpenAI's ToS to use the output as training data?

  • @funny_tiger11
    @funny_tiger11 10 months ago

    Is Portkey AI an example of an open-source LLM router? (I have not used it, but it seems to offer the capability you mentioned as a limitation of Neutrino AI.)

  • @evermorecurious91
    @evermorecurious91 10 months ago

    BRO, this is gold!!!

  • @archerkee9761
    @archerkee9761 9 months ago

    nice video, thanks!

  • @mosca204
    @mosca204 10 months ago

    So you inadvertently built a massive email warm-up. At least you will not be flagged as spam for a long time ahah.
    PS: It would be great to see a sales agent video soon ;)

  • @ryzikx
    @ryzikx 10 months ago

    I've always wanted to do this but I'm too dumb and lazy lmao, good to see someone like you is doing it

  • @JashAmbaliya
    @JashAmbaliya 10 months ago

    Really helpful content

  • @the_real_cookiez
    @the_real_cookiez 10 months ago

    How come you don't use state of the art open source LLM models? It should be strong enough right?

    • @helix8847
      @helix8847 10 months ago

      The current issue with them is tool calling. Maybe Code Llama 70B could do it now.

  • @seamussmyth2312
    @seamussmyth2312 10 months ago

    Superb 🏆

  • @450aday
    @450aday 9 months ago +1

    You really should not use AIs for multiplication; use a calculator. A find-tool AI is an important AI to save money. A button AI is another good one.

  • @CoriolanBataille
    @CoriolanBataille 10 months ago

    Thank you so much for sharing your knowledge with us, it's extremely useful and inspiring (at least for me, as a dev who is working on catching up on AI). By the way, what do you think of MemGPT?

    • @AIJasonZ
      @AIJasonZ  10 months ago +1

      Thanks! MemGPT is a super interesting architecture. I haven't really run it in production though; do you know any applications built with MemGPT?

    • @CoriolanBataille
      @CoriolanBataille 10 months ago

      @AIJasonZ Yeah, I think there is a lot of potential. I'm not aware of any commercial application using it tho, but I'm going to test it in some projects

  • @SergiySev
    @SergiySev 10 months ago

    such a good video!

  • @jakobbourne6381
    @jakobbourne6381 10 months ago

    Stay ahead in the competitive market by leveraging the unique capabilities of *Phlanx's Caption Generator* , which not only saves you valuable time but also contributes directly to revenue growth through increased customer engagement.

  • @LaelAl-Halawani-c4l
    @LaelAl-Halawani-c4l 9 months ago

    It's not true that this is a 'new type of cost'. Traditional software companies have always had to care about and look out for API costs. Anyone who used gcloud or AWS has racked up unexpectedly high API costs one way or another. You can also set spending limits in your API settings on the OpenAI platform.

  • @Tanvir1337
    @Tanvir1337 10 months ago +2

    Mixtral 8x7b*

  • @vinitvsankhe
    @vinitvsankhe 10 months ago

    But what if I need an AI that needs to be trained with one data snapshot?

  • @xonack
    @xonack 10 months ago +1

    ecoassistant video please!

  • @simonmassey8850
    @simonmassey8850 10 months ago

    Companies put in "fair usage" clauses to cap or throttle users. Ask your smart "sales agent" about that idea.

  • @sanesanyo
    @sanesanyo 10 months ago

    Can someone please explain to me how GPT-4 32k is more powerful than GPT-4 Turbo 128k? I thought GPT-4 Turbo 128k was the best OpenAI model.

    • @ryzikx
      @ryzikx 10 months ago

      It's not; idk why he says that

    • @AIJasonZ
      @AIJasonZ  10 months ago

      In my experience, GPT-4 Turbo is faster and cheaper, but has less stable performance & is a bit "dumber" than GPT-4 32k.
      E.g. when I build agents, I found GPT-4 Turbo often ignores some instructions & forgets to do some steps, while with 32k the performance is much more stable

  • @joshuahsu5589
    @joshuahsu5589 10 months ago

    Would love a deeper dive into Ecoassistant. In a couple of weeks, we're about to look at some optimization strategies! Thank you!

  • @ianalmeida4759
    @ianalmeida4759 10 months ago

    Reminds me of that scene in Silicon Valley where AI Dinesh speaks to AI Gilfoyle

  • @r3kRaP
    @r3kRaP 9 months ago +2

    You should change your name to jAIson

    • @AIJasonZ
      @AIJasonZ  9 months ago

      Hahah love it

  • @nufh
    @nufh 10 months ago

    I managed to build the clone for AI GF for free now with local LLM.

  • @aifortheworld7152
    @aifortheworld7152 10 months ago

    Did you get the AI girlfriend to work? Because you can now create an AI sales agent for your website to talk to. Hope to hear from you

  • @jonmichaelgalindo
    @jonmichaelgalindo 10 months ago

    Low-cost LLMs will win. Opensource, low parameter count, fast inference architecture, compute distributed to regional servers.

    • @MsDuketown
      @MsDuketown 9 months ago

      Security as a hardware appliance, i.e. the Pluton chip.