RouteLLM achieves 90% of GPT-4o quality AND is 80% CHEAPER

  • Published: 24 Jul 2024
  • Science

Comments • 227

  • @matthew_berman  17 days ago +56

    My "AI Stack" is RouteLLM, MoA, and CrewAI. What about you?

    • @craiggriessel1872  17 days ago +1

      AISheldon 🤓

    • @shalinluitel1332  17 days ago +4

      It would be best to have alternatives to all of these that are free and open source. Maybe later down the line... The video is really cool though! Thanks Matthew

    • @santiagomartinez3417  17 days ago +8

      Is MoA mixture of agents?

    • @AIGooroo  17 days ago +20

      Matthew, please do the full tutorial on how to set this up. Thank you

    • @smokewulf  17 days ago +15

      RouteLLM, MoA, and Agency Swarm. You should do a video on Agency Swarm. I think it is the best agentic framework

  • @davtech  17 days ago +122

    Would love to see a tutorial on how to set this up.

    • @AlexBrumMachadoPLUS  17 days ago +3

      Me too ❤

    • @bamit1979  17 days ago

      I think some other AI enthusiast covered it a few days back. It was quite easy. Check YouTube.

    • @ChristianNode  16 days ago

      Get the agents to watch it and do it.

    • @sugaith  16 days ago

      On how to set this up IN THE CLOUD as well, preferably

    • @averybrooks2099  16 days ago +3

      Me too, but on a local machine instead of a third-party service.

  • @clapppo  17 days ago +28

    It'd be cool if you did a vid on setting it up and running it locally

    • @anubisai  16 days ago

      Ollama or LM Studio?

  • @velocityerp  17 days ago +10

    Matthew - for those of us who develop line-of-business apps for SME businesses, local LLM deployment is a must. Would certainly like to see you demo RouteLLM with orchestration - Thanks!

  • @cool1297  17 days ago +72

    Please do a tutorial for local installation of this. Thanks

    • @camelCased  17 days ago +4

      What exactly? As I understand it, RouteLLM is not an LLM itself but just a router.
      You can install local LLMs very easily using Backyard AI.

    • @m8hackr60  17 days ago +2

      Sign me up for the full tutorial!

    • @DihelsonMendonca  16 days ago

      ​@@camelCased Or LM Studio

    • @bigglyguy8429  16 days ago +1

      @@camelCased But how to use the router with Backyard?

    • @camelCased  16 days ago

      @@bigglyguy8429 Why would you want to use the router at all if running LLM models locally?

  • @josephremick8286  17 days ago +24

    I am a cybersecurity analyst who knows very little about coding, so between your videos and just straight asking ChatGPT or Claude, I am ham-fisting my way through getting AI to run locally. Please keep making tutorial videos - I am excited to see how to implement RouteLLM!

    • @s2turbine  17 days ago +4

      I agree, I'm pretty much in the same boat as you. The problem is that my knowledge is outdated by the time I finally figure things out, because there is so much advancement in so little time. I think we need a "checkpoint" how-to on how to do things now, as opposed to 3 months ago.

    • @DihelsonMendonca  16 days ago

      If you don't know much about anything, like me, but want to run LLMs locally, you just need to install LM Studio. No need to understand anything. The software even has the option to download, install, and run them. That's what I use. Now that I've learned a bit more, I will try to install Open WebUI, Ollama and Docker; those are way more complicated. 🎉❤

  • @bernieapodaca2912  16 days ago +2

    Yes! Please show us a comprehensive breakdown of this great tool!
    I'm also interested in your sponsor's product, LangTrace. Can you possibly show us how to use it?

  • @caseyvallett8953  17 days ago +6

    Absolutely do a detailed tutorial on how to get this up and running!

  • @AngeloXification  17 days ago +3

    I feel like everyone is realising things at the same time. I started 2 projects: the first an LLM coordination system, the second chain-of-thought processing on specific models

  • @aiforculture  16 days ago +2

    Great breakdown, much appreciated. I definitely foresee local LLMs becoming dominant for organisations as soon as next year. My advice during consults is for them not to invest a massive amount in high-end data-secure cloud systems, but just to hang on a little, work with dummy data on current models to build up foundational knowledge, and then once local options exist they can start diving into more sensitive analytics.

  • @MichaelLloydMobile  17 days ago +5

    Yes, please provide a tutorial on setting up the described language model.

  • @mrbrent62  16 days ago +1

    I also saw that they will have 20TB M.2 drives in a couple of years. Running this LLM locally will be really cool.

  • @AshishKumar-hg2cl  16 days ago +1

    Hey Matt, yes, it would be great if you could show a demo of how to set up this model on Azure OpenAI or Azure Databricks and then use it in an application.

  • @jamesvictor2182  16 days ago

    Just popping up to say thanks, Matthew. You have become almost my only required source for AI news because your take is right up my street every time. Great work, keep it coming

  • @wardehaj  17 days ago +1

    Thanks for this video. Very informative.
    Please make a full tutorial about the setup of RouteLLM and what the recommended specs for the local PC should be. Thank you in advance!

  • @joe_limon  17 days ago +7

    There seems to be a holdup on the highest-end models, as the leading companies continually try to improve safety while watching their competition. Nobody seems to want to jump in and release a new/better model at the risk of the potential "dangerous" label being applied to them. So a lot of the progress remains hidden in the lab, waiting for competition to finally engage.

    • @steveclark9934  17 days ago +1

      Improve safety really means neuter.

    • @davidk.8686  16 days ago

      So far with LLMs, "data is code"... it is inherently unsafe, unless something fundamentally changes

  • @CookTheBruce  15 days ago

    Yes! The tutorial. Great vid. Sharing with my crew... Just beginning an AI consulting agency, and cost is an existential threat!!!

  • @D0J0Master  16 days ago +1

    How would this affect mixture of agents? Could we have multiple RouteLLMs combined together, since they use so much less compute?

  • @madelles  17 days ago +1

    It would be interesting to see how this will work on your AI benchmark. Please do a setup and test

  • @dezigns333  17 days ago +16

    It's time people admit that benchmarking off GPT-4 is stupid. When GPT-4 came out it was amazing. Now it's no better than any other LLM. Ever since OpenAI introduced cheaper Turbo models, the quality has gone downhill. They sacrificed intelligence for speed to the point where they have plateaued in quality, and it's not getting better no matter how many new models they release.

    • @orthodox_gentleman  17 days ago

      Thanks for being real, bro. I absolutely agree with you. I barely even use ChatGPT anymore because it sucks.

    • @irql2  16 days ago

      "Now its no better than any other LLM" -- do you really believe this? Seems like you do. That's certainly a take.

    • @kyleabent  16 days ago

      I agree, man. I don't care about speed as much as I care about accuracy. I'll happily wait for a better response rather than rapidly go through 2-3 quick responses that need more time in the oven.

  • @MarcvitZubieta  17 days ago +1

    Yes! Please, we need a full tutorial!

  • @antonio-urbanculture  16 days ago

    Yes, I really like your idea of a complete install-and-run tutorial. Go for it. 🙏 Thanks 👍

  • @parimalthakkar1796  15 days ago

    Would love a local setup tutorial! Thanks 😊

  • @johngrauel1661  16 days ago

    Yes - please do a full tutorial on setup and use. Thanks.

  • @MEvansMusic  5 days ago

    Can this be used to route between agents as opposed to model instances? For example, routing to a chain-of-thought agent vs a simple Q&A agent?

  • @NNokia-jz6jb  17 days ago +5

    So, how to run it? And on what hardware?

  • @jlwolfhagen  16 days ago

    Would love to see a tutorial on setting up RouteLLM! 🙂

  • @limebulls  16 days ago

    Yes please, full setup!

  • @kamilnowak4329  17 days ago

    The only channel where I actually watch ads. Very interesting stuff

  • @davieslacker  17 days ago

    I would love to catch a tutorial of you setting it up!

  • @danielhenderson7050  17 days ago +1

    I think you misrepresented the graph. The "ideal router" point on the graph is likely just that - the ideal. I don't think that's claiming actual results

  • @dantfamily9831  16 days ago

    I'd be interested in what hardware is needed to run something like this locally. I was waiting until late fall or early next year to buy, but I might need to get an intern system to train up. I am big on local control, except when needed to reach out.

  • @MagusArtStudios  16 days ago

    The first thing I did a year and a half ago was route between different LLMs via a zero-shot classifier. Looks like RouteLLM has done the same thing, lol. I figured it was common sense.

  • @galdakaMusic  16 days ago

    We need something local for simple purposes. For example, local Home Assistant control.

  • @harshshah0203  17 days ago +1

    Yes, do make a whole tutorial on it

  • @Idea-LabAi  17 days ago

    Please do a tutorial. And measure performance to validate the performance-cost graph.

  • @KingMertel  16 days ago

    Hey Matt, what are these routers exactly? (They are not LLMs, I understand.) And how do they determine where to route to?

  • @nate2139  16 days ago

    This sounds interesting, but does it offer the same capabilities that the OpenAI API offers, with customizable assistants, RAG, and function calling? I have yet to find anything that compares. Would love to see something open source that can do this.

  • @mafo003  16 days ago

    I've seen you do tech walkthroughs before and would love to see you do this one as well, please.

  • @rafaeldelrey9239  16 days ago

    The article used GPT-4, not GPT-4o, which is already 50% of GPT-4's cost. Or am I missing something?

  • @socialexperiment8267  17 days ago +1

    Thanks! As always, great! 🎯👍

  • @leonwinkel6084  16 days ago

    For coding this would be insane. Mixed local and API endpoints

  • @Ed-Shibboleth  17 days ago

    That's good stuff. I will take a look at the codebase. Thanks for sharing

  • @solifugus  16 days ago

    Yes please... Full tutorial on setting this up to run locally. Also, I'd like to know how to set up multi-modal so I can show it my images and casually talk to it (locally).

  • @rilum97  17 days ago

    You are so consistent, bro. Keep it up 🙌

  • @MoadKISSAI  16 days ago

    Always yes for a full tutorial

  • @imramugh  15 days ago

    I'd love to see a demo if possible.

  • @ralfw77  17 days ago

    Hi Matthew,
    I love your channel. I'm curious if you would be willing to explore Pi AI? It doesn't compare to the others in the same way. Maybe it's hard to test. But very interesting. It's trained to be empathetic, and you can actually have a conversation with voice that feels satisfying.

  • @knecting  16 days ago

    Hey Matt, please do a tutorial on setting this up.

  • @3enny3oy  17 days ago

    You should consider including Semantic Kernel and GraphRAG in that ideal stack

  • @MattReady  16 days ago

    I'd love a guide to easily set this up for myself

  • @PatrickWriter  16 days ago

    Yes, please make a tutorial on RouteLLM.

  • @martingauthier5245  16 days ago

    It would be really cool to have a tutorial on how to implement this with Ollama

  • @aleksandreliott5440  3 days ago

    I would love to see a tutorial on how to get this running locally.

  • @AseemChishti  13 days ago

    Yes, give us a walkthrough video for RouteLLM

  • @xhy20x  16 days ago +1

    Please do a demonstration

  • @thecatsupdog  16 days ago

    Does your local model search the internet and summarize a few web pages? That's what ChatGPT does for me, and that's all I need.

  • @audiovisualsoulfood1426  13 days ago

    Would also love to see the tutorial :)

  • @sophiophile  16 days ago

    After developing exclusively on GPT models, then joining an org with a ridiculous amount of free GCP credits and being pushed to use the Gemini family instead, I can honestly say that while differences on benchmarks may seem small, they end up being really extreme in practice. I spent days smashing my head against a wall trying to get Gemini to provide quality responses, and after switching to 4o, I was literally ready to deploy.
    There still don't seem to be great benchmarks that represent the performance of generative models well.

  • @orthodox_gentleman  17 days ago

    This wasn't just released. It has been around for a while. Now that GPT-4o and Claude 3.5 Sonnet exist, things are much cheaper. I can understand using a local LLM with these two, but overall the cost savings are not as big of a deal as before.

    •  16 days ago

      The API for Claude and GPT is still expensive.

  • @macjonesnz  16 days ago

    I think they are saying the brown dot is where an ideal LLM would be placed. I'm not sure that RouteLLM is better than Claude 3 Opus, so I'm not sure where on that chart their router actually is - probably down with Llama 3 8B, because its only job is to route.

  • @ashtwenty12  17 days ago

    Could you do a tutorial on RAG (retrieval-augmented generation)? I think it'll be a pretty massive thing in agentic architecture. Also, I think RAG might soon be more than just text and PDFs 😂 in the not-too-distant future.

  • @sapito169  17 days ago

    Wonderful
    Now you can offer a low-cost service and a premium service at different prices

  • @ritviksinghal9190  16 days ago

    An implementation would be interesting

  • @executivelifehacks6747  16 days ago

    I suspect these features, plus dedicated non-GPU hardware, will eventually reduce energy costs per "thought" to less than the human brain's. Currently, Perplexity using Sonnet 3.5 thinks GPT-4 uses 25x more.

  • @hipotures  16 days ago

    Reading and watching anything about AI is like a live broadcast of the Manhattan Project in 1942. Is the current year 1944?

  • @opita  16 days ago

    Can you please look into the Alloy voice assistant?

  • @phieyl7105  12 days ago

    The problem with this method is that there are some trade-offs. While it may be cheaper at answering a question directly, you sacrifice social intelligence. Even though you get the right answer, the way the answer is phrased can be the difference between a toddler and a graduate student. Personally, I would want to talk with the graduate student.

  • @geekswithfeet9137  15 days ago

    Every single time I've seen a claim like this, the output in real usage never compares

  • @woszkar  16 days ago

    Is this an LLM that we can use in LM Studio?

    •  16 days ago

      It's just a proxy that sends queries to two models, weak vs strong. It's not a new LLM.

  • @parthwagh3607  16 days ago

    Yes, we need a detailed video

  • @andresfelipehiguera785  17 days ago

    A tutorial would be great!

  • @monnef  16 days ago

    Promising, but a bit of a mess with naming. They use "GPT-4" to mean at least GPT-4 Turbo and GPT-4 Omni in various places. I'm not even sure whether in some places they don't really mean the older GPT-4 model.

  • @heltengundersen  6 days ago

    Claude 3.5 Sonnet is missing from the chart.

  • @BradleyKieser  17 days ago

    Yes please, do the tutorial.

  • @HawkX189  17 days ago +1

    Let me launch this... Online models are still saving themselves because of context.

  • @angelwallflower  15 days ago

    Yes, I vote for a setup tutorial. Please and thank you

  • @mikezooper  17 days ago

    It doesn't change anything. LLMs are good at certain tasks (most of which aren't as useful as we need, and most don't help us earn money). AI has plateaued. They haven't replaced software engineers.

  • @tytwh  16 days ago +1

    Do you and Wes Roth collaborate? He uploaded an identically titled video 2 hours ago.

  • @RaedTulefat  16 days ago

    Yes please, a tutorial!

  • @davidk.8686  16 days ago

    When "data = code", how can you have security while having an actually useful / powerful AI?

  • @MPXVM  16 days ago

    If it runs on a local machine, why does it need OPENAI_API_KEY?

    •  16 days ago +1

      Because it still needs to query weak models (like Mistral) and strong models like GPT
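
A hedged sketch of what that split can look like in practice. The variable and model names below are illustrative assumptions, not RouteLLM's documented configuration; the point is that only escalated queries ever touch the paid API:

```shell
# Hypothetical setup: weak model served locally, strong model via the OpenAI API.
export OPENAI_API_KEY="sk-..."        # consumed only by strong-model calls
export STRONG_MODEL="gpt-4o"          # cloud model for hard queries (example name)
export WEAK_MODEL="ollama/llama3"     # local model, no key required (example name)
```

With a layout like this, fully local operation would mean pointing both the strong and weak slots at locally served models, at which point no key is needed.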

  • @mdubbau  17 days ago

    Please do a tutorial on setting up

  • @mickmickymick6927  17 days ago

    95% of my queries, even GPT-4o or Sonnet 3.5 can't answer, so I don't know what your queries are that local models usually handle fine.

  • @nashad6142  17 days ago

    Yessss! Go open source

  • @rawleystanhope3251  16 days ago

    Full tutorial pls

  • @keithhunt8  16 days ago

    Yes, please. 🙏

  • @Kutsushita_yukino  17 days ago

    How??

  • @keithycheung  15 days ago

    Please do a tutorial!

  •  16 days ago

    While this looks promising, it is just a router that forwards simple queries to weak models and hard queries to strong models. This assumes that the queries can be divided between strong and weak models. If your workload is truly intensive, I don't see much reduction here, as it still requires querying strong models most of the time.
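
The mechanism described in that comment can be sketched in a few lines. This is a toy illustration, not RouteLLM's actual approach: a real router uses a classifier trained on preference data, while the difficulty heuristic and threshold below are invented stand-ins:

```python
# Toy router: score a query's "difficulty" and send it to the strong model
# only when the score clears a threshold. Lowering the threshold trades
# cost for quality, which is the knob the video's cost/quality curve shows.

def difficulty_score(query: str) -> float:
    """Crude stand-in for a learned difficulty score in [0, 1]."""
    hard_markers = ("prove", "derive", "refactor", "optimize", "step by step")
    score = 0.2 + 0.15 * sum(marker in query.lower() for marker in hard_markers)
    score += min(len(query) / 2000, 0.3)  # longer prompts tend to be harder
    return min(score, 1.0)

def route(query: str, threshold: float = 0.4) -> str:
    """Forward hard queries to the strong model, everything else to the weak one."""
    return "strong-model" if difficulty_score(query) >= threshold else "weak-model"

print(route("What is the capital of France?"))                              # weak-model
print(route("Prove this invariant and refactor the module step by step."))  # strong-model
```

This also makes the commenter's objection concrete: if nearly every query in your workload scores above the threshold, the router degenerates into always calling the strong model and the savings vanish.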

  • @Alice_Fumo  17 days ago

    I really don't find this to be a big deal. I expect people select the model themselves on a per-task basis, based on what they believe is most appropriate for the task. For me the decision process is really simple:
    1. Is it code, or does it require complex problem-solving? -> Claude 3.5 Sonnet
    2. Do I want to have a deep conversation with a creative partner? -> Claude 3 Opus
    3. Is it anything the other models would refuse? -> GPT-4o
    4. Is it too private for any of the above? -> Local LLM
    I don't need a router for this, and I wouldn't trust it to reliably choose the same way I would, either.
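
For what it's worth, that four-way decision is itself a (hand-rolled) router. A sketch of it as code, where the model names follow the comment and the keyword predicates are invented for illustration:

```python
# Rule-based dispatch mirroring the commenter's four checks, applied with the
# hard constraints (privacy, refusals) first.

def pick_model(task: str, *, private: bool = False, refused_elsewhere: bool = False) -> str:
    task_l = task.lower()
    if private:
        return "local-llm"            # 4. too private for any cloud model
    if refused_elsewhere:
        return "gpt-4o"               # 3. something the other models would refuse
    if any(k in task_l for k in ("code", "debug", "algorithm", "solve")):
        return "claude-3.5-sonnet"    # 1. code / complex problem-solving
    if any(k in task_l for k in ("conversation", "brainstorm", "creative")):
        return "claude-3-opus"        # 2. deep creative conversation
    return "claude-3.5-sonnet"        # default: strongest general model

print(pick_model("debug this algorithm"))                   # claude-3.5-sonnet
print(pick_model("creative brainstorm session"))            # claude-3-opus
print(pick_model("summarize my tax return", private=True))  # local-llm
```

The difference with a learned router is only that the predicates are trained rather than hand-written, and that the routing target is chosen by cost as well as fit.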

  • @jackbauer322  17 days ago

    Don't ask in the comments each time, JUST DO IT !!!

  • @WylieWasp  15 days ago

    4:59 You lost me completely with LangTrace! What does it do, and why would I want it?

  • @chrismann1916  15 days ago

    Now, who has this in production?

  • @trelligan42  17 days ago

    @7:07, "causal" not "casual". #FeedTheAlgorithm

  • @samuelopoku4868  15 days ago

    If I could like and subscribe harder, I would. A tutorial would be fantastic, thanks 👍🏿

  • @分享免费AI应用  16 days ago

    90% GPT-4o quality? More like 100% snake oil! Where do I sign up for this "RouteLLM" deal?

  • @user-em2hr4gj1f  17 days ago

    Can you make a comparison video of LangGraph X GraphRAG?

  • @yazanrisheh5127  16 days ago

    Do a tutorial on this, please

  • @SvenReinck  17 days ago

    That's probably what Apple does with Apple Intelligence and Private Cloud Compute.