Run ANY Open-Source LLM Locally (No-Code LMStudio Tutorial)

  • Published: 7 Jun 2024
  • LMStudio tutorial and walkthrough of their new features: multi-model support (parallel and serialized) and JSON outputs (a minimal API sketch follows below the links).
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? ✅
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Rent a GPU (MassedCompute) 🚀
    bit.ly/matthew-berman-youtube
    USE CODE "MatthewBerman" for 50% discount
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    LMStudio - lmstudio.ai/
    LMStudio Tutorial 1 - • Run ANY Open-Source Mo...
    Disclosures:
    I'm an investor in LMStudio and CrewAI
  • Science
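
    A minimal sketch of the JSON-output feature mentioned in the description, assuming LM Studio's OpenAI-compatible local server is running on its default port; the model name and prompt are placeholders, not taken from the video:

    ```python
    # Structured JSON output from LM Studio's local OpenAI-compatible server.
    # The port, model name, and prompt below are assumptions/placeholders.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="mistral-7b-instruct",              # whichever model is loaded in LM Studio
        response_format={"type": "json_object"},  # ask the server to return valid JSON
        messages=[
            {"role": "system", "content": "Reply only with JSON."},
            {"role": "user", "content":
             'List three open-source LLMs as {"models": [{"name": str, "size": str}]}.'},
        ],
    )

    print(json.loads(resp.choices[0].message.content))
    ```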

Comments • 279

  • @bonkywonky1
    @bonkywonky1 2 месяца назад +59

    Project idea: create a bunch of agents that are experts in specific areas, like coding, Wikipedia, reasoning, law, etc., plus an orchestrating agent. The orchestrator is the only one the user interacts with; it figures out how to respond to user queries by looking at all the available agents and selecting one or more of them to produce the best possible answer.
    Either each agent has a description of what it's good at, or, even better, the orchestrator can read each agent's metadata and recognize what it's good at just from how it's set up.
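
    A minimal sketch of the orchestrator-plus-experts idea above, assuming LM Studio's OpenAI-compatible server is serving the listed models on its default port; the model names, expert descriptions, and router prompt are all placeholders:

    ```python
    # Orchestrator routes each query to one "expert" (model + system prompt)
    # served by LM Studio's local server. All names below are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    EXPERTS = {
        "coding":    {"model": "deepseek-coder-6.7b-instruct", "system": "You are an expert programmer."},
        "law":       {"model": "mistral-7b-instruct",          "system": "You are a careful legal assistant."},
        "reasoning": {"model": "mixtral-8x7b-instruct",        "system": "You reason step by step."},
    }

    ROUTER_PROMPT = ("You are an orchestrator. Given a user query, reply with exactly one word, "
                     "one of: " + ", ".join(EXPERTS) + ".")

    def ask(query: str) -> str:
        # 1) The orchestrator picks an expert based on the query.
        route = client.chat.completions.create(
            model="mistral-7b-instruct",  # placeholder router model
            messages=[{"role": "system", "content": ROUTER_PROMPT},
                      {"role": "user", "content": query}],
        ).choices[0].message.content.strip().lower().strip(".")
        expert = EXPERTS.get(route, EXPERTS["reasoning"])

        # 2) The selected expert answers the query.
        answer = client.chat.completions.create(
            model=expert["model"],
            messages=[{"role": "system", "content": expert["system"]},
                      {"role": "user", "content": query}],
        )
        return answer.choices[0].message.content

    print(ask("Write a Python function that reverses a linked list."))
    ```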

    • @positivevibe142
      @positivevibe142 2 месяца назад +2

      There are coding copilots/agents out there already, like Pythagora GPT Pilot, Devin, Devika, Auto-GPT, GitHub Copilot, Warp, etc.
      Personally, I rely heavily on the current Claude 3 Opus; OpenAI's ChatGPT-4 looks like a joke next to it! 😅

    • @Thedeepseanomad
      @Thedeepseanomad 2 месяца назад +4

      Yes, a specialized and optimized LLM will outperform a general model. If one could successfully train custom specializations, a constellation of models for specialized tasks could result in very high capacity: for example, one 7B model for cyberpunk story structure, one 7B for dialogue in a cyberpunk setting, one 7B for pacing in adventure stories, and so on. You could have a setup totaling around 28B dedicated solely to making cyberpunk stories, while only running a single model at a time.

    • @executivelifehacks6747
      @executivelifehacks6747 2 месяца назад +1

      I have to say I use Claude 3 Opus as a first choice for AI.

    • @bigglyguy8429
      @bigglyguy8429 2 месяца назад

      @@positivevibe142 I have both and I probably won't renew Opus as it's not giving me anything GPT doesn't, but GPT can do more

    • @agentxyz
      @agentxyz 2 месяца назад +8

      great idea! you could call it "mixture of experts"

  • @karankatke
    @karankatke 2 месяца назад +11

    We need more use cases and practical guides with LM Studio. Love your videos. ❤

  • @starblaiz1986
    @starblaiz1986 2 месяца назад +28

    I've been using LMStudio since your last tutorial on it, and I can attest that it's FANTASTIC to use and takes all of the headache out of setting up local AIs. TIP: In LMStudio's settings you can specify the folder to download AI models to. It's worth getting a small dedicated flash drive to store them on. That way you can play about with them without having to worry about hard drive space, as the smallest models are about 5GB and the largest can get into triple digits. Loading them will take slightly longer, but inference won't be affected, as that's done entirely from RAM (and if a model is too big for RAM, it will spill over to your main drive as virtual memory just like it normally would, so having the files on an external USB flash drive doesn't affect it).

    • @jayr7741
      @jayr7741 2 месяца назад

      Hey, should I consider a Perplexity AI Pro subscription to analyze my previous years' exam questions, or will the free LMStudio also be good? Please reply.

    • @serikazero128
      @serikazero128 2 месяца назад +1

      Does LM Studio allow chat with documents?
      I'm trying to set this up properly, but even PrivateGPT doesn't work as it used to.

    • @myandrobox3427
      @myandrobox3427 2 месяца назад +1

      @@serikazero128 AnythingLLM is good for interacting with docs

    • @serikazero128
      @serikazero128 2 месяца назад +1

      @@myandrobox3427 thanks, I'll look into it

    • @bigglyguy8429
      @bigglyguy8429 2 месяца назад

      @@jayr7741 Use Perplexity for now, unless you have a really high-end machine that can run big models (2x 3090 at 24GB each).

  • @JohnLewis-old
    @JohnLewis-old 2 месяца назад +1

    Always love your reviews. Thanks!

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 2 месяца назад +16

    Thank you for the video, and thank you for disclosing that you are an investor in both LMStudio and CrewAI. I wish you could mention it in the video for better transparency.

    • @xtramoist9999
      @xtramoist9999 2 месяца назад +3

      Agreed. Would be highly regarded.

    • @matthew_berman
      @matthew_berman  2 месяца назад +6

      I considered this, maybe I should have. I didn’t want to be…show-off-y.

    • @qwazy0158
      @qwazy0158 2 месяца назад +1

      Are either of these public? Or are both private companies ?

  • @OproDarius
    @OproDarius 2 месяца назад +1

    Such awesome software, can't wait to see where local open-source software delivering agents and LLMs will be in 5 years; it will be such a ride!

  • @BudoReflex
    @BudoReflex 2 месяца назад +5

    I love how you move through topics and keep a concise summary of what is happening without going down rabbit holes. I learn a lot very quickly.

  • @OzzyMoto2K10
    @OzzyMoto2K10 2 месяца назад +1

    Great video, Matt - you have no longer jumped the shark. :)

  • @krisknap
    @krisknap 2 месяца назад

    Thanks for sharing this update and demonstrating with the examples!

  • @lancemarchetti8673
    @lancemarchetti8673 2 месяца назад +3

    The system requirements are very helpful. Thanks

  • @TheExcellentVideoChannel
    @TheExcellentVideoChannel 2 месяца назад

    Thanks Matt, what would we do without you to guide us on this journey!! I started on Ollama and ran into some issues that I don't want to solve just yet, but it looks like LMStudio is what I need to move to in order to get around them. Nice and timely tutorial/overview.

  • @riyadwahib4755
    @riyadwahib4755 2 месяца назад

    Thanks Matt! Great video as usual :) Yes please, it would be nice to see you build something with it powering agents!

  • @morena-jackson
    @morena-jackson 2 месяца назад +1

    Love this type of video, thanks so much for really going into LMStudio. I've had the program for a few months now but never really played with it.

    • @bigglyguy8429
      @bigglyguy8429 2 месяца назад +1

      If you're using it for role-play, also look into Faraday, which is more set up for role-play, and I find it runs the same models faster.

    • @morena-jackson
      @morena-jackson 2 месяца назад

      ​@@bigglyguy8429 thank you!!

  • @panagiotisgalinos1335
    @panagiotisgalinos1335 2 месяца назад

    Man, I like your videos. Very informative.

  • @DanielArnolf
    @DanielArnolf 2 месяца назад +3

    This is getting cooler by the day, what a gift !!! Thanks for your professionalism and dedication.

  • @wakingdreamsroleplay
    @wakingdreamsroleplay 2 месяца назад

    I am not a coder, so I love your videos. This agent stuff excites and terrifies me. I would love to see a project that involves writing complicated documents such as text-based games (for improvised drama type activities) where the character descriptions contain clue info about other players. So the writing agent creates the setting, background, and character descriptions, but an editor agent needs to go through the full document, check that clues found in one character file also show up in the other files, and send it back to the writer to iterate until everything checks out. I can do part of this with prompting, but it always needs a lot of manual editing, so a way to automate it would be nice. If you can demonstrate some similar process, I would certainly be thrilled. ;-)

  • @armans4494
    @armans4494 2 месяца назад +1

    Yes, please. Also publish your endpoints for consumers 🎉❤

  • @eagleterry3349
    @eagleterry3349 2 месяца назад +1

    Looking forward to seeing you build a project.

  • @supahfly_uk
    @supahfly_uk 24 дня назад

    Wow this is amazing, thank you.

  • @vibeymonk
    @vibeymonk 2 месяца назад +1

    Loved this video, please make a playlist out of it! It's more suited to people like me.

  • @screamingiraffe
    @screamingiraffe 2 месяца назад

    best video so far, thank you for this

  • @lucademarco5969
    @lucademarco5969 2 месяца назад

    I would really love to see document Q&A using LMStudio, because I think a lot of companies are interested in this kind of AI use.

  • @sperazza
    @sperazza 2 месяца назад +1

    fantastic, really great video

  • @issiewizzie
    @issiewizzie 2 месяца назад +9

    I'm hoping in the future, there will be a way to train an LLM or specialised model more easily for a beginner. Almost iPhone-friendly.

    • @Yakibackk
      @Yakibackk 2 месяца назад

      No hope bro

    • @alx8439
      @alx8439 2 месяца назад +1

      Oobabooga already has it in the UI. You just need hardware that can cope with it.

  • @anubisai
    @anubisai 2 месяца назад

    Great video, Matt.

  • @REDULE26
    @REDULE26 2 месяца назад +1

    Nice video as always 👍

  • @justinrose8661
    @justinrose8661 2 месяца назад +1

    Thanks Matt! You're my favorite AI YouTuber. I'd love to see you build something cool, we all would I'm sure.

  • @jackflash6377
    @jackflash6377 2 месяца назад +9

    YES to the agents locally.
    Any interest in checking out Devika? It claims to be an open-source Devin.

    • @matthew_berman
      @matthew_berman  2 месяца назад +3

      I spent a good bit of time trying to get it to work but couldn’t. I’ll certainly do a review when I get it working though. Based on its popularity, I suspect it’ll evolve quickly.

    • @rudolfviljoen2847
      @rudolfviljoen2847 2 месяца назад +1

      ​@@matthew_berman If you do make a video on devika please please do a section on using it with a local LLM

  • @teddygbg
    @teddygbg 2 месяца назад +2

    Thanks for this Matthew! Super helpful overview.

  • @vladyslavkorenyak872
    @vladyslavkorenyak872 2 месяца назад

    Wow, this is Gold! I would love a tutorial on how to integrate this into a website.

  • @johnp9091
    @johnp9091 2 месяца назад +1

    Such a good way to test drive some of the local models. Great job on this and all of your other tutorials! I've really learned a lot from your vids.

  • @colmxbyrne
    @colmxbyrne 2 месяца назад +2

    LM Studio is really useful.

  • @mathieuboisvert6865
    @mathieuboisvert6865 2 месяца назад +1

    Fantastic, I was just playing with this today. How would you integrate AnythingLLM with these different models running in parallel and interacting through crewAI?

  • @punishedproduct
    @punishedproduct 2 месяца назад +1

    Agents working locally!!❤

  • @hiddenkirby
    @hiddenkirby 2 месяца назад

    Thanks for this video. I love this development setup. How do you properly serve it all from a cloud?

  • @drlordbasil
    @drlordbasil 2 месяца назад

    LM Studio and its model server are soooo easy.

    • @drlordbasil
      @drlordbasil 2 месяца назад +1

      I'm hoping they add combining and fine-tuning functions, or at least image/other gen models, in the future.

  • @aott6799
    @aott6799 2 месяца назад

    Really excellent videos on LMStudio. Does it have the capability to access local files to update chats with data newer than the cut-off date of the LLM? I'd like to be able to input locally stored ebooks and generate summaries along the lines of "What is 'This Book' about?"

  • @YusriSalleh
    @YusriSalleh 2 месяца назад

    Excellent video. Tqvm! Is there any video considering the various hardware options for running locally? Say various Nvidia GPUs, AMD ROCm, or even Apple Metal?

  • @mikhailkalashnik0v
    @mikhailkalashnik0v 2 месяца назад +1

    Great video, thanks! Any known good models I can use for infosec and/or application security (pen testing)?

  • @TomHimanen
    @TomHimanen 2 месяца назад +1

    Please demo LM Studio and Crew AI as combo! Also this was a great demo, thanks!

  • @tech-vp5xe
    @tech-vp5xe 2 месяца назад +1

    Hey Matt, been following you for a long time, awesome work. Does LM Studio allow different models to exist on separate GPUs?

    • @matthew_berman
      @matthew_berman  2 месяца назад

      Like if you have multiple GPUs? I don’t think so

  • @333dsteele1
    @333dsteele1 2 месяца назад +1

    Great video

  • @myandrobox3427
    @myandrobox3427 2 месяца назад

    This is awesome! Thanks for sharing again!! I have a quick question... I want to run this on a server like you showed, create a no-code app using the APIs, and have users access this application. Kinda creating it for a local group of users. How do you think this is going to work in terms of machine requirements? Please guide me on whether it's a good approach! 🙏🏻

  • @agentxyz
    @agentxyz 2 месяца назад +1

    ty. great video

  • @neugen1019
    @neugen1019 2 месяца назад +5

    Matt you tricked us yesterday with that 01 thing and that voice distraction

    • @erikjohnson9112
      @erikjohnson9112 2 месяца назад +3

      I think that was actually real. The presenter just wants attention, they want to get noticed when they speak (a form of narcissism).
      I found the product to be interesting and the presentation to be distracting (a negative because it draws away from what is being presented).

    • @Akcvs
      @Akcvs 2 месяца назад

      @@erikjohnson9112 actually I think you are the narcissist and are just projecting. Transgender people have existed in every region of the Earth since before civilization itself. They're a real naturally occurring demographic. Get over it

  • @user-ef4df8xp8p
    @user-ef4df8xp8p 2 месяца назад

    LMStudio is cool.....Please, make more videos on this tool....

  • @fruitpnchsmuraiG
    @fruitpnchsmuraiG 11 дней назад

    Hey, can you share some advice for an undergrad on getting into generative models? Where to begin, or rather, what should I learn to understand how LLMs work and to play around with them?

  • @alanmckeon8321
    @alanmckeon8321 2 месяца назад

    Could you use these to help build an application?

  • @Myplaylist892
    @Myplaylist892 Месяц назад

    Is it possible to set folders within LMStudio in order to do some document querying?

  • @dieselphiend
    @dieselphiend 2 месяца назад

    Considering what I was going through to install models previously, this is dumbfoundingly simple.

  • @NahFam13
    @NahFam13 2 месяца назад

    I'd love to see videos on running specific models.
    I've been able to run almost every model I've downloaded, but I can't seem to get StarCoder or StarCoder2 working regardless of the preset I use, and I'd love to get them running without hallucinations or looping the same sentence.
    Another thing I'd love to know is what happened to TheBloke!! I heard he stopped converting models, and I'm sure the reason behind it has to be epic.

  • @MeinDeutschkurs
    @MeinDeutschkurs 2 месяца назад

    Throttle: genius! LM Studio has definitely improved. Gorgeous video!

  • @rodrimora
    @rodrimora 2 месяца назад

    Does it support exl2 quants? If not, that would be the only thing missing for me to make the switch from text-generation-webui.

  • @carthagely122
    @carthagely122 2 месяца назад

    Thanks a lot.

  • @faultyogi
    @faultyogi Месяц назад

    👌"I have learned a lot."

  • @thewatersavior
    @thewatersavior 2 месяца назад +1

    Branching chat would be a dope Chrome plugin.

  • @mendthedivide
    @mendthedivide 2 месяца назад

    What kinda chip does your laptop have, Matthew Berman? M1, M2, M3?

  • @profitunist5876
    @profitunist5876 23 дня назад

    Hi, I was wondering how you'd download gated models, like meta-llama/Meta-Llama-3-8B? There are quantized versions from other authors but I'd prefer to download the actual one released by Meta. Thanks

  • @therighteousagent
    @therighteousagent 2 месяца назад

    will there be a memory and roleplay function implemented?

  • @WylieWasp
    @WylieWasp 2 месяца назад

    Hi Matthew, I just tried to sign up for the newsletter and something is broken, just to let you know.

  • @thegooddoctor6719
    @thegooddoctor6719 2 месяца назад

    So the big question is, when are they going to start charging you money to use LMStudio?????? You've got the best content, as usual!!!!!!!!!!

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 2 месяца назад

    I'm especially interested in the ability to generate multiple responses from the same model then selecting the best one. Can LM Studio do that at this time?

  • @KaiPhox
    @KaiPhox 2 месяца назад

    There are many models of Grok to download. Do I need all the files, or is there a one-click download for LM Studio?

  • @trazercreations8478
    @trazercreations8478 2 месяца назад +3

    also you can use this with AnythingLLM for documents

    • @racerx1777
      @racerx1777 2 месяца назад

      DO NOT USE ANYTHINGLLM, it is no longer free! I'm starting to see a pattern with this video creator. As soon as he releases a video, these so-called free things all of a sudden become paid-for versions! I used AnythingLLM one night on this computer after watching this guy's video on AnythingLLM; the very next morning I went to put it on a computer at work and it was no longer free but subscription-based! I AM DONE WITH THE MONEY GAMES, ON PRINCIPLE! THEY ARE ALL TRYING TO CASH IN ON THIS CRAP THAT AMOUNTS TO NOTHING MORE THAN A GIMMICK, AN AI TREND IF YOU WILL!!! BANKRUPT THESE PEOPLE!

  • @babbagebrassworks4278
    @babbagebrassworks4278 2 месяца назад

    Doesn't work on my Pi5, but Ollama does. They only seem to use memory when answering a prompt, so I can have multiple versions of Ollama running with different models, as long as I only prompt one at a time. AMD Ryzen AI and Intel Core Ultra have NPUs onboard now, so there's no need for a big GPU card.

  • @Al-Storm
    @Al-Storm 2 месяца назад

    I run AnythingLLM on top of this. It has a nice RAG setup.

  • @Mangini037
    @Mangini037 2 месяца назад +1

    Yes, pleeeease do a tutorial video using AutoAgent. Thanks.

  • @jeffg4686
    @jeffg4686 2 месяца назад

    They should add a visual node setup like ComfyUI, but for creating purely LLM-based apps.
    Some generate button or whatever that spits out the Python code (or just runs it).
    Not that it's hard to write, but some might prefer the node setup.

  • @nannan3347
    @nannan3347 2 месяца назад

    If LMStudio had a built in RAG it would be perfection.

  • @irom77
    @irom77 2 месяца назад

    Hey, what laptop would you recommend for this stuff ?

  • @infernosfmatt
    @infernosfmatt 2 месяца назад

    Can u make or do you have a vid on how LLMs are created?

  • @sizwemsomi239
    @sizwemsomi239 2 месяца назад

    this is fire

  • @chrisb9045
    @chrisb9045 2 месяца назад

    Hi, is it possible to upload your own document files to LM Studio, e.g. Excel files, photos, PDF, TXT?

  • @horriblyblue6203
    @horriblyblue6203 2 месяца назад

    Hello Matthew, does LMStudio support AMD RX GPUs to power LLMs? Because I can't figure it out, and it still only uses my CPU.

  • @Paulina-ds5bg
    @Paulina-ds5bg 2 месяца назад

    Can someone suggest a way/tool with which I can train an open-source model on custom data? For example with a PDF? And then ask questions about the given data?

  • @TheBlaser55
    @TheBlaser55 2 месяца назад

    Matthew, it would be great if there were a reference listing of all your videos so we could just pick a topic, find the videos that apply, and watch them again..... I think you did a video with agents before that did something similar, I just wish they were easier to find.

  • @imacuser101
    @imacuser101 2 месяца назад +1

    Use AutoGen to create a series of agents with their Agent Builder to run a task.
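
    A rough sketch of that setup using AutoGen's basic two-agent pattern (not the Agent Builder specifically), pointed at a model served by LM Studio's local server; the model name, port, and task are assumptions:

    ```python
    # Two AutoGen agents backed by LM Studio's OpenAI-compatible local endpoint.
    # The model name, port, and task below are placeholders/assumptions.
    import autogen

    config_list = [{
        "model": "mistral-7b-instruct",          # whatever model LM Studio is serving
        "base_url": "http://localhost:1234/v1",  # LM Studio's default server address
        "api_key": "lm-studio",                  # any non-empty string works locally
    }]

    assistant = autogen.AssistantAgent(
        name="assistant",
        llm_config={"config_list": config_list},
    )
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        code_execution_config={"work_dir": "scratch", "use_docker": False},
    )

    user_proxy.initiate_chat(
        assistant,
        message="Write and run a Python script that prints the first 10 Fibonacci numbers.",
    )
    ```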

  • @mydogsbutler
    @mydogsbutler 2 месяца назад +1

    One thing which would be useful: how to sync models between LM Studio and Ollama (to avoid duplication and save space).
    LM Studio's default local models folder in Windows is: C:\Users\User\.cache\lm-studio\models
    When I point it to Ollama's in Windows 11 WSL2 Ubuntu 22.04 it doesn't work ( \\wsl.localhost\Ubuntu-22.04\|usr\share\ollama\.ollama\models)
    Anyone know the answer? Is it even possible?

  • @00111000
    @00111000 2 месяца назад

    In terms of being lightweight, how does this compare to Ollama?

  • @jackiekerouac2090
    @jackiekerouac2090 2 месяца назад

    Would that version be good for a professional translator from English to Spanish?

  • @berer.
    @berer. 2 месяца назад +2

    Can you install Grok from the download provided by Elon Musk? Or do you always have to use their dl link?

  • @timduck8506
    @timduck8506 Месяц назад

    Hmm, so which is better for a newbie VS Code dev: LM Studio or Ollama?

  • @JoaoWilliamRodriguesCardoso
    @JoaoWilliamRodriguesCardoso 2 месяца назад

    Is there any app similar to LM Studio for mobile phones? I would love to run these open-source LLMs on my phone.

  • @robottalks7312
    @robottalks7312 2 месяца назад

    CrewAI with Claude 3, is it possible? A comparison with Devin?

  • @ErickJohnson-qx8tb
    @ErickJohnson-qx8tb 2 месяца назад

    You need to do a troubleshooting video: when the download folder gets moved it ruins everything, and I can't seem to realign the path.

  • @Brax1982
    @Brax1982 Месяц назад

    I don't think that's how the compatibility filter works. I checked for a couple models that are in a repo with tensor files. They did not show up if that filter was active. I guess they include "does not work on LM Studio" as being not compatible with your system. Because it isn't compatible with their tool? At first I thought that it's not true that every model on HF could be found on there. But for those that were missing, I then saw that they show up if that filter is off. Of course, this is a sample size of trying 3 models or something. May be outliers. But do any models that are not GGUF work on LM Studio?

  • @jamesnaftalin6103
    @jamesnaftalin6103 2 месяца назад

    I have quite an old machine with not much of a GPU, can you make LM Studio run in the cloud?

  • @phobes
    @phobes 2 месяца назад

    I like LM Studio but I've had an issue with every single model hallucinating when I try to use the built-in server.

  • @duckpear2442
    @duckpear2442 2 месяца назад +1

    2 questions: (1) do these models run offline (good for private data?); and (2) is LMStudio better than ollama?

  • @ricardocnn
    @ricardocnn 2 месяца назад

    It also integrates with LangChain and LlamaIndex.
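
    For the LangChain side, a minimal sketch assuming LM Studio's OpenAI-compatible server is running on its default port; the model name is a placeholder:

    ```python
    # Point LangChain's ChatOpenAI wrapper at LM Studio's local server.
    # base_url, api_key, and model below are assumptions/placeholders.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        base_url="http://localhost:1234/v1",  # LM Studio local server
        api_key="lm-studio",                  # any non-empty string
        model="mistral-7b-instruct",
    )

    reply = llm.invoke("Explain GGUF quantization in two sentences.")
    print(reply.content)
    ```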

  • @farorasyid1832
    @farorasyid1832 2 месяца назад

    Hi, what common computer specs do I need for this? Thanks.

  • @DaveEtchells
    @DaveEtchells Месяц назад

    Amazing to me how little RAM the models use.
    Time for me to get a maxed M3 Max MBP I guess, although I'm gonna wait till after the May 7 event JIC. (I know it's very unlikely to have any MBP hardware impact, but I'm cautious 🙂)

  • @dreamyrhodes
    @dreamyrhodes 2 месяца назад

    Wow, finally a UI with documentation? I always hated that they throw all that GGUF, GPTQ, Q4/Q5, _K_M/_K_S stuff at you without ever telling you what it means and what it needs to run.

  • @Parisneo
    @Parisneo 2 месяца назад

    LM Studio is a nice project. The only complaint I have is that it is not open source.
    People can use LM Studio along with lollms, since it can be run as a server, and they seem to get very good output. So yeah, this is a very cool and useful tool.

  • @u-save5989
    @u-save5989 2 месяца назад

    How do I set it up on a server so I can sell access to chats with trained agents, like GPTs in the OpenAI shop? What GPU, RAM, and CPU are needed for 5k daily users, and if they're concurrent, how do I calculate this without crashing the server? Also: Grok, is it supported?

  • @alexlavertyau
    @alexlavertyau 2 месяца назад

    Does LMStudio support any kind of image generation, like Midjourney or ChatGPT Plus?

    • @EM-yc8tv
      @EM-yc8tv 2 месяца назад +1

      It's got multimodal image interpretation ability, with a Vision Adapter. For image generation, AFAIK LM Studio doesn't do that.... ComfyUI and Automatic1111 are what you'd be looking for to use all the Stable Diffusion models.

  • @cucciolo182
    @cucciolo182 2 месяца назад

    Do you recommend any tool to run "text-to-image LLM, NSFW" locally?

  • @yuzual9506
    @yuzual9506 2 месяца назад

    I want to build an agent team. First I transcribe an audio file with Whisper large-v3, then the next agent can do a PowerPoint presentation. The next one can write a macro for PowerPoint, and the next one can implement it and generate a full PowerPoint presentation with a template I have selected! What do you think?
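
    A rough sketch of how those steps might chain together (without the agent-framework layer), assuming openai-whisper, python-pptx, and an LM Studio server with JSON output on its default port; the model name, prompt, and file names are placeholders:

    ```python
    # Whisper transcribes the audio, a local LLM drafts a slide outline as JSON,
    # and python-pptx builds the deck. All names below are placeholders.
    import json

    import whisper
    from openai import OpenAI
    from pptx import Presentation

    # 1) Transcribe the audio with Whisper large-v3.
    transcript = whisper.load_model("large-v3").transcribe("talk.mp3")["text"]

    # 2) Ask a model served by LM Studio for a slide outline as JSON.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    outline = json.loads(client.chat.completions.create(
        model="mistral-7b-instruct",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content":
                   'Return JSON like {"slides": [{"title": str, "bullets": [str]}]} '
                   "summarizing this transcript:\n" + transcript}],
    ).choices[0].message.content)

    # 3) Build the .pptx (blank template here; swap in your own via Presentation("my_template.pptx")).
    prs = Presentation()
    for spec in outline["slides"]:
        slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
        slide.shapes.title.text = spec["title"]
        slide.placeholders[1].text = "\n".join(spec["bullets"])
    prs.save("talk_summary.pptx")
    ```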

  • @GetzAI
    @GetzAI 2 месяца назад

    Matthew, you should conduct Mac vs PC local LLM comparisons. M2, M3, RTX 4090. MBP vs Studio.

  • @aimademerich
    @aimademerich 2 месяца назад

    Phenomenal