LM Studio 2.0: Image Analysis Revolution with AI Unleashed

  • Published: 28 May 2024
  • 👋 Welcome to an all-encompassing guide on LM Studio! In today's video, I'm thrilled to walk you through the steps to download, install, and operate large language models locally on your computer, ensuring 100% data privacy. Whether you're a beginner or looking to explore advanced features like running vision models, this tutorial has got you covered! 🎥✨
    🔹 What You Will Learn:
    Download & Install LM Studio: Learn how to seamlessly set up LM Studio on your system.
    Operate Large Models Locally: Dive into how to run models like Llama 3 8B locally for utmost privacy.
    Run Multiple Models: Discover how to operate various models simultaneously without any hassle.
    Server Integration: Integrate AI models with your applications using simple steps.
    Vision Models: Explore the multimodal model LLaVA, which can describe images you upload.
    👉 Don’t forget to subscribe and click the bell icon to stay updated on our latest videos. Hit like and share this video to help others learn about these fantastic capabilities!
    🔗 Resources:
    Patreon: / mervinpraison
    Ko-fi: ko-fi.com/mervinpraison
    Discord: / discord
    Twitter / X : / mervinpraison
    Code: mer.vin/2024/04/lm-studio-2-0/
    Timestamps:
    0:00 - Introduction to LM Studio
    0:18 - Downloading and Installing LM Studio
    0:24 - Running Your First Model Locally
    0:33 - Step-by-Step Setup for Multiple Models
    1:00 - Server Setup and Application Integration
    1:05 - Running a Multimodal Vision Model
    1:46 - Integrating Models with Python Applications
    2:59 - Final Thoughts and Wrap-up
    #LMStudio2 #Local #AI #LMStudio #LMStudioTutorial #ArtificialIntelligence #LLM #OpenSourceLLM #LMStudioFirstSteps #LMStudioAI #LMStudioGuide #Llama #HowToLMStudioLLM #UsingLMStudioQuickGuide #LMStudioServer #Llama3LMStudio #LMStudioInstallGuide #LMStudioLLM #PresetsInLMStudio #HowToConfigureLMStudio #LocalServerOnLMStudio #RunAILocally #OpenSourceLLMs #LLMLocalOnPC #RunLLMsLocally
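The "Server Integration" step in the video boils down to pointing an OpenAI-style client at LM Studio's local endpoint (by default `http://localhost:1234/v1`). Below is a minimal stdlib-only sketch, assuming the server is running with a model loaded; the `build_chat_payload` helper and the `local-model` name are illustrative, not from the video.

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI-compatible chat completions
# API. The base URL below is its default; change the port if you did.
BASE_URL = "http://localhost:1234/v1"

def build_chat_payload(prompt, system="You are a helpful assistant."):
    """Build an OpenAI-style chat completion request body."""
    return {
        # LM Studio answers with whichever model is loaded in the server
        # tab, so this name is effectively a placeholder.
        "model": "local-model",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

def chat(prompt):
    """Send the prompt to the local LM Studio server, return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Say hello in one short sentence."))
```

Any OpenAI-compatible client library can be used the same way by overriding its base URL; no API key is required for the local server.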

Comments • 26

  • @LifeAsARecoveringPansy
    @LifeAsARecoveringPansy 1 month ago +3

    Again, another great tutorial, short and to the point. I was just looking into whether or not phi-3 would be integrated with LLaVA like phi-2 was. It is great to know that it was.

  • @Copa20777
    @Copa20777 29 days ago +2

    Man, I watch these videos with a cup of coffee ❤

  • @RetiredVet1
    @RetiredVet1 26 days ago +1

    I enjoyed this short video. I have been having problems with LM Studio, and I figured out why: I can't run multiple models because you need a GPU. This message appears at the bottom, which makes sense, so I will stop trying. I thought it had to do with memory, and it does, but with GPU memory, which I don't have.
    Full GPU Offload (minimum est. 2.63 GB)
    In multi-model sessions, LM Studio only support full GPU offload. Ensure your system has enough VRAM to support the model.
    The second issue I had was that models would sometimes not work in chat. People would tell me that I had the GPU turned on, but when I looked at the GPU settings under Presets on the right side while chat was running, I would see no checkmark in the checkbox, yet the slider would read 8 or 10.
    I finally tried checking the checkbox and then unchecking it; the slider read 0 and things worked.
    I run on a Linux machine running Pop!_OS 20.04, an Ubuntu derivative, so I don't know if this applies to Macs or Windows. It was confusing at first, but now it makes more sense.
    I like LM Studio. It is a good way to investigate models, and now it is less frustrating to get them to work.

  • @elijahthomas7833
    @elijahthomas7833 29 days ago +1

    Happy to see those subscriber numbers going up. Well deserved!

  • @YusriSalleh
    @YusriSalleh 22 days ago

    Hi Mervin, I love your videos. Easy to understand and concise. I would love it if you could do a video on options for running a local LLM + vision locally, perhaps using an Apple M series, an Nvidia RTX GPU, or even a Jetson AGX?

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago +1

    Finally! Great! ❤

  • @duanesearsmith634
    @duanesearsmith634 1 month ago

    I am really enjoying your short and to the point videos like this one. When you are processing files for your KB for RAG or what have you, have you seen people using vision model outputs to "stand in" for images and videos in documents when they are chunked and processed? (I assume you could also do the same with audio?)

  • @ps3301
    @ps3301 29 days ago

    Can we use LLaVA with Open Interpreter?

  • @originalmagneto
    @originalmagneto 1 month ago

    I’m thinking of trying local models, but I don’t know what tools to use. Should I go with Ollama, LM Studio, or Apple MLX? I have a Mac Studio with an M1 Max and 32 GB RAM. I want to use the local model for creating agents with crewAI. The agents should be able to access the internet, scrape articles, write summaries, benchmark the articles, and then upload the best ones to a blog on WordPress, together with SEO-optimized titles, hashtags, and maybe a generated image? Would this all be possible using only local LLMs?

  • @jets115
    @jets115 1 month ago

    Can you do a video on deploying and integrating llama.cpp into a Python app?

  • @user-gr8on2pc5y
    @user-gr8on2pc5y 29 days ago

    Hi Mervin Praison,
    can you make a video of Devin AI building an automated bot that books appointments?

  •  1 month ago

    With LM Studio, is it possible to use your own data when running the server?

  • @williamwong8424
    @williamwong8424 1 month ago

    Can you tell us how to run this model online and host it via a URL, then have a password to unlock that page so that users can use it?

  • @patrickshanahan7505
    @patrickshanahan7505 1 month ago +1

    I want to develop a tool to read and analyze piano sheet music for classical pianists. Is this possible with a multimodal model using sheet music scans?

    • @ArianeQube
      @ArianeQube 1 month ago

      Yes, but you probably need to fine-tune the model a bit on some sheet music.

    • @MervinPraison
      @MervinPraison  1 month ago

      Yes, that is a great use case. I will see if I can fine-tune one if I can get some time.

    • @patrickshanahan7505
      @patrickshanahan7505 1 month ago

      @@ArianeQube I have done this initially by coding the music (rhythmic layer) as a Word document, successfully. But will it be possible to fine-tune with the visual input only?

    • @patrickshanahan7505
      @patrickshanahan7505 1 month ago

      @@MervinPraison I am convinced that this tool could be VERY lucrative. I have developed a model with at least 19 layers of musical information.
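For readers wondering what the multimodal workflow discussed in this thread looks like in practice: LM Studio's local server accepts images inside the OpenAI-style message format as base64 data URLs, so a vision model such as LLaVA can be queried about a scan. A minimal sketch; the helper names and the PNG assumption are illustrative, not from the video.

```python
import base64
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server URL

def build_vision_payload(image_bytes, question):
    """Build an OpenAI-style vision request: the image travels as a
    base64 data URL inside the user message's content parts."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

def describe(image_path, question="Describe this image."):
    """Send an image file plus a question to the local vision model."""
    with open(image_path, "rb") as f:
        payload = build_vision_payload(f.read(), question)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Whether an off-the-shelf LLaVA can read sheet music reliably is a separate question; as the replies above note, fine-tuning on notation images would likely be needed.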

  • @HyperUpscale
    @HyperUpscale 1 month ago

    I don't like LM Studio. I don't know why people are still using it...

    • @MuiOmniKing
      @MuiOmniKing 1 month ago

      As someone who develops and builds a lot, LM Studio is actually quite useful and better in my opinion. Chat-wise, maybe not, but for quick loading, testing, and building, most definitely.

    • @HyperUpscale
      @HyperUpscale 1 month ago

      @@MuiOmniKing I have a not-a-question for you. Have you heard of Ollama?

    • @ColinKealty
      @ColinKealty 1 month ago

      Convenience and simplicity.
      The biggest issue when getting people to use new tech is making it accessible and straightforward, and LM Studio provides both. I don't use it much either, but I'm a power user. Having installed and used LM Studio, I understand entirely why it's popular and loved.

    • @HyperUpscale
      @HyperUpscale 1 month ago

      @@ColinKealty Alright. I got your point.
      Let me just simplify, so there is no need to discuss it anymore: LMstudio is a toy (for kids), ollama is a tool - for men.
      Keep playing... I am busy.

    • @ColinKealty
      @ColinKealty 1 month ago +3

      Most awful take I've ever heard LOL