How To Run Open-Source LLMs (e.g. Llama2) Locally with LM Studio!

  • Published: 3 Nov 2024

Comments • 25

  • @amortalbeing
    @amortalbeing 1 year ago +2

    thanks a lot for sharing this with us

  • @shiccup
    @shiccup 1 year ago +4

    great information! glad you are uploading this! but I would really appreciate it if you upgraded your mic :3

  • @SAVONASOTTERRANEASEGRETA
    @SAVONASOTTERRANEASEGRETA 1 year ago

    sorry, I did not understand which file needs to be modified for the external server or where I should look in the LM Studio folder. thank you :-)

  • @trobinsun9851
    @trobinsun9851 1 year ago +1

    Thanks! Do you know how to connect it to your own data?

    • @Derick99
      @Derick99 11 months ago

      PrivateGPT

  • @RuturajPatki
    @RuturajPatki 1 year ago

    Please share your laptop specifications. Mine works so slow...

  • @ayushrajjaiswal2646
    @ayushrajjaiswal2646 1 year ago +1

    how can I use the LM Studio server as a drop-in replacement for the OpenAI API?
    please make a video on this ASAP

    • @nosult3220
      @nosult3220 1 year ago

      Here ya go:

      import openai  # legacy openai<1.0 SDK interface

      openai.api_key = 'your-actual-api-key'  # placeholder; the local server typically ignores it
      openai.api_base = 'http://localhost:1234/v1'  # LM Studio's local server endpoint

      PROMPT_TEMPLATE = "Summarize what LM Studio does in one sentence."  # example prompt

      mistral_response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",  # name is ignored; LM Studio serves whichever model is loaded
          messages=[
              {"role": "user", "content": PROMPT_TEMPLATE},
          ],
          stop=["[/INST]"],
          temperature=0.75,
          max_tokens=-1,  # -1 means no fixed completion limit
          stream=False,
      )
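
      For anyone on the newer openai Python SDK (v1+), a minimal equivalent sketch (the base_url matches LM Studio's default local server; the api_key value is just a placeholder):

      from openai import OpenAI  # openai>=1.0 client

      # Point the client at the local LM Studio server instead of api.openai.com.
      client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

      response = client.chat.completions.create(
          model="local-model",  # informational; LM Studio serves whichever model is loaded
          messages=[{"role": "user", "content": "Say hello from a local model."}],
          temperature=0.75,
      )
      print(response.choices[0].message.content)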

  • @tufeeapp9393
    @tufeeapp9393 1 year ago

    when I tried to download a model and then run it, LM Studio wouldn't load the model. can you please help me with that?

  • @vitalis
    @vitalis 1 year ago +3

    what's the difference between this and oobabooga?

    • @ClubMusicLive
      @ClubMusicLive 1 year ago

      I also would like to know

    • @scienceineverydaylife3596
      @scienceineverydaylife3596 8 months ago

      I haven't played around with Oobabooga, but it looks like it has similar functionality (although I didn't see a .exe installer for Oobabooga). In my experience with LM Studio vs other similar offerings, LM Studio was the best by far: book.premai.io/state-of-open-source-ai/desktop-apps/

  • @MahadevanIyer
    @MahadevanIyer 10 months ago

    can you run a 7B model on a normal i5 laptop with 16 GB DDR5 and no GPU?

  • @shiccup
    @shiccup 1 year ago

    how do i change the chatgpt url with this?

  • @amortalbeing
    @amortalbeing 1 year ago

    what are the requirements for 30B and larger models when quantized? how much VRAM or system RAM is needed?

    • @scienceineverydaylife3596
      @scienceineverydaylife3596 1 year ago

      It depends on the type of quantization. A rule of thumb for 8-bit quantization is one byte per parameter, i.e. a 30B-parameter model at 8-bit needs roughly 30 GB of RAM (preferably GPU VRAM); see the sketch below.
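
      A back-of-the-envelope sketch of that arithmetic in Python (weights only; the KV cache and runtime overhead add several more GB, and the helper name is just illustrative):

      # Rough memory estimate for a quantized model's weights.
      # Assumes weights dominate memory use; KV cache and runtime overhead are extra.
      def estimate_weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
          bytes_per_param = bits_per_param / 8
          return params_billions * bytes_per_param  # 1e9 params * bytes/param ~= GB

      print(estimate_weight_memory_gb(30, 8))  # ~30 GB at 8-bit
      print(estimate_weight_memory_gb(30, 4))  # ~15 GB at 4-bit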

    • @Derick99
      @Derick99 11 months ago

      @scienceineverydaylife3596 what would something like 128 GB of RAM but an 8 GB GPU (3070) do, compared to having multiple graphics cards?

    • @scienceineverydaylife3596
      @scienceineverydaylife3596 10 months ago

      @Derick99 You would probably need to figure out whether LM Studio can communicate with multiple GPUs. I know packages like Hugging Face Accelerate can handle multi-GPU configurations quite seamlessly; see the sketch below.
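
      A minimal sketch of that multi-GPU / CPU-offload pattern using the transformers and accelerate libraries rather than LM Studio itself (the model id is only an example; device_map="auto" shards layers across available GPUs and spills the rest to system RAM):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model id

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      # device_map="auto" lets accelerate place layers on every visible GPU
      # (e.g. an 8 GB 3070) and offload the remainder to CPU RAM (e.g. 128 GB).
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          device_map="auto",
          torch_dtype=torch.float16,
      )

      inputs = tokenizer("Hello from a sharded model!", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))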

  • @DihelsonMendonca
    @DihelsonMendonca 1 year ago +1

    Your microphone is terrible, but the video is great. Thanks. 🎉🎉❤

  • @resourceserp
    @resourceserp 1 year ago

    what's the memory requirement on Windows 10? can it run in CPU mode?

  • @WillyKusimba
    @WillyKusimba 1 year ago

    Is the SaaS 100% free?