Local AI Coding in VS Code: Installing Llama 3 with continue.dev & Ollama

  • Published: Oct 18, 2024

Comments • 40

  • @eckhardEhm
    @eckhardEhm 4 months ago +1

    Nice clip Jan, thanks, it's working like a charm.
    The hardest part was getting rid of the Amazon AWS stuff in VS Code that kept installing Amazon Q and stealing the code completion ^^.

    • @iamjankoch
      @iamjankoch  4 months ago +1

      I'm glad I didn't have AWS connected to VS Code in that case :D Glad it worked well for you!

  • @HowardGil
    @HowardGil 4 months ago +1

    Dope. I'm going on an RV trip for two weeks so will have spotty service but can still ship 🚢

    • @iamjankoch
      @iamjankoch  4 months ago

      Sounds awesome, enjoy the trip!

  • @codelinx
    @codelinx 1 month ago

    This is awesome. Currently using Codeium, but I'll install this later and give it a go.

    • @iamjankoch
      @iamjankoch  1 month ago

      @@codelinx let me know how it goes 💪🤖

  • @jasonp3484
    @jasonp3484 4 months ago +1

    Great video, my friend. Worked like a charm. Thank you very much!

    • @iamjankoch
      @iamjankoch  4 months ago

      Glad to hear that, happy coding!

  • @Ilan-Aviv
    @Ilan-Aviv 1 month ago

    Simple and great explanation! Thank you.

    • @iamjankoch
      @iamjankoch  1 month ago +1

      @@Ilan-Aviv you bet, glad the tutorial was useful! What are you building with AI?

    • @Ilan-Aviv
      @Ilan-Aviv 1 month ago

      @@iamjankoch Building a code assistant agent to work inside the VSC editor, to help me with other projects.
      Actually, I'd like to ask for your advice:
      I have a trading bot written in Node.js/React. The project was written by another programmer, and I'm struggling with developing some parts of it.
      I'd like a useful AI assistant to help me find bugs and understand the app structure.
      The app runs a server with a browser client and has about 130 files.
      I tried OpenAI GPT, but that's too many files for it and it loses the context, on top of the other issues it has.
      I came to the conclusion that the best way is to run a local LLM on my machine.
      If you have any recommendations for the right AI assistant you would use, I'd appreciate your advice.🙏

  • @caiohrgm22
    @caiohrgm22 3 months ago +1

    Great video! It really helped me set things up!!

    • @iamjankoch
      @iamjankoch  3 months ago

      @@caiohrgm22 glad to hear that!!!

  • @scornwell100
    @scornwell100 1 month ago +1

    Doesn't work for me. I can do it from the command line, but the Continue plugin doesn't seem to work at all. I did all the configuration and it responds with nothing.

    • @iamjankoch
      @iamjankoch  1 month ago

      @@scornwell100 did you check the issues listed in their GitHub repo? github.com/continuedev/continue
      You can also join their Discord server to get more detailed help: discord.gg/vapESyrFmJ
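      If Ollama itself responds on the command line, the usual suspect is the model entry in Continue's config.json. A rough sketch of what to double-check (field names follow Continue's documented Ollama provider and may differ slightly between versions; the apiBase line is only needed if Ollama isn't running on its default port):

        {
          "models": [
            {
              "title": "Llama 3",
              "provider": "ollama",
              "model": "llama3",
              "apiBase": "http://localhost:11434"
            }
          ]
        }

      Also make sure ollama serve is running and that the model name matches one you've actually pulled (ollama list shows them).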

  • @dzimoremusic5515
    @dzimoremusic5515 6 days ago

    Thank you very much ... stay healthy, bro

  • @tanercoder1915
    @tanercoder1915 1 month ago

    Great explanation

    • @iamjankoch
      @iamjankoch  1 month ago

      @@tanercoder1915 thank you!

  • @PedrinbeepOriginal
    @PedrinbeepOriginal 5 months ago

    Thank you! I was looking for this. What are the specs of your Mac?

    • @iamjankoch
      @iamjankoch  5 months ago +1

      Glad you enjoyed the video! It’s a 2023 MacBook Pro with M2 Pro and 16 GB RAM

    • @PedrinbeepOriginal
      @PedrinbeepOriginal 5 months ago +1

      @@iamjankoch Nice, thank you for the fast answer. I was wondering if I'd need an M3/M2 Max with a lot of RAM to load Llama 3 on a MacBook.

    • @iamjankoch
      @iamjankoch  4 months ago

      @@PedrinbeepOriginal Not really. Granted, I don't do much other heavy work when I'm coding, but it runs super smoothly with the 16 GB. The only time I wish I had more RAM is when doing video editing lol

  • @rodrigoaaronmartineztellez3572
    @rodrigoaaronmartineztellez3572 15 days ago

    What about using an RTX 4060 Ti with 16 GB VRAM and a Ryzen 9 5950X, could that work well?

    • @iamjankoch
      @iamjankoch  8 days ago

      @@rodrigoaaronmartineztellez3572 yes, that should handle Ollama quite well

  • @JaiRaj26
    @JaiRaj26 3 months ago

    Is 8 GB RAM sufficient? I have enough storage, but when I try to use this after installing, it just doesn't work. It keeps loading.

    • @iamjankoch
      @iamjankoch  3 months ago

      I run it on 16 GB. The processor and GPU are quite important for Ollama as well.
      16 GB RAM is recommended: github.com/open-webui/open-webui/discussions/736#

    • @scornwell100
      @scornwell100 1 month ago

      I run it with 11 GB of VRAM from the command line and it seems fine, but inside VS Code I can't get it to respond; it throws errors saying the stream is not readable.

  • @ctopedja
    @ctopedja 8 days ago

    I wish you had posted your full config.json on Pastebin or somewhere, since the default config is nowhere near what you show, and the guide is useless without the full setup.

    • @iamjankoch
      @iamjankoch  5 days ago

      Here you go: gist.github.com/jan-koch/9e4ea0a9e0c049fe4e169d6a5c1e8b74
      Hope this helps
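      For anyone who can't open the gist, a minimal Ollama-based config.json looks roughly like this (an illustrative sketch, not the gist's exact contents; the titles and model tag are examples, and the tabAutocompleteModel key follows Continue's docs but may vary by version):

        {
          "models": [
            {
              "title": "Llama 3 8B",
              "provider": "ollama",
              "model": "llama3"
            }
          ],
          "tabAutocompleteModel": {
            "title": "Llama 3 8B",
            "provider": "ollama",
            "model": "llama3"
          }
        }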

  • @2ru2pacFan
    @2ru2pacFan 3 months ago

    Thank you so much!

    • @iamjankoch
      @iamjankoch  3 months ago

      @@2ru2pacFan glad you enjoyed the tutorial!

  • @almaoX
    @almaoX 4 months ago

    Thanks a lot! =)

  • @user-13853jxjdd
    @user-13853jxjdd 2 months ago +1

    luv u bro

    • @iamjankoch
      @iamjankoch  2 months ago

      @@user-13853jxjdd glad you enjoyed the video!

  • @superfreiheit1
    @superfreiheit1 19 days ago

    The code area is too small, I can't see it.

  • @josersleal
    @josersleal 4 days ago

    You forgot to mention how much you need to pay for a computer that can handle this, otherwise it will not even start, or it will take literally ages to do anything, or you have to use the smallest LLMs that can't do s**t. It's all hype to get money into OpenAI and others like it.

    • @iamjankoch
      @iamjankoch  4 days ago

      @@josersleal I have an M2 Pro MacBook Pro with 16GB, for reference