Boost Productivity with FREE AI in VSCode (Llama 3 Copilot)

  • Published: 30 May 2024
  • 🚀 Dive into the future of coding with our detailed guide on integrating Llama 3 into your Visual Studio Code setup! In this video, we walk you through downloading and setting up Llama 3 locally to create a private co-pilot, enhancing your coding efficiency. Learn how to automate code writing, refactoring, and error fixing to boost productivity and code quality dramatically.
    👉 What you'll learn:
    Download and install Llama 3 and Code GPT on VS Code.
    Configure your AI co-pilot for optimal coding support.
    Generate and refactor code effortlessly.
    Connect your code to a SQL database with just a few commands.
    🎯 Why Watch This?
    Enhance your programming skills with AI tools.
    Speed up your coding projects and reduce errors.
    Learn to set up and use one of the most powerful coding tools available.
    📌 Don't forget to:
    Subscribe for more videos on Artificial Intelligence and coding.
    Like this video if you find it helpful, and share it with fellow coders.
    Comment below with any questions or what you'd like to see next!
    🔗 Resources:
    Sponsor a Video: mer.vin/contact/
    Do a Demo of Your Product: mer.vin/contact/
    Patreon: / mervinpraison
    Ko-fi: ko-fi.com/mervinpraison
    Discord: / discord
    Twitter / X : / mervinpraison
    Timestamps:
    0:00 - Introduction to Llama 3 and VS Code integration
    1:00 - Downloading and setting up Llama 3
    2:24 - Configuring AI co-pilot settings
    3:22 - Writing and running your first AI script
    5:00 - Debugging and documentation tips
    #VSCode #Free #Copilot
    #VSCodeCopilot #VisualStudioCode #VsCode #GithubCopilot #AI #AICoding #GithubCopilotTutorial #GithubCopilotVSCode #LocalCopilot #PrivateCopilot #CodeCopilot #LlamaCopilot #OllamaCopilot #FreeCopilot #FreeVSCodeCopilot #LocalVSCodeCopilot #PrivateVSCodeCopilot #Llama3Copilot #Llama3Code #CodeLlama3 #Llama3VSCode #VSCodeLlama3 #VSCodeExtension #VSCodeExtensionLlama #VSCodeExtensionLlama3
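The steps listed above can be sketched as a terminal session. This is a minimal sketch, assuming Ollama is installed from ollama.com and the CodeGPT extension is added in VS Code; the model tags are the ones named in the video.

```shell
# Pull the models used in the video, then start the local API server.
if command -v ollama >/dev/null 2>&1; then
    ollama pull llama3:8b          # 8B base model
    ollama pull llama3:instruct    # instruct-tuned variant for chat
    ollama serve &                 # serves the API for the editor extension
else
    echo "Install Ollama first: https://ollama.com"
fi

# CodeGPT's Ollama provider talks to the server's default local endpoint:
OLLAMA_URL="http://localhost:11434"
# In VS Code: open the CodeGPT sidebar, pick "Ollama" as the provider,
# then select llama3:8b as the model.
```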
  • Hobby

Comments • 58

  • @jeffwads
    @jeffwads 1 month ago +4

    Very impressed with the 8B Llama 3 with regard to coding. Amazing how much progress they have made.

  • @kannansingaravelu
    @kannansingaravelu 10 days ago +1

    Do we need both llama3:8b and llama3:instruct? Can we not work with instruct alone? Also, I see your code runs faster; could you share your PC/system specs and config? It takes a good amount of time on my 2017 iMac.

  • @joseeduardobolisfortes
    @joseeduardobolisfortes 1 month ago +4

    Very good tutorial. You don't speak about the platform; can I assume it will work on both Windows and Linux? Another thing: what's the recommended hardware configuration to install Llama 3 locally on our computers?

  • @Fonzleberry
    @Fonzleberry 1 month ago +1

    Any ideas about how this works on large scripts? What's the context length?
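On the context-length question: Llama 3 was trained with an 8192-token context window, but Ollama's default per-request context (num_ctx) is smaller, so long scripts can get silently truncated unless you raise it. A minimal sketch using Ollama's documented Modelfile syntax; the llama3-8k name is made up for illustration.

```shell
# Build a model variant with a larger context window. num_ctx is Ollama's
# documented context-size parameter; llama3-8k is a placeholder name.
cat > Modelfile <<'EOF'
FROM llama3:8b
PARAMETER num_ctx 8192
EOF
# Then (requires Ollama installed):
#   ollama create llama3-8k -f Modelfile
#   ollama run llama3-8k
```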

  • @m12652
    @m12652 1 month ago +1

    The buttons don't do anything... note I'm working offline. The four buttons at the bottom of the add-in panel just copy the code to the chat window. They don't do anything else, and once clicked, the AI stops responding to questions. When I asked it what was wrong with "Explain selected code", the AI responded "nothing, it's only meant to copy the code". Anyone know if this is broken for me, or is it simply an incomplete add-in?

  • @m12652
    @m12652 1 month ago +1

    Does CodeGPT require me to be logged in? I'm all set up, but if I ask it to explain something it just says "Something went wrong! Try again." Then I have to either quit and restart VS Code or disable and re-enable the extension...

  • @martin22336
    @martin22336 1 month ago +4

    I am using this and it's insane 😮 I think full-stack developers will not like their future, holy crap.

  • @thebudaxcorporate9763
    @thebudaxcorporate9763 1 month ago +1

    Waiting for an implementation on Streamlit; keep it up, bro

  • @bhanujinaidu
    @bhanujinaidu 1 month ago +1

    Super video, thanks.

  • @ErfanKarimi-ep7ie
    @ErfanKarimi-ep7ie 2 hours ago

    Guys, I installed it according to the vid, but I can't run the AI. I saw somewhere that I need to put it in PATH, but I don't know where the files are installed.
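For the PATH question above, a generic sketch; the directory below is a placeholder, since the install location varies by OS (the Linux install script typically links the binary into /usr/local/bin, and the Windows installer usually updates PATH itself).

```shell
# Locate the binary first (e.g. `which ollama` on macOS/Linux), then append
# its directory to PATH. "/path/to/ollama-dir" is a placeholder.
export PATH="$PATH:/path/to/ollama-dir"
command -v ollama >/dev/null 2>&1 || echo "ollama still not on PATH - check the install directory"
```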

  • @prestonmccauley43
    @prestonmccauley43 1 month ago

    This was a great quick lesson. One thing I was curious whether anyone has figured out: I often need to refer to very new API documents. Has anyone tied this into a RAG structure, so we are always looking at the latest documentation?

  • @G3TG0T
    @G3TG0T 1 month ago +3

    Great video! How do I connect this to my own local Ollama server running on my machine?

  • @m12652
    @m12652 1 month ago +1

    Nice one 👍

  • @yagoa
    @yagoa 1 month ago +1

    How do I use another computer running Ollama on my LAN?

  • @ah89971
    @ah89971 1 month ago +1

    Thanks, can you make a video on Pythagora using Llama 3?

  • @harikantipudi8668
    @harikantipudi8668 29 days ago

    Latency is pretty bad when I'm using llama3:70b in VS Code with CodeGPT. I am on Windows. I guess it's the underlying machine. Can anything be done here?

  • @dorrakallel5303
    @dorrakallel5303 1 month ago +1

    Thank you so much for this video. Is it open source? Can we find the weight files and use them?

  • @MosheRecanati
    @MosheRecanati 1 month ago

    Any option to use it with the IntelliJ IDE?

  • @red_onex--x808
    @red_onex--x808 1 month ago +1

    awesome info

  • @Techonsapevole
    @Techonsapevole 1 month ago +2

    Very nice, and it's just an 8B-parameter model

  •  1 month ago +2

    Thanks for sharing.
    I host the Ollama server on a remote machine. How do I make it connect to the remote machine instead of localhost?

    • @talniya
      @talniya 1 month ago

      Reply here if you find this
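For the remote-server question in this thread, a sketch based on Ollama's documented OLLAMA_HOST variable; the server address is left as a placeholder.

```shell
# Server side: bind the API to all interfaces instead of 127.0.0.1 only,
# then start (or restart) the server with `ollama serve`.
export OLLAMA_HOST=0.0.0.0:11434

# Client side: point CodeGPT (or a raw API call) at the server instead of
# localhost, e.g.:
#   curl http://<server-ip>:11434/api/generate \
#     -d '{"model": "llama3:8b", "prompt": "hello", "stream": false}'
```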

  • @alinciocan5358
    @alinciocan5358 1 month ago +2

    Does it slow down my laptop if I run it locally? Would I be better off running Haiku in the cloud? What would you recommend? I'm just getting into code.

    • @LifeAsARecoveringPansy
      @LifeAsARecoveringPansy 1 month ago +1

      I have 8 GB of VRAM, and when autocomplete is on for the Cody AI copilot, the fans on my laptop turn on full blast. I have 64 GB of RAM so it doesn't slow my PC down, but if it was running on your CPU and not your GPU it might slow your computer down. I don't think it will slow your computer down if you have enough VRAM or a ton of RAM, but it could, depending on your computer's specs.
      There is also an extension called "Groqopilot" for VS Code that requires you to supply a Groq API key; with that, it will create code for you lightning fast with llama3 70b, which is of course a better model than llama3 8b. It doesn't autocomplete, but it behaves very much like the tutorial we just watched.

  • @MaorAviad
    @MaorAviad 1 month ago +1

    Amazing content! Maybe you could create a long video where you use this to build a full-stack application.

  • @srinivasyadav7448
    @srinivasyadav7448 1 month ago

    Does it work for React Native code?

  • @iukeay
    @iukeay 21 days ago

    This would be amazing if the code for a workspace was stored in a vector store

  • @JohnSmith762A11B
    @JohnSmith762A11B 1 month ago +2

    Excellent and useful tutorial! 👍

  • @programmertelo
    @programmertelo 19 days ago

    amazing

  • @m12652
    @m12652 1 month ago +1

    This app looks like a good idea, but it's a long, long way from finished. The buttons (refactor, explain, document, and fix bug in selected code) don't do anything but copy the selected code to the chat. If you use the clear button, it clears the selected model etc. but not the history. I just asked it to write a basic API call for SvelteKit, and it wrote pure garbage based on assuming the previous selection was part of the current question. I'm using a 2019 MBP with 32 GB RAM, and it's too slow to add any value so far... for me at least

  • @SelfImprovementJourney92
    @SelfImprovementJourney92 1 month ago +1

    Can I use it to write any code? I am a beginner; I don't know anything about coding, just starting from zero.

    • @MervinPraison
      @MervinPraison  1 month ago

      Yes, you can write most popular programming languages.

    • @konstantinrebrov675
      @konstantinrebrov675 1 month ago

      You would need to know at least the basics of coding and how an application is designed and structured. This writes the code for you, but if you cannot read the code, or at least understand what it's doing at a high level, then it's too early for you. It gives you 2/3 of the finished product; you just need to know how to integrate that code into your application. You need to know how to create an application, what the different parts of an application are, and how to deploy and run an application.

  • @djshiva
    @djshiva 25 days ago +1

    This is amazing! Thanks so much for this tutorial!

  • @m12652
    @m12652 1 month ago +2

    Does anyone else get the feeling that the way AIs answer questions is based on the old Microsoft "Clippy" assistant... annoyingly eager, and can't answer anything much without wrapping it in a paragraph or so of irrelevance. Very annoying to get 6- or 7-line answers where the only relevant bits are a number or a few words.

    • @Fonzleberry
      @Fonzleberry 1 month ago

      If you're using ChatGPT you can change that in settings. I think in things like Ollama you can also change your settings so that it gets straight to the point.

    • @m12652
      @m12652 1 month ago +1

      @@Fonzleberry I know, thanks... just haven't had much luck, though, lol. At one point I got fed up and added an instruction to "only answer boolean questions with a yes or a no"; I had to restart the model (BakLLaVA) to get it to start answering properly again, as it answered all questions with "yes" or "no". I don't get why the default mode is to bury all answers in information not requested. I guess someone redefined the word "conversational". Can't even ask what's 2+2 without an explanation lol

    • @Fonzleberry
      @Fonzleberry 1 month ago

      @@m12652 It will improve with time and use cases. A model fine-tuned on Meta's Messenger/WhatsApp data would have a very different feel.

  • @McAko
    @McAko 3 days ago

    I prefer to use the Continue plugin

  • @sillybilly346
    @sillybilly346 1 month ago +1

    It only gives the option for codellama and not llama3:instruct; please help

    • @AlexMelemenidis
      @AlexMelemenidis 1 month ago

      I have the same issue. In the CodeGPT menu I only see the options "llama3:8b" and "llama3:70b", but not "llama3:latest" or "llama3:instruct", even though I have them available (when I go to a command line and run ollama list). When I select llama3:8b and enter a prompt, nothing happens. When I choose another model I have installed, like "mistral", it works just fine...

    • @AlexMelemenidis
      @AlexMelemenidis 1 month ago

      Ah okay, so it seems to be the name, and CodeGPT has a set list of compatible model names? I did another "ollama pull llama3:8b" and now it works.

    • @sillybilly346
      @sillybilly346 1 month ago +1

      @@AlexMelemenidis Yes, same here, thanks

  • @abhijeetvaidya1638
    @abhijeetvaidya1638 12 days ago

    Why not use CodeLlama?

  • @haricharanvalleru4411
    @haricharanvalleru4411 1 month ago +1

    very helpful tutorial

  • @amitkumarsingh4489
    @amitkumarsingh4489 1 month ago

    Could not see that screen at 2:17 in my VS Code

  • @beratyilmaz7951
    @beratyilmaz7951 1 month ago

    Try the Codeium extension

  • @mafaromapiye539
    @mafaromapiye539 1 month ago

    AI technologies are making things easier, as they boost one's vast human general intelligence capabilities...

  • @rmnilin
    @rmnilin 1 month ago +14

    SPOILER ALERT: this is not amazing, but you'll be able to make scrambled eggs on your laptop while it writes you a CRUD service that doesn't actually work

  • @Mr76Pontiac
    @Mr76Pontiac 1 month ago

    I'm really not impressed with Llama 3 8B. I decided to skip Python and go to Pascal. I asked it to create a tic-tac-toe game and have had nothing but problems. It CONSTANTLY forgets that Pascal requires declarations and leaves out the variable definitions, especially the loop variables. When I asked it to revisit, this last time it decided to rewrite the function that draws the board using console.log instead of writeln. I mean, it rewrote the WHOLE function to be completely useless.
    I tried running the 70B, but the engine just kept prioritizing my GTX 970 over my RTX 3070. The documentation on the site, as well as the GitHub repo, just doesn't explain well enough how to control which device the engine should use.
    I could pull the 970 out, but, meh.
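On the multi-GPU issue above: Ollama's GPU documentation describes using CUDA_VISIBLE_DEVICES to restrict which NVIDIA card the server sees, so a sketch (the device index is an assumption; ordering varies per system, check with nvidia-smi -L first):

```shell
# Hide the older card so the server only sees the faster one. The index "1"
# is an assumption - confirm your GPU ordering with `nvidia-smi -L`.
export CUDA_VISIBLE_DEVICES=1
# Then restart the server so it re-detects devices:
#   ollama serve
```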

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago

    All I can see is: "Something went wrong, try again."

  • @JohnDoe-ie3ll
    @JohnDoe-ie3ll 7 days ago

    Why are you guys using third-party plugins that have limits and then claiming it's free? Would be nice to see one that doesn't require that.

  • @krishnak3532
    @krishnak3532 1 month ago +1

    If I run llama3 locally with Ollama, does it require a GPU for faster performance? @mervin