Boost Productivity with FREE AI in VSCode (Llama 3 Copilot)
- Published: 30 May 2024
- 🚀 Dive into the future of coding with our detailed guide on integrating Llama 3 into your Visual Studio Code setup! In this video, we walk you through downloading and setting up Llama 3 locally to create a private co-pilot, enhancing your coding efficiency. Learn how to automate code writing, refactoring, and error fixing to boost productivity and code quality dramatically.
👉 What you'll learn:
Download and install Llama 3 and Code GPT on VS Code.
Configure your AI co-pilot for optimal coding support.
Generate and refactor code effortlessly.
Connect your code to a SQL database with just a few commands.
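The download-and-install steps above can be sketched as a few terminal commands (the install-script URL, model tags, and the CodeGPT extension ID are assumptions; verify them against ollama.com and the VS Code Marketplace before running):

```shell
# Install Ollama (Linux/macOS one-liner from ollama.com; assumption -- check the site)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Llama 3 models used as the local co-pilot
ollama pull llama3:8b
ollama pull llama3:instruct

# Install the CodeGPT extension via the VS Code CLI
# (extension ID below is an assumption -- confirm it in the Marketplace)
code --install-extension danielsanmedium.dscodegpt
```

After this, select Ollama as the provider and llama3:8b as the model in the CodeGPT panel inside VS Code.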
🎯 Why Watch This?
Enhance your programming skills with AI tools.
Speed up your coding projects and reduce errors.
Learn to set up and use one of the most powerful coding tools available.
📌 Don't forget to:
Subscribe for more videos on Artificial Intelligence and coding.
Like this video if you find it helpful, and share it with fellow coders.
Comment below with any questions or what you'd like to see next!
🔗 Resources:
Sponsor a Video: mer.vin/contact/
Do a Demo of Your Product: mer.vin/contact/
Patreon: / mervinpraison
Ko-fi: ko-fi.com/mervinpraison
Discord: / discord
Twitter / X : / mervinpraison
Timestamps:
0:00 - Introduction to Llama 3 and VS Code integration
1:00 - Downloading and setting up Llama 3
2:24 - Configuring AI co-pilot settings
3:22 - Writing and running your first AI script
5:00 - Debugging and documentation tips
#VSCode #Free #Copilot
#VSCodeCopilot #VisualStudioCode #VsCode #GithubCopilot #AI #AICoding #GithubCopilotTutorial #GithubCopilotVSCode #LocalCopilot #PrivateCopilot #CodeCopilot #LlamaCopilot #OllamaCopilot #FreeCopilot #FreeVSCodeCopilot #LocalVSCodeCopilot #PrivateVSCodeCopilot #Llama3Copilot #Llama3Code #CodeLlama3 #Llama3VSCode #VSCodeLlama3 #VSCodeExtension #VSCodeExtensionLlama #VSCodeExtensionLlama3
Very impressed with the 8B L3 in regards to coding. Amazing how much progress they have made.
Do we need both llama3:8b and instruct? Can we not work only with instruct? Also, I see your code works faster - could you specify your PC / system specs and config? It takes a good amount of time on my iMac 2017.
Very good tutorial. You don't speak about the platform; can I assume it will work on both Windows and Linux? Another thing: what's the recommended hardware configuration to install Llama 3 locally on our computers?
Any ideas about how this works on large scripts? What's the context length?
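For reference, Llama 3 shipped with an 8K-token context window, while Ollama often runs with a smaller default num_ctx. A sketch of raising it using a standard Ollama Modelfile (the model tag is from the video; the exact defaults vary by Ollama version, so treat the numbers as assumptions):

```shell
# Create a Modelfile that raises the context window for llama3:8b
# (FROM and PARAMETER are standard Ollama Modelfile directives)
cat > Modelfile <<'EOF'
FROM llama3:8b
PARAMETER num_ctx 8192
EOF

# Build a derived model and use it in place of llama3:8b
ollama create llama3-8k -f Modelfile
```

Large scripts that exceed the context window will be silently truncated, which is worth keeping in mind when pasting whole files into the chat.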
The buttons don't do anything... note I'm working offline. The 4 buttons at the bottom of the add-in's panel just copy the code to the chat window. They don't do anything else, and once clicked, the AI stops responding to questions. When I asked it what was wrong with "explain selected code", the AI responded "nothing, it's only meant to copy the code". Anyone know if this is broken for me or if it's simply an incomplete add-in?
Does CodeGPT require me to be logged in? I'm all set up, but if I ask it to explain something it just says "something went wrong! Try again." Then I have to either quit and restart VS Code or disable and then re-enable the extension...
I am using this and it is insane 😮 I think full stack developers will not like their future, holy crap.
Waiting for an implementation on Streamlit, keep it up bro
super video. Thanks
Guys, I installed it according to the vid but I can't run the AI. I saw somewhere that I need to put it in PATH, but I don't know where the files are installed.
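On Linux the Ollama install script usually places the binary in /usr/local/bin; a sketch of checking and extending PATH (the paths below are assumptions and differ per OS; on macOS the app bundles the CLI, on Windows look under %LOCALAPPDATA%\Programs\Ollama):

```shell
# Check whether the ollama binary is already reachable
which ollama || echo "ollama not on PATH"

# Typical install location on Linux (assumption -- adjust for your system)
ls -l /usr/local/bin/ollama

# Add the directory to PATH for the current shell, then persist it
export PATH="$PATH:/usr/local/bin"
echo 'export PATH="$PATH:/usr/local/bin"' >> ~/.bashrc
```

Once `which ollama` resolves, `ollama serve` (or the desktop app) needs to be running before the VS Code extension can reach the model.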
This was a great quick lesson. One thing I was wondering if anyone has figured out: I often need to refer to very new API documents etc. Has anyone tied this into a RAG structure, so we are always looking at the latest documents?
Great video! How do I connect to my own local Ollama server running on my local machine with this?
Nice one 👍
how do I use another computer running ollama on my LAN?
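Ollama reads the OLLAMA_HOST environment variable for both serving and client connections; a sketch of pointing at a machine on your LAN (the IP address is a placeholder; 11434 is Ollama's default port):

```shell
# On the server machine: bind Ollama to all interfaces instead of localhost only
OLLAMA_HOST=0.0.0.0 ollama serve

# On the client machine: point the ollama CLI at the server
# (192.168.1.50 is a placeholder -- substitute your server's address)
export OLLAMA_HOST=192.168.1.50:11434
ollama list   # should now show the remote machine's models
```

For the VS Code extension, you would likewise point its Ollama provider URL at http://192.168.1.50:11434 if the extension exposes a base-URL setting (check the extension's configuration; this is an assumption, not confirmed in the video).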
Thanks, can you make video of pythagora using llama 3?
Latency is pretty bad when I'm using llama3:70b in VS Code with CodeGPT. I am on Windows. I guess it's the underlying machine. Anything that can be done here?
Thank you so much for this video. Is it open source, please? Can we find the weights files and use them?
Any option to use it with the IntelliJ IDE?
awesome info
very nice, and it's just an 8B parameter model
Thanks for sharing.
I host the ollama server on a remote server. How do I make it connect to the remote machine instead of localhost?
Reply here if you find this
Does it slow down my laptop if I run it locally? Would I be better off running Haiku in the cloud? What would you recommend? I'm just getting into code.
I have 8 GB of VRAM, and when autocomplete is on for the Cody AI copilot, my laptop fans turn on full blast. I have 64 GB of RAM so it doesn't slow my PC down, but if it was running on your CPU and not your GPU it might slow your computer down. I don't think it will slow your computer down if you have enough VRAM or a ton of RAM, but it could, depending on your computer's specs.
There is also an extension called "Groqopilot" on VS Code that requires you to supply a Groq API key; when you do, it will create code for you lightning fast with llama3 70b, which is of course a better model than llama3 8b. It doesn't autocomplete, but it behaves very much like the tutorial we just watched.
amazing content! maybe you can create a long video where you use this to create a full stack application
Does it work for react native code?
This would be amazing if the code was stored for a workspace in a vector store
Excellent and useful tutorial! 👍
amazing
This app looks like a good idea, but it's a long, long way from finished. The buttons (refactor, explain, document and fix bug in selected code) don't do anything but copy the selected code to the chat. If you use the clear button, it clears the selected model etc. but not the history. I just asked it to write a basic API call for SvelteKit and it wrote some pure garbage based on assuming the previous selection was part of the current question. I'm using a 2019 MBP with 32 GB of RAM and it's too slow to add any value so far... for me at least.
Can I use it to write any code? I am a beginner, I do not know anything about coding, just starting from zero.
Yes, you can write most popular programming languages
You would need to know at least the basics of coding, and how an application is designed and structured. This writes the code for you but if you cannot read the code or at least understand what it's doing at a high level, then it's too early for you. It gives you 2/3 of the finished product. You just need to know how to integrate that code into your application. You need to know how to create an application, what are the different parts of an application, how to deploy and run an application.
This is amazing! Thanks so much for this tutorial!
Does anyone else get the feeling that the way AIs answer questions is based on the old Microsoft "Clippy" assistant... annoyingly eager and unable to answer much without wrapping it in a paragraph or so of irrelevance... Very annoying to get 6 or 7 line answers where the only relevant bits are a number or a few words.
If you're using ChatGPT you can change that in settings. I think in things like Ollama, you can also change your settings so that it gets straight to the point.
@@Fonzleberry I know, thanks... just haven't had much luck though lol. At one point I got fed up and added an instruction to "only answer boolean questions with a yes or a no"; I had to restart the model (bakllava) to get it to start answering properly again, as it answered all questions with "yes" or "no". I don't get why the default mode is to bury all answers in information not requested. I guess someone redefined the word "conversational". Can't even ask what's 2+2 without an explanation lol
@@m12652 It will improve with time and use cases. A model fine-tuned with META's Messenger/WhatsApp data would have a very different feel.
I prefer to use Continue plugin
It only gives the option for codellama and not llama instruct, please help
I have the same issue. In the CodeGPT menu I only see as options "llama3:8b" and "llama3:70b", but not "llama3:latest" or "llama3:instruct", as I have them available (when I would go to a command line and do ollama list). When I select llama3:8b and enter a prompt, nothing happens. When I choose another model which I have installed, like "mistral" it works just fine...
Ah okay, so it seems to be the name, and CodeGPT has a set list of compatible model names? I did another "ollama pull llama3:8b" and now it works.
@@AlexMelemenidis yes same here, thanks
Why not use codellama?
very helpful tutorial
Could not see the screen @ 2:17 in my VS Code
Click the settings icon; it was shown just before that
@@MervinPraison solved
try codeium extension
AI technologies are making things easier as they boost one's vast human general intelligence capabilities...
SPOILER ALERT: this is not amazing, but you'll be able to make scrambled eggs on your laptop while it writes you a CRUD service that actually doesn't work
I'm really not impressed with Llama3:8B. I decided to skip Python and go to Pascal. I asked it to create a tic-tac-toe game and have had nothing but problems with it. It CONSTANTLY forgets that Pascal requires declarations and leaves out the variable definitions, especially the loop variables. When I asked it to revisit, this last time it decided to rewrite the function that draws the board using console.log instead of writeln. I mean, it rewrote the WHOLE function to be completely useless.
I tried running the 70b, but the engine just kept prioritizing my GTX 970 instead of my RTX 3070. The documentation on the site, as well as the GitHub repo, just doesn't explain well enough where to put the weights or which GPU the engine should calculate on.
I could pull the 970 out, but, meh.
I can just see: Something went wrong, try again.
Why are you guys using third-party plugins which have limits and then claiming it's free? Would be nice to see one which doesn't require that.
If I run llama3 locally, will it require a GPU to see faster performance? @mervin