When I saw the interpreter recode to get the tasks done I realized we are now living in the future.
Matthew, hats off to you for being here from the start and continuing to deliver such quality content.
Hey Matthew, I just watched your video and I have to say, it's absolutely amazing! I love how you're showcasing the latest developments in AI and sharing them with the world. It's so encouraging to see how technology is advancing and shaping our future. Keep up the fantastic work! #AIAdvancements
Yes, yes please Matthew, when you've got it sorted out with Code Llama, please make a video showing how to install it. Also show more use cases, like reading a CSV file of historical stocks that you previously told it to download, and have it perform some statistics and plot it all in a terminal CLI console!
Once you can get this running with Code Llama, please make another video showing that off. It'd be awesome to see where local models' current limitations are for this sort of advanced usage.
Also, of course, if you can get WizardCoder running instead, it'd be a lot better than Code Llama, but I'm not sure if they let you change it.
I just saw another video that says to simply install Code Llama first and it should work. Hope you can get it working.
I installed CodeLlama. I got that same error, but it's fixed by independently installing llama-cpp-python. The thing is, it doesn't do anything for me other than chat. I mean, it only works as a chat: it doesn't create code or execute anything.
Here's a quick fix to run Code Llama locally:
```
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
```
@@damianhill7762 Link please :D:D
The issue at the end: just copy each parameter and run the install command yourself. Not sure why it's broken, but it works.
Another great guide, Govender, thank you! I hope everyone gets to see your content!
Truly incredible; this is a game changer!
Hi Matthew, I want to tell you that I just tried it with Llama, and after the error I installed llama-cpp-python (pip install llama-cpp-python). It is incredible how well it works. Thank you very much for your videos, they are excellent!!! Sorry for my English.😅
This worked for me as well!
Way to go Marcel!! Love the people following Matt
I was able to run CodeLlama Python 34B on 4 Maxwell GPUs, at 4-bit load, using float32 for computation. The key is to use the Transformers loader, I think. It's about 5-6 GB per GPU, so less than 24 GB total. I used text-generation-webui.
Hey Pensive Introvert! I'll give this a try, but if you have a moment, could you take us through the steps: what to amend, which tab within text-generation-webui, and how to install interpreter on the webui?
Yes please share
I added a comment with some instructions and YouTube deleted it.
Thanks!
I love your videos, as I am always learning so much from you!
We are REALLY close to AI virtual assistants. Like extremely close. I give it 3 months. So cool to see this.
This is incredible; the pace with which you bring it to us is super appreciated, thanks! As you mentioned, if you could fix the "local" thing, kindly make a video. Thanks again!
YOOOO MATT! YOU'RE GETTING SPONSORS! LFG! CONGRATS! GETTING THERE!
This is a great video, going to try it right now. Thanks!
Thanks!
I am interested in watching more videos that showcase the practical applications.
Cool :) Did you enjoy the AutoGen videos?
This is absolute madness! Thanks for the video!
Going on vacation letting this takeover all of my oracle databases, ttyl
Haha
Good luck man
I have been watching multiple of your videos and this is the coolest one. Thanks so much for doing this for all of us!! I can't miss another video of yours from now on.
Wow. Thanks! Given the huge benefits and security concerns of letting this run unattended on a system, I'd love to know how to run this locally within a Docker container that has GPU access to locally run LLMs. I think it would be an amazingly helpful setup video.
Security, not so much for me. More like waking up to a suddenly spacious hard drive after it went haywire and ran rm --recursive on c:\*, wiping everything off your machine.
Let's go, really excited about this!
Amazing Video! The progress is astounding. This is next level... can't wait for the next!
This is the most mind-blowing thing I've ever seen !
So I've had it scan all my image files and delete duplicates - successful. Then I asked it to convert some images I made with SD into .ico files and it did it flawlessly. Then I asked it to change the square icons into circle icons. 100% within seconds. Most use I've gotten out of AI so far. I am amazed! Simply amazing!
Yes please , would love to see more updates or examples
No one wonders about the security? In 2022 we were giving OpenAI some of our data by putting it into ChatGPT. Now we are installing this on our computers with full access to our lives... I highly recommend using Docker containers and isolating the data that you share with the container. At least that way, you still control what you give to OpenAI...
I would love to see a video of use cases on open interpreter!
Self-correcting code is the thing that I still can't believe is real. This feature alone is worth all the AI achievements so far. This is a true revolution in programming, and I don't understand why it is not on the first page of every book and news feed.
I stumbled on this at work today. I had it build a website and a ChatGPT bot that talked using ElevenLabs. I had to help it with the API call to ElevenLabs, but that was easy: I just dropped in an example.txt file that it could read, and it took it from there. I think if I had a different web reader installed (response I was using?) it would have been able to pull the documentation from ElevenLabs. I can see this being an extremely useful tool, and I will probably spend my weekend playing with Open Interpreter (and I usually like to get away from computers on the weekend).
Thanks for the conda environment advice! :D That way I won't be worried about Open Interpreter breaking the rest of the AI software on my computer when it starts installing packages! ^^
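For anyone wanting to do the same isolation, a minimal sketch (the env name and Python version are my own choices, not from the video):
```
# create an isolated conda env so interpreter's pip installs
# can't touch the rest of your Python setup
conda create -n open-interpreter python=3.11 -y
conda activate open-interpreter
pip install open-interpreter
interpreter
```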
I got it to run locally by installing llama-cpp-python first before running interpreter --local. Then it downloaded the models just as in your video. It's slower than GPT4, but that was to be expected.
Hey Matthew, reinstall llama-cpp-python before running interpreter. The following snippet will resolve the llama error.
```
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```
I'm loving the content on this channel. Cheers!
It's quite expensive to run. I just tried it out for about 10 minutes processing an Excel sheet. Even though it says it's based on 3.5 when I ask which model it is while interacting with it, I was billed $3.28 for GPT-4. It might not be a feasible option, but I do think it offers privacy as a great advantage over the ChatGPT Code Interpreter, which stores and can process your files.
You pay for privacy; otherwise the ChatGPT Plus Code Interpreter will do the same for data analytics.
I really loved Open Interpreter's ability to work on the machine, although technically it requires the OpenAI API model to run. Waiting for Code Llama integration to see if the whole thing can work locally.
yikes
Wondering if some code could be adjusted for this to work with LM Studio, running a local server instead of needing an API key from OpenAI. If yes, perhaps another video with those instructions could be made. Thanks for all your hard work.
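A hypothetical sketch of how that might look, assuming your build of Open Interpreter exposes --api_base and --api_key flags (newer releases do) and that LM Studio's local server is running on its default port:
```
# LM Studio serves an OpenAI-compatible API at localhost:1234 by default;
# the API key is a dummy value since the local server doesn't check it
interpreter --api_base http://localhost:1234/v1 --api_key dummy
```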
What if you make this see its own files and, with a loop, use another GPT to call this to improve itself? Maybe a chain that is always telling the code interpreter what to do, improve, or add. What happens if it runs for a few days? Check what the result is. Really cool ideas may come from this.
It would be nice if you could run this within PyCharm or Spyder or some other interface. We're getting closer every day.
Tell me what these are about please
Great video, thank you! And Factorio FTW! 😁👍
Please note: Anaconda is a turn-off for some due to its 1 GB install size, but there's Miniconda without all the UI guff.
Great video, as usual! The installation went smoothly, but I'm encountering an issue with my API Key. A message keeps appearing, stating that the model either does not exist or I do not have access to it. If anyone has faced a similar situation and could offer some advice, I would be grateful.
Did you figure this out? I'm getting the same issue.
I believe that, with you, we will be among the very first people who will start to use AGI when it arrives :) Million likes!
This is TRULY amazing! Thank you very much!
Considering it can analyze our whole computer and play with all our files, what security concerns should we have?
Running the code in a controlled environment will mitigate several risks
I wrote a comment about that also. I recommend installing this in a Docker container and giving it access only to the data you allow... too dangerous for me.
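A rough sketch of that kind of isolation, assuming the PyPI package name open-interpreter (the mount path and base image are my own placeholders):
```
# only the host's ./shared folder is visible inside the container,
# so the worst case is limited to that directory
docker run -it --rm \
  -v "$PWD/shared:/data" \
  -e OPENAI_API_KEY \
  python:3.11-slim \
  bash -c "pip install open-interpreter && cd /data && interpreter"
```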
Wow. So add something that summarises the output and decides whether to feed it to TTS, plus Whisper for text input, and you've got something close to Star Trek's computer interface.
Update: I got the tool to write this itself yesterday, btw. It was very, very tedious, and it spent all my money trying to get set up. It's not the best programmer.
Have you tried Aider with Open Interpreter? Aider could allow Open Interpreter to have codebase interaction.
Please keep us posted about this! Would love to run CodeLLama locally without the webui but yeah... "difficult task" describes it rather well :D
It's pretty straightforward IMO. Where did you get stuck? I got the 13B Q4 GGUF model running locally through llama-cpp-python.
@@marconeves9018 Are there any instructions?
When do I use llama-cpp-python?
I get the below error:
'llama-cpp-python' is not recognized as an internal or external command,
operable program or batch file.
Yo, sorry, "pip install llama-cpp-python" works. Thank you! @@marconeves9018
One doubt though... do we need a ChatGPT Plus subscription to make this work?
Also, how much space does it take up locally?
So I installed Open Interpreter, set my rate limit, and entered my OpenAI API key. I instructed it to build me a website, and so far it has done exactly that, working on all the different areas of the website, including the back end, look and feel, pop-ups, a chatbot, and all of the code to go with it, neatly organized on my computer. Unfortunately the usage is costing quite a bit at this point, so I'll probably be trying out Code Llama. But in the meantime, isn't that awesome? It successfully wrote a whole bunch of code for a static HTML website, with a single line of code to start the server.
Running pip install llama-cpp-python before running interpreter --local worked on my Mac M1 (for the 13B model on Medium)... but it's supersloooooooow.
A personal work assistant: just change the interface for a great user experience.
Jaw dropped... I'm so impressed. AI is moving soooooo fast.
Love it! Please make another video. Running the API is quite costly. How can we use the paid Code Interpreter online, which I already pay for, and the local Open Interpreter only when it is necessary? For us $3 a day is not that much, but for many it is quite a lot.
To answer your question, I would use speech-to-text to describe my projects and plug it into an Auto-GPT, and the resulting recursive plans would be the base instructions for Open Interpreter.
Why is there an error with my OpenAI API key? It does not work properly. The message told me that I have no access to GPT-4: The model `gpt-4` does not exist or you do not have access to it. I use the normal OpenAI API.
Could you make it clearer when you say everything is running locally? It's not clear why an OpenAI key is required for something that is running locally. I've watched several vids where this is the case and would really appreciate the clarification.
@@AI_effect Sounds like it is more of a local plug-in, given how you described it, and not really running AI locally... hopefully he makes a vid that will cover this so it is clearer.
Can you please do more examples on PC?
Amazing content. I love Your work ❤
When I install python=latest version it fails... why? And it also fails to summarize big PDF files of around 300 pages.
It's a GPU memory thing for Code Llama; you need a good bulky GPU, and I doubt that will ever change.
It doesn't seem to have memory from previous conversations. Is there a way to enable/fix that?
Awesome, thanks Matthew. Your videos help me a lot.
Please tell me why I was not prompted to fill in the OpenAI API key after I logged in.
This would be cool if it wasn't just another name for AutoGPT? We still have to use the OpenAI API? Yeah, the Llama install feature fails with a nice message saying it's difficult to get a local LLM running, lol.
I'm using Code Interpreter on my MacBook M1 and ran this command.
LlamaCpp installation instructions:
For MacBooks (Apple Silicon):
```
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
```
For Linux/Windows with Nvidia GPUs:
```
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
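After either install, a quick sanity check (my own suggestion, not from the comment) before trying interpreter --local:
```
# confirm the freshly built wheel actually imports
python -c "import llama_cpp; print(llama_cpp.__version__)"
```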
Seeing the future of computer interaction
Imagine malware running locally, trying different methods and downloading the necessary payload to encrypt your data and then asking for ransom.
NIGHTMARE!
(Matthew: great content, thank you)
What should I do to fix the "MAX retries reached" error message? I have a paid account and I am stuck with this error. Any help appreciated.
I was able to get it running locally, finally... the prerequisite is installing llama-cpp-python (see installing for GPU acceleration)... the initial results are impressive, but the LLM seems to get stuck in a repetition loop shortly after starting... not sure if I can fine-tune the way it loads the model to fix that or not... but this excites me.
After typing "interpreter" I got this error message: 'interpreter' is not recognized as an internal or external command, operable program or batch file. What can I do to solve this issue in order to get into Open Interpreter?
It sounds like it's not as intuitive as people would hope, but you can eventually train it, right? Even if it's slow and dumb now, it's still eventually going to figure your workflow out if you work with it, right?
Hello, thank you so much, I've done it! But my laptop's fans start making too much noise and it starts getting hot. My MacBook Pro is a 2020 model; can you tell me if it is appropriate to do this? I used the smaller 7B model.
Quite impressive, and thanks so much for the detailed tutorial! But I don't have ChatGPT 4 and I do not want to subscribe to that version. How do I install the interpreter and make it run then? Thanks in advance for your answer and help!
just superb, thank you very much
Great video, Matthew. It really was simple to follow and apply! The only sad part is, OpenAI is not available in my country! I wish there was a way to get the API key. This stops me and many others from following this video and the other ones you posted. Thank you so much!
Can Code Interpreter "code"? Like code Snake or a Node.js application?
To run Code Llama:
LlamaCpp installation instructions:
For MacBooks (Apple Silicon):
```
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
```
For Linux/Windows with Nvidia GPUs:
```
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
I managed to run it locally. All you need to do is pre-install it with `pip install llama-cpp-python`, as the built-in command to install llama-cpp is not correct. I will try to create a PR to fix this later.
Funnily enough, a few weekends ago I made a C# code interpreter using the Roslyn scripting API. It works very well, but the costs were not so friendly (mainly when using GPT-4, but GPT-3.5 16k was also very expensive).
Are there only Python code interpreters? Waiting for a JavaScript or browser-based one.
Re use cases - yes please!
How to do that whole Conda install bit?
Amazing tools !!! 😊
Great guide! 🎉
ChatGPT has a limited memory for your interaction history. I suppose this problem persists here too if you keep the same instance running forever? Does it forget older things?
I'm not having much luck with OI. I have asked multiple ways for it to write Python code that it will pipe to a text file: save the code but NOT RUN IT. It runs the code every time, no matter how I phrase it. Frustrating. There are also problems when scrolling up; the text overwrites itself multiple times. Oh yeah, and it's way too slow to use in a practical setting. Has anyone else had these issues, or is it just me?
Are there any privacy concerns we should consider before running it, given that all our personal data is being transferred to the GPT API?
What if it accidentally ran the $ rm -r command, which would delete all files on the local system?
Do we need a sandbox to use this 'beast'?
I'd love to see a follow up on this if they get llama to work.
Hmm, I had Code Llama installed on my E drive using text-generation-webui, but this pip install script only knows to check the C drive for the Llama install and model. Wonder if there is a way to force it to look in another directory?
Figured that out, btw. Just let it download to C for now.
Nice that it's self-correcting, but does it remember the solution that worked on subsequent similar requests?
Can I use Conda for TensorFlow?
Well, the question is: if a model is not good at coding (like Llama 2), will it still be able to run everything as smoothly as shown in the video?
Using WSL seemed to "just work" for the Code Llama local-only version:
```
~$ interpreter --local
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
blah
▌ Model set to Code-Llama
Open Interpreter will require approval before running code. Use interpreter -y to bypass this.
etc
```
Yeah, slow on WSL1, but it works on a machine with 8 GB RAM, no GPU, and Windows 10. Glacial, but it works.
I'd like to see a tutorial on adding capabilities to Open Interpreter. I was able to add a description of a simple Windows batch file to System_Message.txt and get Open Interpreter to run my batch file as one step in an execution plan, but it took some trial and error. Surely there is a more systematic way to go about this.
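For anyone wanting to try the same thing, a hypothetical sketch: the batch file name, what it does, and its location are placeholders of mine; only the System_Message.txt approach comes from the comment above:
```
# append a plain-English description of the custom tool to the system message
cat >> System_Message.txt << 'EOF'
You can run backup.bat (in the user's home directory). It copies the
Documents folder to D:\Backups. Use it when asked to back up files.
EOF
```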
Looks good - Will it work with 3.5 or just 4? Thanks
I am also wondering this. Let me know if you find out.
Also wondering this. I'm guessing it's for GPT-4 only
Amazing!!! Please do another with use cases! Thank you Matt
Can we use Open Interpreter with GPT-3.5 Turbo currently? How and where would the user be prompted?
If I ask the same questions to ChatGPT 4 and to Open Interpreter powered by GPT-4, I get different answers. Why is that?
What is the minimum computer requirement to run this program locally?
And how many days does it take to process a command on a high-end gaming computer?
I played around with it. This is in the prototype stages at best.
Wonder if you can use GPT-3.5. I did a couple of demos (JPEG to PDF, JPEG to TXT) and it was a couple of dollars; that could add up quickly. I forgot how fast you can run up your GPT-4 tab. Code Llama, where are you??? :)
For gpt-3.5-turbo, use fast mode: `interpreter --fast`
How can I use it with GPT-3.5?
Amazing! Can you integrate this with Whisper, so that you can talk instead of typing? It would be so Minority Report badass.