Please do more videos on open interpreter, you might have the best set of use cases on YT
Will look into that 🙏
Sweet! I like that you also included the local options at the end of the video 🤗
Thank you :)
No... thank YOU! @MervinPraison
Me too 😅😂
wow the fact it's composing music is insane!
Great video! I love your videos; they are so great.
Thank you. I will try it.
Phenomenal my friend thank you 🙏🏽
🙏
Awesome work!!! How can I implement and execute open-interpreter commands from Open-WebUI? Any ideas?
Very good! More videos about this revolutionary system, please.
Hey Mervin, I just tried Open Interpreter after watching your video. Amazing! Works like a charm. Thank you for sharing these videos; they are always time pleasantly spent and very helpful. Keep up the great work. 💯💫 It's like AGI.
Thank you very much :)
Can you cover aifs of open interpreter?
Great! Thanks
🙏
Almost 10k subs!
Yes :)
Requesting review on which chat-with-code setups are the best and how to install them
I will review and create a video
It's a good video. Can you tell us what hardware resources (RAM, CPU, GPU) we need for it to run smoothly? I'm using it, but it's too slow: it takes about 3 minutes to answer "hi".
That's definitely heavy. :) I mean, Gen AI is awesomely done by Open Interpreter. Since you've showcased this demo from the terminal, how about converting it into a RESTful API to get some of my tasks done? Is that really possible?
Yes definitely possible
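A minimal sketch of what such a REST wrapper might look like, using only the Python standard library. The `run_task` function is a hypothetical stub standing in for the model call; in a real setup you might swap in something like `interpreter.chat(prompt)` from the open-interpreter Python package.

```python
# Hedged sketch: a tiny JSON-over-HTTP endpoint wrapping a task runner.
# `run_task` is a placeholder; replace it with the real Open Interpreter call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_task(prompt: str) -> str:
    # Placeholder for e.g. interpreter.chat(prompt)
    return f"echo: {prompt}"

class TaskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the task, and return a JSON result.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = run_task(payload.get("prompt", ""))
        body = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080) -> None:
    # Blocking server loop; POST {"prompt": "..."} to http://127.0.0.1:8080/
    HTTPServer(("127.0.0.1", port), TaskHandler).serve_forever()
```

Note that commands executed this way run with the server's permissions, so you would want to sandbox it before exposing it anywhere.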
It seems a bit flaky when opening apps on Ubuntu. It opens things like VLC but then hangs. Have you tried it on Linux?
Interpreter seems to work like this: I generally tell any LLM, "For all these steps you are displaying, always respond with a Python script at the end that fully prepares me to just run and confirm your response or solution, so I can effectively and quickly test it on my system."
Then, as the conversation continues, I say, "OK, update the setup script to include all the above changes," and so on.
On my system I run the script and test. You can even tell it to make the Python script spawn a Docker container and run the code in there, for safety.
No matter the language, I find the "maker script" does very well if you ask for it in Python or Bash. Even if you're making a Next.js project, the Python script will create the Next.js project (so Python has nothing to do with the actual tasks; it's just the maker).
This works well for me on any of the free web-based offerings, especially DeepSeek, due to its large context and large outputs.
So instead of copying and pasting into many different files, "let the LLM do the work" by scaffolding out the scenario and mutating from the original, progressing the code to completion.
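A minimal sketch of the kind of "maker script" described above. The file names and contents here are purely illustrative; in the workflow described, the LLM itself would generate and keep updating a script like this.

```python
# Hedged sketch of an LLM-generated "maker script": one file that scaffolds
# the whole project so you only run a single script to reproduce the setup.
import os

# Illustrative project layout (the LLM would fill in real contents).
FILES = {
    "app/main.py": 'print("hello from the scaffolded project")\n',
    "app/requirements.txt": "requests\n",
    "README.md": "# Scaffolded by the maker script\n",
}

def scaffold(root: str = "demo_project") -> list:
    """Write every file in FILES under `root`, creating directories as needed."""
    created = []
    for relpath, content in FILES.items():
        path = os.path.join(root, relpath)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
        created.append(path)
    return created

# For the Docker variant mentioned above, the script could also emit a
# Dockerfile and invoke `docker build` / `docker run` via subprocess.
```

Each time the conversation progresses, you ask the LLM to update `FILES` (and any build steps) rather than pasting snippets into many files by hand.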
What is the best local model? I'm getting really frustrated with this thing, lol. It doesn't work with anything other than GPT; it won't work worth anything with any other model. I've tried it to death here, lol. Let me know ASAP, thanks.
I played around with it for an hour using GPT-4 and it cost 10 dollars in tokens. I think I will try the local models next.
What are the minimum requirements to run a local LLM on Mac and Windows?
Most LLMs require at least 8GB of RAM and a powerful CPU, such as an Intel Core i7 or AMD Ryzen 9. A GPU is recommended.
@MervinPraison what computer do you recommend?
Thank you
🙏
How much did it cost you to use this interpreter?
This is complete BS, it doesn't actually work. It takes ages to learn how to do things on your system.
when is it not amazing? LOL
😂 probably never
A little scary
Haha
Redo the video from scratch, because nothing was understandable. You go too fast and it makes no sense.