Open Interpreter: Beginners Tutorial with 10+ Use Cases YOU CAN'T MISS

  • Published: 14 Jan 2025

Comments • 41

  • @build.aiagents · 11 months ago +12

    Please do more videos on open interpreter, you might have the best set of use cases on YT

  • @HyperUpscale · 11 months ago +3

    Sweet! I like that you also included the local options at the end of the video 🤗

  • @jaychas · 1 month ago

    wow the fact it's composing music is insane!

  • @renierdelacruz4652 · 11 months ago +1

    Great video! I love your videos, they are so great.

  • @build.aiagents · 11 months ago

    Phenomenal my friend thank you 🙏🏽

  • @mohitmanjalkar9 · 3 months ago

    Awesome work!!! How can I implement and execute open-interpreter commands from Open-WebUI? Any idea?

  • @sr.modanez · 8 months ago

    Very good, more videos on this revolutionary system please.

  • @oieieio741 · 11 months ago +1

    Hey Mervin, I just tried Interpreter after watching your video. Amazing! It works like a charm. Thank you for sharing these videos; they are always time pleasantly spent and very helpful. Keep up the great work. 💯💫 It's like AGI.

  • @BreezyVenoM-di1hr · 7 months ago

    Can you cover aifs of open interpreter?

  • @AlloMission · 11 months ago

    Great! Thanks

  • @youriwatson · 11 months ago +2

    Almost 10k subs!

  • @MattJonesYT · 11 months ago +1

    Requesting review on which chat-with-code setups are the best and how to install them

    • @MervinPraison · 11 months ago

      I will review and create a video

  • @hethemystery · 11 months ago

    It's a good video.
    Can you tell us what hardware resources we need for it to run smoothly (RAM, CPU, GPU)? I'm using it, but it's far too slow: it takes 3 minutes just to answer "hi".

  • @atifsaeedkhan9207 · 11 months ago +1

    That's definitely heavy. :) I mean, gen AI is awesomely done by Open Interpreter. Since you showcased this demo from the terminal, how about converting it into a RESTful API to get some of my tasks done? Is that really possible?
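
[Editor's note] The RESTful wrapper asked about above is feasible: Open Interpreter exposes a Python API (roughly `from interpreter import interpreter` and `interpreter.chat(prompt)`; verify against the docs for your installed version). Below is a minimal stdlib-only sketch of the wrapper pattern. The `run_chat` stub, handler name, and port are illustrative assumptions, not part of Open Interpreter:

```python
# Hypothetical sketch: expose a chat function over HTTP with only the
# standard library. run_chat is a stub; with open-interpreter installed,
# its body would become `return interpreter.chat(prompt)` (check the
# exact API for your version of the package).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_chat(prompt: str):
    # Stand-in for interpreter.chat(prompt), which yields message dicts.
    return [{"role": "assistant", "content": f"echo: {prompt}"}]

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, e.g. {"prompt": "list my files"}.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        messages = run_chat(body.get("prompt", ""))
        payload = json.dumps({"messages": messages}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port: int = 8080):
    # Once running, POST {"prompt": "..."} to http://127.0.0.1:8080/
    HTTPServer(("127.0.0.1", port), ChatHandler).serve_forever()
```

A caution on the design: an agent that executes code on your machine behind an open HTTP endpoint is dangerous, so sandbox it (e.g. in a container) and add authentication before exposing it anywhere.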

  • @jim02377 · 8 months ago

    It seems a bit flaky when opening apps on Ubuntu. It opens things like VLC but then hangs. Have you tried it on Linux?

  • @saabirmohamed636 · 4 months ago

    Interpreter seems to work like this. I generally tell any LLM: "For all these steps you are displaying, always respond with a Python script at the end that fully prepares me to just run and confirm your response or solution, so I can quickly and effectively test it on my system." Then, as the conversation continues, I say, "OK, update the setup script with all the above changes," and so on.
    On my system I run the script and test. You can even tell it to make the Python script spawn a Docker container and run the code in there, for safety.
    It works no matter the language, but I find the "maker script" does best if you ask for it in Python or bash. Even if you're making a Next.js project, the Python script will create the Next.js project (so Python has nothing to do with the actual tasks; it's just the maker).
    This works well for me on any of the free web-based offerings, especially DeepSeek due to its large context and large outputs. So instead of copying and pasting into many different files, "let the LLM do the work" by scaffolding out the scenario and mutating from the original, progressing the code to completion.

  • @JohnGallie · 9 months ago

    What is the best local model? I'm getting really frustrated with this thing, lol. It doesn't work with anything other than GPT; it won't work worth anything with any other model, and I've tried it to death here. Let me know ASAP, thanks.

  • @jim02377 · 8 months ago

    I played around with it for an hour using GPT-4 and it cost 10 dollars in tokens. I think I will try the local models next.

  • @raulgarcia6191 · 11 months ago

    What are the minimum requirements to run a local LLM on Mac and Windows?

    • @MervinPraison · 11 months ago +1

      Most LLMs require at least 8 GB of RAM and a powerful CPU, such as an Intel Core i7 or AMD Ryzen 9. A GPU is recommended.

    • @raulgarcia6191 · 11 months ago +2

      @MervinPraison what computer do you recommend?

  • @Simon-qe8ph · 11 months ago

    Thank you

  • @atantiko2982 · 11 months ago

    How much did it cost you to use this interpreter?

  • @robertmazurowski5974 · 8 months ago

    This is complete BS, it doesn't actually work. It takes ages to learn how to do things on your system.

  • @TheBestgoku · 11 months ago

    when is it not amazing? LOL

  • 11 months ago

    A little scary

  • @SAVONASOTTERRANEASEGRETA · 7 months ago

    Redo the video from scratch, because nothing could be understood. You go too fast and it makes no sense.