Create Your "Small" Action Model with GPT-4o

  • Published: 21 Sep 2024

Comments • 28

  • @ShpanMan
    @ShpanMan 4 months ago +10

    This is actually really impressive. GPT-4o watches you act and understands what was done, then writes code to reproduce it, which can then be run and automated.
    Very clever flow; OpenAI should definitely hire you.

    • @MilkGlue-xg5vj
      @MilkGlue-xg5vj 4 months ago

      Anyone can do better than this with a powerful language model; it's not much. It's just that the Rabbit is overrated.
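
The flow the top comment describes — capture a few screenshots, send them to GPT-4o, get a reproduction script back, then run it — could be sketched roughly like this. The model name is real, but the prompt, function names, and overall structure are assumptions for illustration, not the video's actual code:

```python
import base64
import io
import time

def build_vision_messages(frames_b64, task_prompt):
    """Build a GPT-4o chat payload: one text part plus one image part per frame."""
    content = [{"type": "text", "text": task_prompt}]
    content += [{"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{f}"}}
                for f in frames_b64]
    return [{"role": "user", "content": content}]

def grab_frames(n=5, interval=2.0):
    """Take n screenshots, returning each as a base64-encoded PNG."""
    import pyautogui  # third-party; imported lazily
    frames = []
    for _ in range(n):
        buf = io.BytesIO()
        pyautogui.screenshot().save(buf, format="PNG")
        frames.append(base64.b64encode(buf.getvalue()).decode())
        time.sleep(interval)
    return frames

def frames_to_script(frames_b64):
    """Ask GPT-4o to turn the captured frames into an automation script."""
    from openai import OpenAI  # third-party; reads OPENAI_API_KEY from the env
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=build_vision_messages(
            frames_b64,
            "These screenshots show a user performing a task step by step. "
            "Write a pyautogui script that reproduces the task."),
    )
    return resp.choices[0].message.content

# script = frames_to_script(grab_frames())
# Review the returned script before executing it -- model output is untrusted.
```

Executing model-generated code blindly is the risky part of this loop, which is why the sketch stops at returning the script rather than calling `exec` on it.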

  • @clumsymoe
    @clumsymoe 4 months ago +2

    Cool experimental project and idea 👍 The entire process can be scripted further to continuously store the most recent screenshots at 2-second intervals to VRAM using PyTensor, and a call can be triggered at any time with a keyword through mic input or a key shortcut to send them to GPT-4o, retrieve the "replay last action" script, and then automatically execute it to save time on mundane tasks 👍👍
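
The rolling-buffer idea in the comment above can be sketched with a plain in-memory `deque` (standing in for the VRAM/PyTensor storage the commenter suggests) plus a hotkey that freezes the buffer. The hotkey binding and the `take_screenshot`/`send_to_gpt4o` helpers are hypothetical:

```python
import threading
import time
from collections import deque

# Keep only the last 10 frames: roughly the last 20 seconds at 2 s apart.
BUFFER = deque(maxlen=10)

def capture_loop(stop, grab, interval=2.0):
    """Continuously append frames from grab() until the stop event is set."""
    while not stop.is_set():
        BUFFER.append(grab())
        time.sleep(interval)

def snapshot_buffer():
    """Freeze the current buffer contents, oldest frame first."""
    return list(BUFFER)

# Hypothetical wiring, using the third-party 'keyboard' library:
# stop = threading.Event()
# threading.Thread(target=capture_loop,
#                  args=(stop, take_screenshot), daemon=True).start()
# keyboard.add_hotkey("ctrl+alt+r",
#                     lambda: send_to_gpt4o(snapshot_buffer()))
```

`deque(maxlen=...)` silently drops the oldest frame on every append, so memory stays bounded no matter how long the capture loop runs.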

  • @georgestander2682
    @georgestander2682 4 months ago +4

    Thanks, this is interesting. I was wondering about this as well and had a thought about adding log data of user interactions to give the model more telemetry. So it's not just vision but also the actual logs of all the interactions happening in the background.
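
The telemetry idea above could look something like this: record clicks and keystrokes with timestamps so the model sees an event log alongside the screenshots. `pynput` is one common library for global input listeners; the log format here is an assumption:

```python
import json
import time

EVENTS = []

def log_event(kind, **details):
    """Append one timestamped interaction event to the log."""
    EVENTS.append({"t": time.time(), "kind": kind, **details})

def events_as_prompt_text(events):
    """Serialize the event log as JSON lines, ready to append to the prompt."""
    return "\n".join(json.dumps(e) for e in events)

def start_listeners():
    """Start background mouse/keyboard listeners that feed log_event()."""
    from pynput import mouse, keyboard  # third-party, imported lazily
    m = mouse.Listener(
        on_click=lambda x, y, button, pressed: pressed and
            log_event("click", x=x, y=y, button=str(button)))
    k = keyboard.Listener(
        on_press=lambda key: log_event("key", key=str(key)))
    m.start()
    k.start()
    return m, k
```

Pairing each event's timestamp with the screenshot timestamps lets the model line up "what changed on screen" with "what the user pressed".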

  • @cyc00000
    @cyc00000 4 months ago

    So good to see you getting on board the Rabbit R1. It's seriously going to change lives. Enjoyed the video, man.

  • @ibrahimaba8966
    @ibrahimaba8966 4 months ago

    Very interesting. I think it could also be useful to provide it with the mouse positions between different frames.
    To go further, we could create multiple actions and then implement a RAG that allows the model to choose the correct snapshot and execute it.
    Thanks for this video.
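
The retrieval idea in the comment above — save multiple recorded actions and let the model pick the right one for a request — can be sketched as a tiny lookup over described scripts. A real RAG setup would compare embedding vectors; plain word-overlap cosine similarity stands in here, and the action names are made up:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity between two strings (toy stand-in
    for embedding similarity)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_action(query, actions):
    """actions maps a text description to a saved script; return the
    script whose description best matches the query."""
    best = max(actions, key=lambda desc: cosine(query, desc))
    return actions[best]

ACTIONS = {
    "open the browser and check email": "scripts/check_email.py",
    "rename all files in the downloads folder": "scripts/rename_downloads.py",
}

# retrieve_action("check my email", ACTIONS) -> "scripts/check_email.py"
```

Swapping `cosine` for real embedding similarity (and the dict for a vector store) turns this toy into the RAG-over-actions setup the comment proposes.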

  • @TTOnkeys
    @TTOnkeys 4 months ago

    I can think of so many uses for this. Great work.

  • @mikew2883
    @mikew2883 4 months ago +3

    This is awesome!

  • @nic-ori
    @nic-ori 4 months ago +1

    Useful information. Thank you!👍👍👍

  • @avi7278
    @avi7278 4 months ago +2

    Honestly more legit than scammer Jesse Lyu and the Rabbit R1 garbage hardware scam, after his NFT game scam.

  • @gnosisdg8497
    @gnosisdg8497 4 months ago +2

    So where is the code for this project? Looks fun.

  • @Anubhav-Chaturvedi
    @Anubhav-Chaturvedi 4 months ago +3

    Bro, please create a video on real-time vision and response.

    • @lokeshart3340
      @lokeshart3340 4 months ago

      Whoa, whoa, look who's here! Bro, do you know me, or do you remember me?

  • @Soft_Touch_
    @Soft_Touch_ 4 months ago +1

    I've been thinking Recall and omni screenshots were ways to create large practical data sets to train LAMs. Do you think that's what's happening? You seem to be doing a smaller version of this.

  • @carstenli
    @carstenli 4 months ago

    Great start. What's the GH url for subscribers?

  • @Ahmad-ej2fy
    @Ahmad-ej2fy 26 days ago

    Disclaimer for those thinking of implementing this project: OpenAI GPT models are not free, so you have to pay to run the code.

  • @ewasteredux
    @ewasteredux 4 months ago +5

    Are there any local LLM's this might work with?

    • @PanduPandu-fh5tk
      @PanduPandu-fh5tk 4 months ago +1

      Maybe, LLaVA 13b can

    • @Ahmad-ej2fy
      @Ahmad-ej2fy 26 days ago

      I tried LLaVA, but it runs so slowly that it's not worth it: analyzing one picture can take 2-4 minutes, so 20 images take 40-80 minutes. You would need an API for a server that runs the model, and almost all of them are not free.
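
For anyone who still wants to try the local route this thread discusses, LLaVA can be served through Ollama and queried over its local HTTP API; speed will vary with your hardware, as the reply above warns. A minimal sketch, assuming you have run `ollama pull llava:13b` and the Ollama server is listening on its default port:

```python
import base64
import json
import urllib.request

def build_ollama_payload(prompt, img_b64, model="llava:13b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model,
            "prompt": prompt,
            "images": [img_b64],
            "stream": False}

def describe_image_locally(png_path, prompt="What action is shown here?"):
    """Send one screenshot to a local LLaVA model and return its answer."""
    with open(png_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_ollama_payload(prompt, img_b64)).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Smaller vision models (e.g. a 7B LLaVA variant) trade accuracy for speed, which may matter more than quality given the per-image latency reported above.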

  • @BThunder30
    @BThunder30 4 months ago

    Interesting project as always.

  • @АлексГладун-э5с
    @АлексГладун-э5с 4 months ago +1

    Looks great

  • @futureworldhealing
    @futureworldhealing 4 months ago +2

    Learning how to be a data scientist 80% from you, bro, haha.

  • @darthvader4899
    @darthvader4899 4 months ago

    How does it know where to click though? Does

  • @JNET_Reloaded
    @JNET_Reloaded 4 months ago +1

    The GitHub link is always the same repo, btw. It'd be easier to make a new repo for each project and put the project link in the description.

    • @wurstelei1356
      @wurstelei1356 4 months ago

      I think you can link to Git subfolders. The repo is pretty messy, but keep in mind this is free. Though I am also not able to find code for some projects in that repo.

  • @kalilinux8682
    @kalilinux8682 4 months ago

    Humane and Rabbit watching this and raising another round of funding

  • @lokeshart3340
    @lokeshart3340 4 months ago +1

    Hello sir, can you recreate the Gemini vision fake demo in real life?

  • @spencerfunk6697
    @spencerfunk6697 4 months ago +2

    So literally open interpreter…