Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation

  • Published: 2 Oct 2024

Comments • 18

  • @canislupusfool
    @canislupusfool 6 years ago +6

    Fake! You can clearly hear the mouse you've trained to push it along! Nice work :)

    • @pranavsreedhar1402
      @pranavsreedhar1402 6 years ago +1

      Don't know if you're being sarcastic. I'm guessing a mouse wouldn't push this fast.

  • @abhishekkumar1972
    @abhishekkumar1972 3 years ago

    @greg kahn I will try to implement this project; any sort of help will be appreciated.

  • @chanchoi5076
    @chanchoi5076 6 years ago

    I enjoyed this.

  • @MinhTran-ew3on
    @MinhTran-ew3on 4 years ago

    In the paper, you claim that your approach learns from scratch to navigate using monocular images solely in the real world. So, did you train in simulation and then evaluate the trained model in the real world, or did you train it directly in the real world (where real collisions may happen to the car in order to gather more experience)?

  • @primodernious
    @primodernious 5 years ago

    What about using many linear neural nets that work as separate networks but target the same travel path, by letting each network read the same input data but compete on their outputs, which are forwarded into an output network that selects the optimum path ahead of time, before the actual attempt? Wouldn't this speed up the overall learning performance?
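
    A toy Python sketch of the ensemble-and-selector idea raised in this comment; all names are illustrative assumptions and none of this comes from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n_nets, obs_dim, act_dim = 4, 8, 2

    # Several separate linear "networks" all read the same observation
    # and each proposes an action (random weights here, for illustration).
    weights = [rng.normal(size=(act_dim, obs_dim)) for _ in range(n_nets)]

    def propose_actions(obs):
        return [w @ obs for w in weights]

    def select_action(obs, score_fn):
        # Stand-in for the "output network": score every proposal and pick
        # the best one before anything is actually executed.
        candidates = propose_actions(obs)
        scores = [score_fn(obs, a) for a in candidates]
        return candidates[int(np.argmax(scores))]

    # Example usage with a dummy scoring function (e.g. predicted progress).
    obs = rng.normal(size=obs_dim)
    action = select_action(obs, score_fn=lambda o, a: -float(np.linalg.norm(a)))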

  • @sam171288
    @sam171288 7 years ago +1

    Hi, this is great stuff. Anyway, a question about how you trained the agent:
    is there any terminal state, for example when the agent hits the wall or flips over? And what about the reward?
    Thank you

    • @gregkahn7238
      @gregkahn7238  7 years ago +1

      Good question. Yes, any type of collision is a terminal state. After a collision, the car performs a hard-coded backup procedure and then continues learning. The hard-coded backup is not necessarily needed, and we are going to remove it soon.
      When evaluating the prior (N-step) Q-learning methods, the reward was the speed of the car, or 0 if a collision occurred. We tried adding a negative reward for collisions, but this actually hurt Q-learning's performance.
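
      A minimal Python sketch of the termination and reward scheme described in this reply; the helper names are hypothetical and this is not the released GCG code:

      def step_reward(speed, collided):
          # Reward used for the prior (N-step) Q-learning baselines:
          # the car's speed, or 0 if a collision occurred.
          return 0.0 if collided else speed

      def is_terminal(collided):
          # Any type of collision ends the episode.
          return collided

      # After a terminal collision, the physical car runs the hard-coded
      # backup maneuver mentioned above and then resumes collecting data.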

    • @sam171288
      @sam171288 7 years ago +2

      Thank you for the reply. Can I contact you directly in case I have any further questions? I am also doing deep RL for robotics, but much simpler than yours.

  • @deeplearner2634
    @deeplearner2634 6 years ago +1

    this is bloody awesome!

  • @fractelet
    @fractelet 6 years ago

    good job

  • @zzzzjinzj
    @zzzzjinzj 6 years ago

    Can it be simulated in gym-gazebo?

    • @gregkahn7238
      @gregkahn7238  6 years ago +2

      The simulator in this release uses Bullet for physics simulation and Panda3D for graphics rendering. However, adding an interface to a new environment should hopefully be straightforward.
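
      A rough Python sketch of what wrapping another simulator (e.g. gym-gazebo) behind a common reset/step interface might look like; the class and method names are assumptions for illustration, not the released GCG API:

      import numpy as np

      class EnvInterface:
          def reset(self):
              # Return the initial monocular image observation.
              raise NotImplementedError

          def step(self, action):
              # Apply the action; return (image, done), done=True on collision.
              raise NotImplementedError

      class GymStyleEnv(EnvInterface):
          # Adapter around any environment exposing gym-style reset()/step().
          def __init__(self, env):
              self._env = env

          def reset(self):
              return np.asarray(self._env.reset())

          def step(self, action):
              obs, _, done, _ = self._env.step(action)
              return np.asarray(obs), done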

  • @ConsumerAria51
    @ConsumerAria51 6 years ago

    Do you have a paper published? Thanks!

  • @aitor.online
    @aitor.online 6 years ago

    so UC Berkeley isn't all bad 😂 jk but this is siiick

  • @suryaepic2154
    @suryaepic2154 3 years ago

    what about the code bro?