[AI Lab] Pepper robot learning "ball in a cup"

  • Published: 20 Sep 2016
  • This video, made by the AI Lab of SoftBank Robotics, shows how the Pepper robot learns to play the ball-in-a-cup game ("bilboquet" in French). The movement is first demonstrated to the robot by guiding its arm.
    From there, Pepper improves its performance through trial-and-error learning. Even though the initial demonstration does not land the ball in the cup, Pepper can still learn to play the game successfully.
    The movement is represented as a so-called dynamic movement primitive and optimized with an evolutionary algorithm. Our implementation uses the freely available software library dmpbbo: github.com/stulp/dmpbbo.
    After 100 trials, Pepper has successfully optimized its behavior and is able to repeatedly land the ball in the cup.
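The optimization loop described above (demonstrated policy, then trial-and-error refinement of DMP parameters with an evolutionary method) can be sketched in a few lines. This is a toy illustration, not dmpbbo's actual API: the cost function, the weight dimensions, and the simple elitist evolution strategy are all assumptions standing in for the real robot experiment, where the cost would come from where the ball actually lands.

```python
import numpy as np

rng = np.random.default_rng(0)

# A DMP's shape is determined by the weights of its basis functions.
# Here `target` is a hypothetical "good" weight vector; in the real
# experiment the cost is measured from the ball's landing position.
n_basis = 10
target = np.linspace(0.2, 0.8, n_basis)

def cost(weights):
    # Lower is better; 0 would mean the ball lands in the cup.
    return float(np.sum((weights - target) ** 2))

# Initial demonstration: roughly the right movement, but not a success.
theta = np.zeros(n_basis)

# Simple elitist evolution strategy with decaying exploration,
# in the spirit of the black-box optimizers used with DMPs.
sigma = 0.3
for trial in range(100):
    # Sample perturbed policies around the current best parameters.
    samples = theta + sigma * rng.standard_normal((10, n_basis))
    costs = np.array([cost(s) for s in samples])
    best = samples[np.argmin(costs)]
    # Keep the new parameters only if they improve the rollout.
    if cost(best) < cost(theta):
        theta = best
    sigma *= 0.98  # reduce exploration as performance improves

final_cost = cost(theta)
```

After the 100 trials the cost is far below that of the initial demonstration, mirroring how Pepper converges to reliably landing the ball. dmpbbo itself provides more sophisticated updates (e.g. reward-weighted averaging with covariance adaptation) than the keep-the-best rule used here.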

Comments • 68

  • @jillybeanAZ 7 years ago +65

    After 90 attempts, I was waiting for Pepper to throw the ball and cup against the wall and smash it.

  • @kenadams4142 7 years ago +43

    Imagine with a little practice how good they'd be at capturing humans for their people zoo.

  • @user-tn7ty1sf9g 1 year ago +2

    Came back to relive this after watching Lao Gao's livestream~

  • @corellifer9799 7 years ago +9

    Poor Pepper, 90 trials... but after that it never fails... cute and scary

  • @profMara-pn1vi 6 years ago

    Magnificent, congratulations on your work

  • @sebk7185 7 years ago +11

    Impressive. Even scary...

  • @pitcairnpostcardmag 7 years ago +6

    For some reason I forget that Pepper is a robot and just a machine. This makes it seem very impressive after the 100 attempts.

  • @jonnynelson5734 7 years ago +2

    Something so simple... and yet so scary. The Robots Cometh!

  • @mahes303 7 years ago +2

    This is amazing.

  • @ziggyinjapan 7 years ago

    wow, great work!

  • @133faceman 7 years ago +4

    It's a bit slow at learning. Once it knows the way to move, it's going to always get the ball in the pot. What would be impressive is if it was moving around and doing this!

  • @PattymayoTv 7 years ago +5

    With the quick reflexes of the robot, it seems it would have been easier to teach it to locate the ball in motion and move the cup into position based on the ball's trajectory, instead of trying to create an optimal swing. So is the robot really learning?

  • @jameschudy576 7 years ago +10

    I wish she could reset my ball and cup.

  • @SitiAisyah-es5ho 7 years ago +4

    impressive 😉

  • @brkuldeep 7 years ago

    fantastic

  • @lyan803 7 years ago

    Well played, coach.

  • @mallamsiang 7 years ago

    What algorithm and parameters were used?

  • @MikeinMiss 7 years ago +1

    Just a few questions. It seems the only thing modified for each trial is the starting position while the joint movements are the same each time, right? If you were using only cameras attached to the robot, then I assume the arm was moved along only one axis? I have trouble imagining how the algorithm knew whether its adjustments were making it closer to or further from achieving the goal. Also, recursive algorithms have been around for decades now, is this approach any different or just a unique implementation?

  • @O5680 7 years ago +1

    Still better at it than I am!

  • @vorupan 7 years ago +1

    While it is cool that in the end the optimal trajectory pretty much guarantees a success, it would be nice if this learning process could be abstracted and applied to different ball/cup/string dimensions. I'm guessing in this case the optimal trajectory only holds for the same test conditions?