OrthographicNet - a real-time system demonstration (serve_a_drink scenario)

  • Published: 10 Sep 2024
  • In this demonstration, a table is placed in front of a Kinect sensor, and a user interacts with the system. Initially, the system has prior knowledge about the juiceBox and oreo categories, learned from batch data (i.e., a set of observations with ground-truth labels), and has no information about the bottle and cup categories. This demonstration shows that, apart from batch learning, the robot can also learn new object categories in an open-ended fashion. Furthermore, it shows that the proposed approach is capable of recognizing objects in various positions.
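The open-ended phase of the demo can be illustrated with a minimal instance-based learner: categories are seeded from batch data, and a new category can be introduced at run time from a single labeled observation. This is only an illustrative sketch with toy feature vectors, not the actual OrthographicNet implementation (which extracts features from orthographic projections of the object).

```python
import numpy as np

class OpenEndedLearner:
    """Minimal instance-based learner: known categories are seeded from
    batch data, and new categories can be added one view at a time.
    (Illustrative sketch only; not the actual OrthographicNet code.)"""

    def __init__(self):
        self.instances = {}  # category name -> list of feature vectors

    def teach(self, category, feature):
        # Online update: one labeled observation is enough to
        # introduce a brand-new category at run time.
        self.instances.setdefault(category, []).append(np.asarray(feature, float))

    def recognize(self, feature):
        # Nearest-instance classification over all categories seen so far.
        feature = np.asarray(feature, float)
        best, best_d = None, float("inf")
        for cat, feats in self.instances.items():
            d = min(np.linalg.norm(feature - f) for f in feats)
            if d < best_d:
                best, best_d = cat, d
        return best

# Batch phase: prior knowledge about juiceBox and oreo (toy 2-D features).
learner = OpenEndedLearner()
learner.teach("juiceBox", [1.0, 0.0])
learner.teach("oreo", [0.0, 1.0])
# Open-ended phase: a user introduces "cup" during the session.
learner.teach("cup", [1.0, 1.0])
print(learner.recognize([0.9, 0.95]))  # -> cup
```

A new test view is assigned to the category of its nearest stored instance, so recognition improves as more views of each object are taught.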
