DIY Robotic arm calibration using OpenCV and chessboard pattern.

  • Published: 18 Sep 2024

Comments • 23

  • @keepfighting8402
    @keepfighting8402 1 year ago +1

    Thanks, the video arrived just in time!

  • @kot2905
    @kot2905 1 year ago +2

    Good explanation, thank you!

  • @svitlana1367
    @svitlana1367 1 year ago +2

    Thanks for the video

  • @likefoodua1088
    @likefoodua1088 9 months ago +1

    Thanks for the video

  • @vikavr5641
    @vikavr5641 1 year ago +1

    Thanks!!! Good video!

  • @ВладШевченко-ц7т
    @ВладШевченко-ц7т 1 year ago +1

    Good video

  • @billobama6018
    @billobama6018 1 year ago +1

    Hello, I'm a student who wants to use ROS2 in UE5 to test my image processing algorithm on the photorealistic images rendered by UE5. I found your great videos. How can I find and install such a ROS2 connector for UE5? Thanks a lot!

  • @b-lifestyle7263
    @b-lifestyle7263 4 months ago

    Thanks for your video. I have a question: if I have a point's coordinates in the camera frame, how can the robot know how to move to that point?

    • @roboage1027
      @roboage1027  4 months ago

      Thank you for your comment. You need to know where your robot's camera is located, so you can create a transform matrix from camera coordinates to robot coordinates. Then just use standard inverse kinematics to move the robot.
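The reply above can be sketched with a homogeneous transform. All matrix and point values here are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical camera-to-base transform: camera 0.5 m above the base,
# looking straight down (camera z-axis points toward the workspace).
R_base_cam = np.array([[1.0,  0.0,  0.0],
                       [0.0, -1.0,  0.0],
                       [0.0,  0.0, -1.0]])
t_base_cam = np.array([0.0, 0.0, 0.5])

T_base_cam = np.eye(4)
T_base_cam[:3, :3] = R_base_cam
T_base_cam[:3, 3] = t_base_cam

# A point observed in the camera frame (metres), homogeneous coordinates.
p_cam = np.array([0.1, 0.2, 0.4, 1.0])

# The same point expressed in the robot's base frame; this is the target
# you would feed to the inverse-kinematics solver.
p_base = T_base_cam @ p_cam
print(p_base[:3])
```

The key point is that one 4x4 matrix captures both the rotation and the translation between the two frames, so a single matrix-vector product moves a point from camera to robot coordinates.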

    • @b-lifestyle7263
      @b-lifestyle7263 3 months ago +1

      @@roboage1027 Thanks for your answer! But the important thing is that the point's coordinates are in the camera frame, right? Hence, we need to transform the point coordinates into real-world space first, and use that to create a transform matrix from real-world space to robot coordinates, right?

    • @roboage1027
      @roboage1027  3 months ago +1

      @@b-lifestyle7263 I think you need only one transform: from camera coordinates to robot coordinates. Usually the robot's base is located at the world's origin. So if you know the transformation from the camera frame to the gripper frame, for example, you can use forward kinematics to calculate the transformation from camera to gripper and further to the base frame. And that will also be the transformation from camera to the world frame.
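The chaining described in this reply (camera to gripper to base) is just a product of homogeneous transforms. A minimal sketch, with made-up poses standing in for the forward-kinematics and hand-eye results:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical gripper pose in the base frame (would come from forward
# kinematics at the current joint angles).
T_base_gripper = make_T(np.eye(3), [0.3, 0.0, 0.4])

# Hypothetical camera pose in the gripper frame (would come from
# hand-eye calibration).
T_gripper_cam = make_T(np.eye(3), [0.0, 0.05, 0.1])

# Chain the transforms: camera -> gripper -> base.
T_base_cam = T_base_gripper @ T_gripper_cam
print(T_base_cam[:3, 3])  # camera position in the base frame
```

If the base frame coincides with the world origin, as the reply assumes, `T_base_cam` is also the camera-to-world transform.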

    • @b-lifestyle7263
      @b-lifestyle7263 3 months ago

      @@roboage1027 Thanks for your quick response! I follow the hand-eye calibration, where the equation AX=XB is popular, with A the end-effector-to-base matrix and B the chessboard-to-camera matrix. Then, assume I obtain the X matrix (including rotation and translation) for the transformation from camera coordinates to end-effector coordinates. For the test, I turn on the robot's RGB camera, move the mouse to one corner of the chessboard, and read off x and y, so x and y are pixels, right? If they are pixels, then I multiply [x,y,z] by the intrinsic matrix to obtain [xc,yc,zc]. Finally, if I want to get [x,y,z,rx,ry,rz] of the end-effector, I multiply [xc,yc,zc] by the X matrix, right?
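One note on the pixel step in this question: going from a pixel to camera coordinates uses the *inverse* of the intrinsic matrix together with a known depth, not the intrinsic matrix itself. (For the AX=XB part, OpenCV provides `cv2.calibrateHandEye`.) A minimal back-projection sketch, with hypothetical intrinsics, pixel, and depth:

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy in pixels).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

u, v = 400.0, 300.0   # pixel picked on a chessboard corner
z = 0.5               # depth of that point in metres (must be known/measured)

# Back-project: multiply the homogeneous pixel by K^-1, then scale by depth.
p_cam = z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
print(p_cam)  # 3-D point in the camera frame
```

The depth `z` cannot be recovered from a single RGB pixel alone; it has to come from the known chessboard geometry, a depth sensor, or triangulation. Once `p_cam` is known, applying the hand-eye transform X maps it into the end-effector frame as the comment describes.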

  • @abhishekvkulkarni8456
    @abhishekvkulkarni8456 1 year ago +1

    How can we contact you?

    • @roboage1027
      @roboage1027  1 year ago

      Sorry: robo_age. That's my Insta ID.

  • @govindnair5407
    @govindnair5407 1 year ago

    Do you have a GitHub repo now?

    • @roboage1027
      @roboage1027  1 year ago +1

      Not yet, sorry. I just need to clean up some of my code before showing it to people. But if you have any questions, please ask and I'll try to answer :)