Daniel DeTone
  • 9 videos
  • 102,137 views

Videos

Person Tracking in a Lecture Hall Setting
149 views • 9 years ago
Deformable Parts Model detector, particle filter, MCMC sampling (a particle-filter sketch follows the video list).
Robust Locally Weighted Regression for Detection Smoothing Demo #2
1.2K views • 9 years ago
Project SEDA
91 views • 9 years ago
s-e-d-a.github.io/
KITTI dataset localization using modified PTAM (Demo)
481 views • 9 years ago
By removing some of the long-term pose optimizations and limiting the allowed number of bundle adjustment iterations, I was able to modify PTAM to work in an outdoor localization setting. Here is a quick demo (a sketch of the iteration-capping idea follows the video list). Code (GitHub): github.com/ddetone/PTAM-for-KITTI
Projectile Prediction and Robotic Retrieval using Kinect
1.3K views • 10 years ago
Final project video for EECS 498 (Autonomous Robotics Lab) during the Winter 2013 term.
Simultaneous Localization and Mapping (SLAM)
97K views • 10 years ago
PTAM iPhone demo
783 views • 10 years ago
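
The person-tracking video above lists a Deformable Parts Model detector, a particle filter, and MCMC sampling as its ingredients. As a rough illustration of the particle-filter piece only, here is a minimal sketch; the detector_score function is a hypothetical stand-in for a real DPM response, not the code from the video.

```python
import numpy as np

# Minimal particle-filter update for 2-D person tracking.
# detector_score is a hypothetical stand-in for a Deformable Parts
# Model response; it is NOT the code used in the video.

def detector_score(xy):
    # Toy likelihood that peaks at the image center (placeholder).
    return np.exp(-0.001 * np.sum((xy - np.array([320.0, 240.0])) ** 2, axis=1))

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0], [640, 480], size=(N, 2))  # (x, y) hypotheses

for _ in range(10):  # one iteration per video frame
    # 1. Predict: diffuse particles with a random-walk motion model.
    particles += rng.normal(0.0, 5.0, size=particles.shape)
    # 2. Weight: score each particle with the detector response.
    w = detector_score(particles)
    w /= w.sum()
    # 3. Resample: keep particles in proportion to their weights.
    particles = particles[rng.choice(N, size=N, p=w)]

estimate = particles.mean(axis=0)  # posterior-mean track position
print(estimate)
```

Roughly speaking, the MCMC sampling named in the description would replace the plain resampling step with Markov-chain moves, which tends to scale better when tracking several people at once.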
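
The KITTI demo's description mentions capping the number of bundle adjustment iterations so PTAM stays fast enough outdoors. As a loose illustration of that trade-off (not PTAM's actual C++ bundle adjuster), here is a sketch using SciPy's least_squares, whose max_nfev parameter bounds the optimizer's work; the residual function and data below are made up.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustration of the trade-off behind the PTAM tweak: cap the
# optimizer's work so each bundle adjustment returns quickly.
# reprojection_residuals and the data below are made up; this is
# not PTAM's actual (C++) bundle adjuster.

def reprojection_residuals(params, observations):
    # Toy residual: difference between predicted and observed 2-D points.
    predicted = params.reshape(-1, 2)
    return (predicted - observations).ravel()

observations = np.random.rand(20, 2)  # fake 2-D feature observations
x0 = np.zeros(2 * len(observations))  # initial estimates, deliberately poor

# max_nfev bounds the number of function evaluations, trading
# accuracy for per-frame speed, in the spirit of limiting BA iterations.
result = least_squares(reprojection_residuals, x0,
                       args=(observations,), max_nfev=10)
print(result.cost, result.nfev)
```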

Comments

  • @oliverjepp3113 • 2 months ago

    can't believe we already had this 10 years ago

  • @51Chen • 1 year ago

    NICE WORK!!!

  • @impulserr • 1 year ago

    Nice, but too many things are predefined, like the blue lines and triangles. It would be much harder with unknown features.

  • @muskduh • 2 years ago

    thanks

  • @jaylton_alencar • 2 years ago

    The best of the best!

  • @emmanuellerousseau7314 • 2 years ago

    Hi, could you please help me? Can there be a 21-hour time difference across only 83 km? The 83 km Bering Strait separates Uelen in Siberia and Point Hope in Alaska, which have a 21-hour time difference: ruclips.net/video/sGBcexJ8ZHA/видео.html Thanks a lot in advance!

  • @mistervoldemort7540 • 3 years ago

    Well explained, thanks

  • @nickname2805 • 3 years ago

    Is the paper published? If so, where can I find it? Thanks.

  • @noorulhassan3560 • 3 years ago

    Hi, I am doing my final-year project on SLAM. Can you tell me whether your experiment fused the visual module with a LIDAR module? Could you also provide some relevant materials?

  • @varunittigi9288 • 4 years ago

    I want to get started with SLAM; please recommend some material. Thank you.

  • @dwiprayetno4294 • 4 years ago

    Can I have the source code, sir?

  • @oscar187oscar • 5 years ago

    Hi, I would like to know how I can start developing software like this. Can you give me some advice?

  • @AliAbdoulhamid • 5 years ago

    Hello, could you please tell me which algorithm and which software to use for visual SLAM? (I use a small robot equipped with a Raspberry Pi 3 camera.) Thank you.

  • @pearinnovation1419 • 6 years ago

    Hello, can anyone tell me what software is used to map the camera feed in 3D?

    • @Personnenenparle • 5 years ago

      It's called SLAM (or S-PTAM); you can program this with Python. There is also KinectFusion if you want something simpler.

    • @danieldetone337 • 5 years ago

      We wrote our own software but never released it on GitHub :/

    • @charliebaby7065 • 4 years ago

      @danieldetone337 Wha? Omg... why? Why?!

  • @myperspective5091 • 6 years ago

    I always liked robots with these types of capabilities. I always wanted one.

  •  7 years ago

    Hi Daniel, what Java software do you use to create the map? Do you have a GitHub repo? Many thanks in advance.

    • @danieldetone337 • 5 years ago

      We wrote our own software but never released it on GitHub :/

  • @bestest43 • 7 years ago

    Hey guys, well done on your project; I really liked it. I am doing a similar final project, but in my case the catcher is a KUKA iiwa 7 R800 manipulator. I am trying to apply a least-squares calculation but am having trouble printing the predicted coordinates. I really liked your simulation predicting trajectories. Nice work! By the way, I would really appreciate any advice. Thank you!

  • @game-xg4th • 7 years ago

    Do you use only a camera, or do you also use sensors such as ultrasonic sensors?

  • @robosergTV • 7 years ago

    But how do you compute the distance to the walls? Do you do monocular SfM, i.e., 3D scene mapping?

    • @danieldetone337 • 5 years ago

      The scale of the scene can be determined from the size of the green triangles, which is known beforehand. (A sketch of this scale-recovery idea appears after the comments.)

  • @SakthiTharan • 8 years ago

    Do you have any papers on this? Can you share more details?

    • @danieldetone337 • 5 years ago

      I don't. This was a project I did in my senior year of undergrad, and we never had enough time to write it up. Wish I had, though!

    • @danieldetone337 • 5 years ago

      If you want to learn more about the fundamentals of this project, check out these great lecture notes: april.eecs.umich.edu/courses/eecs467_w14/wiki/index.php/Lecture_Slides

    • @dhak7650 • 3 years ago

      I know it's too late but for others, look at this: ORB-SLAM2 arxiv.org/abs/1610.06475

    • @ntd252 • 3 years ago

      @danieldetone337 Hi, can you re-upload the documents?

    • @tominfotech • 1 year ago

      @danieldetone337 nice

  • @hansolo8814 • 8 years ago

    Where is the iPhone?

  • @amiltonwong • 9 years ago

    Hi Daniel, the 3D reconstruction points are in different colors; is that related to different categories like road, tree, sky, etc.?

    • @danieldetone337 • 9 years ago

      Nope, no scene recognition done here. The 3D points are colored by their distance to the camera. (See the coloring sketch after the comments.)
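
Daniel's last reply says the reconstructed points are colored by their distance to the camera. A minimal sketch of that visualization, with random stand-in points and the camera assumed at the origin (this is not the project's actual code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Color reconstructed 3-D points by their distance to the camera,
# as in the reply above. The points are random stand-ins; the
# camera is assumed to sit at the origin.

points = np.random.randn(1000, 3) * [5.0, 2.0, 10.0] + [0.0, 0.0, 15.0]
dist = np.linalg.norm(points, axis=1)  # Euclidean distance to the camera

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(points[:, 0], points[:, 1], points[:, 2],
                c=dist, cmap="viridis", s=2)  # near-to-far mapped to a colormap
fig.colorbar(sc, ax=ax, label="distance to camera (arbitrary units)")
plt.show()
```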
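
His earlier reply to @robosergTV notes that the known size of the green triangles fixes the scene scale, a standard trick for resolving the scale ambiguity of a single camera. A minimal sketch under the pinhole model; every number here is made up for illustration:

```python
# Recovering metric scale from a landmark of known size under the
# pinhole model. Every number here is made up for illustration.

focal_px = 600.0         # camera focal length, in pixels
triangle_side_m = 0.20   # known physical side length of a green triangle
triangle_side_px = 48.0  # measured side length in the image

# Pinhole relation: size_px = focal_px * size_m / depth_m
depth_m = focal_px * triangle_side_m / triangle_side_px
print(f"landmark depth: {depth_m:.2f} m")  # 2.50 m

# The ratio between this metric depth and the same landmark's depth in
# the SLAM map (arbitrary units) rescales the whole map to metric units.
slam_depth_units = 1.7
scale = depth_m / slam_depth_units
```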