KITTI Odometry with OpenCV Python - Pt.1 - Fundamentals (Autonomous Vehicles)

  • Published: 29 Dec 2024

Comments • 56

  • @blanamaxima
    @blanamaxima 1 day ago

    Rectified means that the camera images were de-warped; otherwise one would need to apply the lens correction :) You will not see any issues if you use the pre-rectified data, but if you do things yourself you will face the problem. Good job going through everything.
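
    For anyone working with their own (non-rectified) camera, the lens correction step this comment mentions can be sketched with OpenCV's `cv2.undistort`. The camera matrix and distortion coefficients below are made up for illustration; real values would come from calibrating your own camera (e.g. with `cv2.calibrateCamera`).

    ```python
    import cv2
    import numpy as np

    # Made-up camera matrix and distortion coefficients for illustration;
    # real values come from calibrating your own camera.
    K = np.array([[700.0,   0.0, 620.0],
                  [  0.0, 700.0, 190.0],
                  [  0.0,   0.0,   1.0]])
    dist = np.array([-0.35, 0.15, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    raw = np.zeros((370, 1226), dtype=np.uint8)  # stand-in for a raw frame
    undistorted = cv2.undistort(raw, K, dist)    # de-warp before using the image
    print(undistorted.shape)  # (370, 1226)
    ```

    The pre-rectified KITTI images have already had this (and stereo rectification) applied, which is why the tutorial can skip it.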

  • @방현성-g2g
    @방현성-g2g 4 months ago

    Hello, I am just a random guy trying to learn about object detection. I knew very little about it, so I couldn't read and understand papers. Thanks to you, a random dude on YouTube, I learned a lot.
    I have to say this again: thank you.

  • @gullisreisen
    @gullisreisen 3 years ago +4

    I am currently working on my bachelor thesis. This has saved me so much time! I greatly appreciate the effort and detailed explanations. Keep up the great work!

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago

      Hey,
      which dataset should I download?
      I have just started and I am very confused.

  • @rishinigam9501
    @rishinigam9501 3 years ago +4

    This is really a great place for beginners like us in the area of odometry to follow along. I jotted down some running notes while you were explaining things in this video, and I'm happy to share further running notes on the next videos as well. I will email you the documents with the notes I created; hopefully they will be useful in the future too. Once again, thanks for sharing these videos, they're really helpful :)

  • @chexter-wang
    @chexter-wang 3 months ago

    Thanks for the tutorial! A small tip: instead of rounding each value with float.round(4) for display, numpy's print precision can be set globally with numpy.set_printoptions(precision=4)
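
    A minimal sketch of that tip (the matrix values below are made up for illustration, not the actual KITTI calibration):

    ```python
    import numpy as np

    # Set numpy's print precision once instead of rounding every value by hand.
    np.set_printoptions(precision=4, suppress=True)

    P0 = np.array([[718.85600567, 0.0, 607.19280109],
                   [0.0, 718.85600567, 185.21570587]])
    print(P0)  # every element now prints with at most 4 decimal places
    ```

    `suppress=True` additionally avoids scientific notation for small values, which keeps calibration matrices readable.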

  • @ducnguyen4973
    @ducnguyen4973 3 years ago +2

    Great combination of theory and hands-on coding exercises. Thank you so much.

  • @alessandrograttarola2009
    @alessandrograttarola2009 3 years ago +2

    Great videos! As a PhD student new to computer vision I'm finding your videos absolutely great for both theory and practice! Keep it up!

    • @natecibik5505
      @natecibik5505  3 years ago +1

      Awesome! Glad you are finding them useful. I should have the third video up in the next couple of days.

  • @msuegajnriorpenda9745
    @msuegajnriorpenda9745 3 years ago +1

    Awesome Video! I'm already learning a lot from this collection. I find these concepts coming together in a nice flow.

    • @natecibik5505
      @natecibik5505  3 years ago

      Awesome, glad that you're finding them useful! Part 5 just went up tonight.

  • @muhammadimad9469
    @muhammadimad9469 3 years ago +1

    Great work...! Hope to hear more from you, especially on topics like 3D computer vision.

  • @mohamadalikhani2665
    @mohamadalikhani2665 3 months ago

    Regarding projecting a 3D point onto the image plane of the camera (the part skipped in the Pt.1 video):
    In the provided code, in cells [26] and [27], after moving the some_point 3D point into the left camera coordinate frame, the code only uses the k1 intrinsic matrix to project the point onto the 2D image plane.
    Shouldn't we use the complete projection matrix?
    I mean P1 = k1[r1|t1]
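
    For what it's worth, once a point is already expressed in the camera's coordinate frame, K alone is enough: the [R|t] part of P = K[R|t] is exactly the frame change that was already applied. A minimal numpy sketch (with made-up K, R, t values, not the actual KITTI calibration) showing both routes land on the same pixel:

    ```python
    import numpy as np

    # Made-up intrinsics and extrinsics for illustration (not KITTI's values).
    K = np.array([[700.0,   0.0, 620.0],
                  [  0.0, 700.0, 190.0],
                  [  0.0,   0.0,   1.0]])
    theta = 0.1  # small world -> camera yaw
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.5, -0.1, 0.2])

    point_world = np.array([2.0, 1.0, 10.0])

    # Route 1: move the point into the camera frame, then project with K alone.
    point_cam = R @ point_world + t
    u1 = K @ point_cam
    u1 = u1[:2] / u1[2]

    # Route 2: full projection matrix P = K[R|t] on homogeneous coordinates.
    P = K @ np.hstack([R, t.reshape(3, 1)])
    u2 = P @ np.append(point_world, 1.0)
    u2 = u2[:2] / u2[2]

    print(np.allclose(u1, u2))  # True: [R|t] is the frame change already applied
    ```

    So using the full P on world-frame homogeneous coordinates, or K on camera-frame coordinates, is the same computation split at different points.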

  • @Ooulu
    @Ooulu 2 years ago

    Your content is amazing!!! It feels like reading a book.

  • @sukantrai5251
    @sukantrai5251 10 months ago

    Damn!!! This is exactly what I needed !! Thanks !!!

  • @AbidAhsan-yp4dc
    @AbidAhsan-yp4dc 3 years ago

    Damn... this is what I was looking for, for so long!... Thank you so much.

    • @natecibik5505
      @natecibik5505  3 years ago +1

      yvw :) Glad that you found it helpful!

    • @AbidAhsan-yp4dc
      @AbidAhsan-yp4dc 3 years ago

      Can we use any deep learning method for stereo matching here?

  • @Nusaybacookies
    @Nusaybacookies 3 years ago

    What a masterpiece. Thanks Nate!

  • @vg5028
    @vg5028 3 years ago +1

    Thanks for the videos! I really appreciate you putting together the Jupyter notebook. There are some typos here and there which I am more than happy to submit corrections for, but I'm not super familiar with pull and merge requests in git (hopefully I'll have some time to learn that soon).

    • @natecibik5505
      @natecibik5505  3 years ago

      My email is nate.cibik@gmail.com, feel free to follow up about it. This could make some good practice :)

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago

      Hey,
      which dataset should I download?
      I have just started and I am very confused.

  • @ayankashyap5379
    @ayankashyap5379 3 years ago +1

    Thank you so much for making this video! subscribed and waiting for more stuff from you

    • @natecibik5505
      @natecibik5505  3 years ago

      Yvw :) Thanks for the sub! I'm working on Pt.4 of the series this week, so you should see it up in the next few days.

  • @mmdalikhani-s2j
    @mmdalikhani-s2j 3 months ago

    Bro, you are my hero😍😍😍

  • @ayushmankumar7
    @ayushmankumar7 3 years ago +2

    You look like Gilfoyle from Silicon Valley XD. Amazing video! Thanks for uploading such content! I would like to see more computer vision videos on this channel, maybe things like visual SLAM, DSO, triangulation, bundle adjustment, etc.

    • @natecibik5505
      @natecibik5505  3 years ago +2

      😂 I can see it. Yeah, I am working on building more knowledge in those topics so that I can share it. In the meantime, I would suggest Cyrill Stachniss' YouTube channel, which has lectures on at least some of those topics. His lectures are very good, although he doesn't walk through any coding, unfortunately.

    • @ayushmankumar7
      @ayushmankumar7 3 years ago +1

      @@natecibik5505 Yes... I follow him... His channel is GOLD🔥🔥🔥... All the best for your YouTube journey ❤️

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago +1

      Hey,
      which dataset should I download?
      I have just started and I am very confused.

    • @ayushmankumar7
      @ayushmankumar7 3 years ago +1

      It's called the KITTI dataset. Under KITTI, you will find the Odometry dataset. Download that one.

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago +2

      @@ayushmankumar7 There are 5 datasets under odometry, will any one do?
      Also, I need help: I am doing a project on LiDAR-based object detection, so which resources and datasets would be helpful?

  • @phani437
    @phani437 10 months ago

    I have referred to many sources regarding cx and cy. Some of them claim it is just due to sensor misplacement (like you have mentioned), and others say that the origin is considered to be at the corner instead of the center, hence it is a translation (Andreas Geiger's lecture). I'm confused. What is the correct explanation?

    • @natecibik5505
      @natecibik5505  10 months ago

      cx and cy are in the camera intrinsic matrix to offset the origin (0, 0) of the xy' (xy prime) space to this value, which represents the intersection of the optical axis of the camera lens with the image plane, effectively translating the origin (0, 0) of the resulting uv (pixel) space to the upper left corner of the image. Doing a matrix multiplication by hand can be helpful for understanding the effect these values have on the output. In this case, you're multiplying the intrinsic matrix with an xy' vector containing (x, y, 1), so the last column of the intrinsic matrix, where the cx and cy values are located, is always multiplied by exactly 1 to perform this translation of the origin.
      As for sensor misalignment with the optical axis of the camera lens, you may notice for many cameras that the cx/cy values are not exactly half the image width and height, which indicates the center of the image sensor is slightly offset from the optical axis of the lens. For cameras with large misalignment and distortion, one side of the image may have noticeably more distortion than the other.
      I hope this helps clear up your understanding of cx/cy and how to interpret their values.
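
      The origin-translation role of the last column can be checked numerically; a tiny sketch with made-up intrinsics (not KITTI's calibration values):

      ```python
      import numpy as np

      # Made-up intrinsics for illustration (not KITTI's calibration values).
      fx, fy, cx, cy = 700.0, 700.0, 620.0, 190.0
      K = np.array([[fx, 0.0, cx],
                    [0.0, fy, cy],
                    [0.0, 0.0, 1.0]])

      # A point on the optical axis has x' = y' = 0 in normalized coordinates.
      # Because the homogeneous coordinate is 1, the last column of K simply
      # translates it to (cx, cy) in pixel space.
      xy_prime = np.array([0.0, 0.0, 1.0])
      u, v, w = K @ xy_prime
      print(u / w, v / w)  # 620.0 190.0
      ```

      Any other normalized point is scaled by fx/fy first and then shifted by the same (cx, cy), which is exactly the origin translation described above.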

  • @NP-hf5ri
    @NP-hf5ri 3 years ago +1

    Thanks a lot, I am new to computer vision and this is really very helpful.
    Using data from the poses file, you calculated the distance covered by the vehicle between the camera's 0th frame and 1st frame, which was around 0.85 m. I was a bit curious to know whether we can calculate the distance covered by the vehicle between each pair of consecutive frames over the complete data sequence.

    • @natecibik5505
      @natecibik5505  3 years ago +1

      You can. The translation vector component (4th column) of the 3x4 transformation matrices stored in the poses file essentially locates the origin of the camera in 3D space w.r.t. the camera's first pose over time. That means you can just subtract consecutive vectors from one another to get your x, y, z deltas in a chosen direction, or take the Euclidean distance between them to get the overall distance between them.
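
      That subtraction/norm over a whole sequence can be sketched as below, assuming `poses` is an (N, 3, 4) array built by reshaping the rows of the poses file (the array name and the stand-in data here are illustrative, not from the notebook):

      ```python
      import numpy as np

      # Stand-in poses: identity rotations with the camera moving 0.85 m in z
      # per frame (made-up data; in practice, load and reshape the poses file).
      N = 5
      poses = np.tile(np.hstack([np.eye(3), np.zeros((3, 1))]), (N, 1, 1))
      poses[:, 2, 3] = 0.85 * np.arange(N)

      positions = poses[:, :, 3]                   # 4th column: camera origin
      deltas = np.diff(positions, axis=0)          # per-step x, y, z differences
      distances = np.linalg.norm(deltas, axis=1)   # Euclidean distance per step
      print(distances)  # [0.85 0.85 0.85 0.85]
      ```

      Summing `distances` would then give the total path length driven over the sequence.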

    • @NP-hf5ri
      @NP-hf5ri 3 years ago

      @@natecibik5505 Thanks a lot. By just subtracting consecutive vectors I was getting negative values for z in some instances. I tried the Euclidean distance as well, and I believe it gives better results. Thank you.

  • @luke7503
    @luke7503 2 years ago +1

    thank you for being that dude

  • @hasihasi7163
    @hasihasi7163 2 years ago +1

    Thanks for the videos. This is awesome. Insta subbed.

  • @marychrisgo2073
    @marychrisgo2073 2 years ago

    Hello! Thanks for a very clear explanation of odometry, I appreciate it. I tried downloading the dataset, and I am not sure if they updated it, but right now it only has a 'sequences' folder. I cannot find the 'poses' folder. Am I downloading the wrong one?

    • @luke7503
      @luke7503 2 years ago

      lmk if you figure this out, but I think maybe the sequences folder is the big one you downloaded, and poses is in 'Download odometry ground truth poses (4 MB)'?

    • @marychrisgo2073
      @marychrisgo2073 2 years ago

      @@luke7503 Found it, thank you!

  • @foreverfeelgood727
    @foreverfeelgood727 3 years ago

    You're the real MVP. Thank you.

  • @oguzhanbuyuksolak4536
    @oguzhanbuyuksolak4536 2 years ago

    Thanks, this is great!

  • @evanhemingway2115
    @evanhemingway2115 2 years ago

    Well done.

  • @ShubhamTiwari-ks2qg
    @ShubhamTiwari-ks2qg 3 years ago

    Hey,
    which dataset should I download?
    I have just started and I am very confused.

    • @natecibik5505
      @natecibik5505  3 years ago +1

      Hey, sorry for the late reply, I've been away from YouTube for a while. This was done with the KITTI Odometry dataset; specifically, the grayscale images, lidar, and calibration files were used. www.cvlibs.net/datasets/kitti/eval_odometry.php

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago

      @@natecibik5505
      Thanks for the response.
      Is there any way to use less LiDAR data? I can see the LiDAR dataset is 80 GB,
      and I am a student who is making a project out of it.

    • @natecibik5505
      @natecibik5505  3 years ago +1

      @@ShubhamTiwari-ks2qg Technically you don't need LiDAR to do the visual odometry technique that I taught in this video series. There are a couple of times I play with the LiDAR data for fun, but the feature matching and pose estimation were done with the stereo images only.
      If you do download the LiDAR, remember that there are 22 scenes in the dataset, so if you're only working with, say, the first scene, it's only 8.22 GB of LiDAR.

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago

      @@natecibik5505 Thank you so much for the information.
      Also, if you can suggest any resources for my project (object detection based on a LiDAR dataset), it would be very helpful.

    • @ShubhamTiwari-ks2qg
      @ShubhamTiwari-ks2qg 3 years ago

      @@natecibik5505 Technically, I am at a very basic level in this area, so I want to learn all the small details.

  • @thorkrogaardhansen462
    @thorkrogaardhansen462 3 years ago

    Amazing :D

  • @catz6457
    @catz6457 3 years ago

    thank you!