YouTube recommended this video to me again, and I still have to say, man, you made it very clear. Got to learn about triangulation, stereo cams, and depth.
Good to see you uploading again🔥👍🏿
How does this not have more views!? Great video
I appreciate this video isn't quite as "flashy" as part one of the series, though I felt it was important to delve into the "HOW" of seeing in 3D using two cameras.
The quick primers on how StereoBM and StereoSGBM work do gloss over a large amount of detail, which I've left for you to explore if you want to jump further into the computer vision side of this project!
I spent days and days trying to nail down a decent overview of the algorithms, and I hope they make sense! Also, please let me know if I got anything wrong 😅
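If anyone wants to poke at the two matchers themselves, OpenCV exposes both directly. A rough sketch, assuming a pair of already-rectified grayscale captures saved as left.png and right.png (hypothetical filenames, and the tuning numbers are only starting points):

```python
import cv2

# Hypothetical filenames: a pair of already-rectified, same-size grayscale images.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: fast, but noisy in low-texture regions.
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp_bm = bm.compute(left, right).astype("float32") / 16.0  # values come back in 1/16-pixel units

# Semi-global matching: slower, but smoother thanks to the P1/P2 smoothness penalties.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)
disp_sgbm = sgbm.compute(left, right).astype("float32") / 16.0

# Scale to 8-bit just for viewing the disparity maps side by side.
cv2.imwrite("disp_bm.png", cv2.normalize(disp_bm, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U))
cv2.imwrite("disp_sgbm.png", cv2.normalize(disp_sgbm, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U))
```

Both matchers return disparity in 1/16-pixel units, hence the divide by 16; depth then follows from the baseline and focal length via the triangulation relation covered in the video.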
Loved learning about how you got a portion of the project done, thank you! Can’t wait for the next video!
Glad you enjoyed it! The next one will be focusing heavily on a neural network called YOLOv3, and I’ll be covering its history as well as some controversy around it too 😅
Thank you for explaining the process step by step. Looks so cool.
Well done!
This is a really cool video, looking forward for the next update!
Loving your videos thus far. This series is fun. 😀
Video 3 is midway through the editing process right now! Hoping to release this coming weekend, but I can’t guarantee that 😅
@@akamatchstic I don't mind waiting for quality content. Take your time. You're doing great 👍
Appreciate the attention to detail. Great video.
Great work! What image did you boot your Nano from? I used the official one from Nvidia but it seems very outdated.
Great video bro, just what I was looking for.
Damn, you deserve more views.
Great job!
Your videos are amazing!
It would be interesting to compare the results this setup can provide versus what the recently released Arducam "ToF" depth camera can do, although the depth range of that camera may be limited.
Dope work 🔥🔥🔥
DIY cheap microwave radar + laser scanner for object distance
RF TX+RX for auto following
or you need a bigger drone to carry a mini-PC for OpenCV
What is the power usage of the Jetson Nano while depth mapping?
EPICCCCCC!!! :)
I'm trying to follow along, and despite my years of programming experience I'm not familiar with Python environment managers. What version of Conda are you installing on the Jetson?
Haha, you and me both. Writing scripts is OK, but setting up virtual environments etc. has proved a sticking point.
It’s been quite a while since I looked at this, and since the Jetson was last booted up. Half sure I skipped using Conda in the end, and installed stuff manually. Having to install an ARM-specific one like miniconda seems to ring a bell!
Dude, how are you able to use CUDA with your OpenCV? I have the 2GB version but it seems I can't find a way to do it. I've reinstalled my JetPack, compiled OpenCV locally with the parameters the tutorials provide, and so on. It really sucks that I have to use the CPU only.
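For anyone else stuck on this: a quick way to check whether an OpenCV build can actually use CUDA. Note the standard pip opencv-python wheels are CPU-only, so a from-source build on JetPack is usually needed before this reports a device:

```python
import cv2

print(cv2.__version__)
# 0 means the build either wasn't compiled with CUDA or can't see the GPU.
print(cv2.cuda.getCudaEnabledDeviceCount())

# The build flags show whether CUDA/cuDNN were compiled in at all.
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line or "cuDNN" in line:
        print(line.strip())
```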
Brother, should I use the Jetson in place of the Raspberry Pi, or do I have to use both?
Can you add a LiDAR or VCSEL sensor like the iPhone 11's for mapping the 3D environment? Or maybe a pattern projector to make a structured-light scanner :D? Open source :D :D :D :D
Thank you man! If I used the AANet+ AI stuff, what is the time to process one video frame? Does it work well? Can you share the AI performance?
Unfortunately, I have no idea. 😞 I attempted to run the PyTorch variant of AANet+ on my Jetson Nano, but I ended up hitting a thermal cutoff and the device shut itself down. Haven’t tried since…!
Which IMU are you using for the drone? Is it precise enough, or do you still need GPS support?
I'm using the MPU9250 on a PXFmini, though for this project I think I'm going to need a global reference frame (i.e., GPS). I'll be more sure about that once I've got more of the programming side sorted out 👍
Awesome video...
I too had a dream of building a drone back in 8th grade...
I'm now in the 2nd year of my engineering degree lol...
Can you help with where I can get started? 😅😁
The hardest part at the very start was figuring out what components you need!
From memory, I chose the frame first, then specc’d the motors to handle an estimated final weight correctly. I’d recommend checking out PixHawk as your flight controller if you want to go for something that’s not for racing.
If it helps, you can actually buy kits with all the components you’ll need! Searching “PixHawk drone kit” on even just Amazon seems to pop up some good results. Or, use them as a guide to buy your own components from e.g. HobbyKing 😅
@@akamatchstic thanks a lot for your reply 😁
Could you please explain how the code runs after connecting the camera? I'm actually not able to access the camera using Python code.
Unfortunately it’s been nearly two years since I last looked at the code for this video, so I don’t have much of an idea how it works.
@@akamatchstic Oh okay, thanks for replying. If you have any information on setting up the camera with the board, please help me out. Thank you.
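(Adding this in case it helps anyone later: the Nano's CSI cameras normally need to be opened in OpenCV through a GStreamer nvarguscamerasrc pipeline rather than a plain device index. A minimal, untested sketch, assuming an IMX219-style CSI camera and an OpenCV build with GStreamer support:)

```python
import cv2

def gstreamer_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    # nvarguscamerasrc drives the CSI camera; appsink hands frames to OpenCV.
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("got frame:", ok, frame.shape if ok else None)
cap.release()
```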
Hey! So I tried using the code on your GitHub for this (using a Jetson Nano and the same camera). But when I run python3 calibrate.py I get the following error: "Cannot perform reduce with flexible type". How would I fix this? Thanks!
Sorry for the delay in response! In all honesty, I have no idea on that one, and it's been quite a long time since I last looked at that code. It sounds like that's happening somewhere in one of the project dependencies? Try double-checking that the installed versions of e.g. opencv-python work with your current version of Python.
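For what it's worth, that particular message comes from NumPy: it gets raised when a reduction (sum, mean, min, etc.) runs on an array that ended up with a string or object dtype, which usually means numeric data got read in as text somewhere upstream. A tiny illustration, not specific to this repo:

```python
import numpy as np

bad = np.array([["1.0", "2.0"], ["3.0", "4.0"]])  # dtype ends up as a string type
try:
    bad.mean()
except TypeError as exc:
    print(exc)  # cannot perform reduce with flexible type

good = bad.astype(np.float64)  # cast to numbers first and the reduce works
print(good.mean())  # 2.5
```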
Did you fix this?
How do I find out the cudnn and cudatoolkit versions?
It’s been so long since I’ve looked at this, I’m not sure to be honest - sorry!
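That said, if PyTorch is installed in the environment, it can at least report the CUDA and cuDNN versions it was built against. A quick check, assuming a working torch install:

```python
import torch

print(torch.__version__)
print(torch.version.cuda)              # CUDA toolkit version the wheel was built against
print(torch.backends.cudnn.version())  # e.g. 8200 -> cuDNN 8.2.x
print(torch.cuda.is_available())       # whether the GPU is actually visible right now
```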
@@akamatchstic oh, ok, thanks
@@akamatchstic Can I edit the environment.yml file to match the versions present in my system, or does the project only work with those specific versions?