Really helpful, many thanks. You're an excellent teacher and communicator. More vids, please.
Thank you so much! From a student in South Korea.
Thank you, this video is pure gold!
Unfortunately I have a problem with the SDK when using the C# wrapper. When I open the folder downloaded from GitHub, I can't find the solution file... I guess I'm doing something wrong. Could you please help me?
Good job. Followed the 1.5x suggestion :)
I have a question, and I hope you can help me. I already downloaded the SDK, but in the wrappers folder there's no .sln file. I really don't know what to do.
Can you please show how to set up the Intel SDK program with NVIDIA Broadcast to use a 3D depth image as video?
Hello sir,
It was a very helpful video indeed.
I have installed the software and am looking forward to robotics applications.
I would like to know how I can get the depth coordinates using the camera.
The camera model I have is the D435.
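Not the author, but in case it helps: with the official Python wrapper (pyrealsense2) you can read the depth at a pixel in a few lines. A minimal sketch, assuming a D435 streaming at 640x480 (the pixel coordinates below are just example values):

    import pyrealsense2 as rs

    # Start a depth stream (640x480 @ 30 fps is a commonly supported D435 mode)
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)

    try:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        u, v = 320, 240                         # example pixel (image center)
        dist = depth_frame.get_distance(u, v)   # distance in meters at (u, v)
        print("Depth at ({}, {}): {:.3f} m".format(u, v, dist))
    finally:
        pipeline.stop()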
Could you run two D430s in the software? I saw something like this on display, and they used Unity to help create the illusion with four cameras instead of one, also creating a stronger sense of depth.
Do you do consulting?
It would depend on the project. Email me at Farnam7@gmail.com or look me up on LinkedIn. Send me a brief overview of the requirements, please, and I'll let you know what I can do.
Very helpful, thank you.
Can I get the same functions you showed in Intel Euclid?
Thank you so much! I think it should work with the Euclid since Intel makes the Atom processor, and the Euclid does have USB 3.0 and runs Ubuntu. You may, however, need a micro-USB to female USB adapter; in that case, be careful to purchase one that is short or designed to boost the signal. If you have not yet purchased the Euclid, I'd check out this combo: Intel® RealSense™ Depth Camera D435 + AAEON® UP Board. The reason is that the Euclid is now a legacy product and it already has a built-in ZR-series camera; I can, however, still imagine using both in a robotics application. Let me know if you have any other questions!
Thank you so much for your response. I already have the Euclid, but I'm really having a hard time getting the camera information into a Python environment on my desktop. I need to first establish a Linux SSH channel, then a ROS node, with which I have very little experience.
Anyhow, thanks again.
Can I use the RealSense camera with Unreal Engine? I've figured out it can be done with Unity, but there's no information for UE4 :)
Hello,
maybe you can tell me how to get the x, y, z information for a pixel (u, v)? I have the D415 camera but only get the distance from a pixel to the camera with depth.get_distance(). But I think that's equal to sqrt(x^2 + y^2 + z^2). Isn't it possible to get only x, only y, and only z? Thank you for your help.
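As far as I know, depth.get_distance(u, v) returns the depth along the camera's Z axis for that pixel rather than the Euclidean range, and the SDK can deproject a pixel plus its depth into a full (x, y, z) point using the stream intrinsics. A rough sketch with pyrealsense2 (the pixel coordinates are placeholders):

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()  # default config includes a depth stream

    try:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        # Intrinsics of the depth stream (focal lengths, principal point, distortion)
        intrin = depth_frame.profile.as_video_stream_profile().intrinsics

        u, v = 320, 240                      # example pixel of interest
        d = depth_frame.get_distance(u, v)   # depth at (u, v), in meters

        # Deproject pixel (u, v) with depth d into a 3D point in the camera frame
        x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], d)
        print("x = {:.3f}, y = {:.3f}, z = {:.3f} (meters)".format(x, y, z))
    finally:
        pipeline.stop()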
Will the D400 series work with existing OpenNI-based software?
Sorry, I am a noob. Need help.
31:10 "We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us."
lol!
Why does the D415 need to have an RGB camera as well, when both the right and left cameras provide RGB color information?
Hi, I would love to ask: is it possible to use four D415 sensors to make a mesh cloud for a VR headset?
I have taken a picture with the Intel RealSense 435i. I have the depth picture, which has the color mapping. With the help of MATLAB, I can get R, G, B at any particular region. How can I calculate a single-integer "value" so that I can compare the depth of one point to another point in the picture?
The RealSense is an RGBD camera, so in addition to your R, G, and B channels, you have a depth channel. If you can determine the depth values at the two points, then the function to take the delta would be straightforward.
@@farnaminum Could you please explain to me how to get the depth value from the image? I have just started working with this camera.
@@harshvardhan8956 I have a similar doubt about how to acquire the depth value from the image.
@@sisirakottakkal7873 I have tried converting it to grayscale in MATLAB, but with no significant results. Btw, which project are you working on?
@@harshvardhan8956 I am working on a robotics application where I need the depth (distance) coordinates.
I am using the Intel RealSense Viewer software.
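For this thread: the colorized depth image isn't ideal for recovering exact values, since the color mapping compresses the 16-bit depth. If you can grab the raw depth frame (e.g. via pyrealsense2 rather than MATLAB on the saved picture), reading the depth at two pixels and taking the delta is straightforward. A rough sketch; the two pixel coordinates are just examples:

    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    profile = pipeline.start()

    try:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        # Raw 16-bit depth image as a NumPy array (values are in depth units)
        depth_image = np.asanyarray(depth_frame.get_data())
        depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

        # Two example pixels to compare (array is indexed [row, column] = [v, u])
        (u1, v1), (u2, v2) = (100, 200), (400, 200)
        d1 = depth_image[v1, u1] * depth_scale   # meters
        d2 = depth_image[v2, u2] * depth_scale   # meters
        print("Depth delta between the two points: {:.3f} m".format(abs(d1 - d2)))
    finally:
        pipeline.stop()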
Thanks, good video, well-described stuff!
Can I use the Intel RealSense Viewer to 3D scan objects?
Nope. Intel lists these 3 software tools on their website: realsense.intel.com/software-for-intel-realsense/#3d_scanning .... but I have also used RecFusion. Get the D415 for 3D scanning over the wider-angle D435. Best of luck!!
@@farnaminum I read I can record a .ply point cloud file and then convert it to .STL in MeshLab? Does that work?
@@lineage13 I think that would work, but you may run across some difficulties, i.e. holes in the mesh. The companies out there have been working on optimizing their scans for several months now... I would personally lean in that direction. Most of those software tools require $ to purchase; that was the downside.
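If you do go the .ply route, the Python wrapper can export a textured point cloud directly, which MeshLab can then open and mesh/convert to STL. A minimal sketch (the output filename is arbitrary):

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth)
    config.enable_stream(rs.stream.color)
    pipeline.start(config)

    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()

        # Build a point cloud from the depth frame, textured with the color frame
        pc = rs.pointcloud()
        pc.map_to(color)
        points = pc.calculate(depth)

        # Write a .ply file that can be opened in MeshLab for meshing / STL export
        points.export_to_ply("scan.ply", color)
    finally:
        pipeline.stop()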