How the Kinect Depth Sensor Works in 2 Minutes
- Published: 18 Sep 2024
- The Kinect uses a clever combination of a cheap infrared projector and camera to sense depth.
References:
• Video
www.google.com/...
campar.in.tum.d... (p. 33)
en.wikipedia.or... (Stereo triangulation)
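The stereo-triangulation idea the references describe reduces to similar triangles: a projected speckle shifts sideways in the camera image by an amount (the disparity) that depends on how far away the surface is. A minimal sketch, where the focal length and projector-camera baseline are illustrative stand-ins and not actual Kinect calibration values:

```python
# Depth from triangulation: Z = f * b / d, where
#   f = focal length of the IR camera (pixels)  -- assumed value
#   b = baseline between projector and camera (metres)  -- assumed value
#   d = disparity, the horizontal shift of a speckle (pixels)

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Convert a measured speckle shift into a depth estimate in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these example numbers, a speckle shifted by 29 pixels
# corresponds to a surface 1.5 m from the sensor:
print(depth_from_disparity(29.0))  # -> 1.5
```

Note the hyperbolic relationship: disparity shrinks with distance, which is why depth resolution degrades for far-away surfaces.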
I can hardly find words to express how much you have helped my research study using the Kinect.
Thank you!
Glad it helped! The best thanks is a link to this video from a website or forum.
I believe the important thing about the pattern is that it's random, so that the camera can differentiate between groups of speckles. The broader term for this is "structured lighting". Google Structured-light_3D_scanner
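The point of the irregular pattern is that any small window of dots is unique, so the camera can work out where a window landed just by searching along a row. A toy 1-D illustration of that matching step (the "reference row" here is a shuffled list, standing in for the memorised projector pattern; all names are made up for the example):

```python
import random

# A shuffled permutation: every contiguous window is unique, which is
# the property the irregular speckle pattern provides.
random.seed(1)
reference = list(range(64))
random.shuffle(reference)

def find_shift(observed_window, projected_at):
    """Search the reference row for the observed window of dots.
    The offset between where it was projected and where it is found
    is the disparity used for triangulation."""
    n = len(observed_window)
    for pos in range(len(reference) - n + 1):
        if reference[pos:pos + n] == observed_window:
            return pos - projected_at
    return None  # no match: occlusion or noise

# Simulate a window that was projected at column 15 but, because of
# the scene depth, is observed at column 20 -- a 5-pixel shift:
window = reference[20:29]
print(find_shift(window, 15))  # -> 5
```

If the pattern repeated (say, a regular grid), many positions would match and the shift would be ambiguous; irregularity is what makes the match unique.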
I've checked your channel and can confirm, you're a genius.
incredibly well explained and simple video.
Thanks, you're a good presenter. Simple & Concise :)
Good clarification. Should have said irregular pattern instead of random. I wonder if the dot pattern is the same on each one, or they're all calibrated to their own pattern.
SketchUp, Bamboo Tablet, Serif DrawPlus, CamStudio. All the drawing scenes are usually sped up during the editing process.
Totally guessing here, but I suspect there's some variance in the manufacturing, and that each unit gets calibrated at the factory.
Please let me know how the Kinect comes to know the angle of the speckle pattern?
One way to use 2 Kinects is to have one Kinect shaking side to side while the other one is still. The dots of the moving camera will look stationary to that camera while the other camera's dots are blurred and vice versa. V Motion Project did this. They also used one computer for each camera.
Very clear and concise. Great ! Thanks !
Good one. Can you tell me what software you are using for the drawings?
Good! Easy to understand the theory... Can you tell me how I can use it with MATLAB?
How does an IR sensor help to calculate depth better than a secondary camera would? Can you please explain that part again?
So does the camera recognise each part, i.e. the sectors in the red grid example, via unique speckle clusters?
Thanks for your knowledge
Good! Easy to understand the theory!!
Do you know if the Asus Xtion series of depth sensors work the same way? Would they have the same limitation of a single sensor in a room?
But let's say I was to take the cameras out of the Kinect and set a different distance between them; that would affect the angle, right? So it wouldn't be able to recreate the image?
Well, this is for a 3D scanner, basically.
Fantastic! I knew there was a reason I subscribed!
Do all units have the same fixed speckle pattern, or is it learned after it's created?
So you're saying the lights are somehow randomized (the LEDs are moved somehow), and then it's recalibrated? No. The pattern is predetermined. It might have been random at some point, but I doubt it.
You can use multiple Kinects; the main problem is that the USB bandwidth is too high for two Kinects on one computer.
Thank you!
Wonderful, thank you for explaining!
thank you! that was great!
Thank you
Thank you very much! This was a very helpful video!!! ;-)
subbed!
It's not random. It appears random but the device has to be aware of the pattern it is casting.
Cool
There is no such limitation with either...
I thought it was a time-of-flight lidar.
+Qinggeng Zhuang The new one is ToF; the old one uses triangulation.
***** That doesn't work; just like before, the two Kinects would confuse each other and wouldn't be able to triangulate points.
What if i told you, you don't need depth-sensor or any software.....well i just did...but will i tell you how...that's a billion dollar answer..but i'll take a couple hundred million. my name is not 4D for no reason