How the Kinect Depth Sensor Works in 2 Minutes

  • Published: 18 Sep 2024
  • The Kinect uses a clever combination of a cheap infrared projector and camera to sense depth.
    References:
    • Video
    www.google.com/...
    campar.in.tum.d... (p. 33)
    en.wikipedia.or... (Stereo triangulation)
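
To make the triangulation idea from the video concrete, here is a rough sketch of the underlying relation (depth = focal length × baseline / disparity). The focal length and projector-to-camera baseline below are made-up placeholder values, not the Kinect's real calibration.

```python
# Toy illustration of the triangulation behind the original Kinect's
# structured-light depth sensing. Parameter values are assumptions.

def depth_from_disparity(disparity_px, focal_length_px=580.0, baseline_m=0.075):
    """Depth (m) from the pixel shift between the observed speckle pattern
    and the stored reference pattern: Z = f * b / d (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    for d_px in (10.0, 20.0, 40.0):
        print(f"disparity {d_px:5.1f} px -> depth {depth_from_disparity(d_px):.2f} m")
```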

Comments • 46

  • @zhang28155 · 11 years ago · +25

    I can hardly find the words to express how much you have helped in my research study using the Kinect.
    Thank you!

  • @CuriousInventor · 11 years ago · +14

    Glad it helped! The best thanks is a link to this video from a website or forum.

  • @CuriousInventor · 11 years ago · +8

    I believe the important thing about the pattern is that it's random, so that the camera can differentiate between groups of speckles. The broader term for this is "structured lighting". Google Structured-light_3D_scanner
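
As a rough illustration of why an irregular pattern helps, the sketch below locates a small window of "speckles" in a stored reference image by normalized correlation along one row; the best shift is the disparity that the triangulation relation above turns into depth. The function name, window size, and search range are assumptions for this toy example, not how the Kinect's actual (proprietary) correlator works.

```python
# Locate a speckle patch in a reference image by normalized correlation.
# Purely illustrative; real structured-light matching is more sophisticated.
import numpy as np

def find_disparity(reference, observed, row, col, win=9, max_shift=64):
    """Slide a win x win patch of `observed` along one row of `reference`
    and return the horizontal shift with the highest normalized correlation."""
    half = win // 2
    patch = observed[row - half:row + half + 1, col - half:col + half + 1].ravel()
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_shift, best_score = 0, -np.inf
    for shift in range(max_shift):
        c = col - shift
        if c - half < 0:
            break
        cand = reference[row - half:row + half + 1, c - half:c + half + 1].ravel()
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float(patch @ cand)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Tiny self-test: a synthetic "speckle" image shifted by 12 pixels.
rng = np.random.default_rng(0)
reference = rng.random((100, 200))
observed = np.roll(reference, 12, axis=1)          # fake a uniform 12 px disparity
print(find_disparity(reference, observed, row=50, col=120))   # prints 12
```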

  • @Bezeoner · 11 years ago · +3

    I've checked your channel and can confirm, you're a genius.

  • @AndersGustafsson87 · 9 years ago · +6

    Incredibly well explained and simple video.

  • @DilipLilaramani · 9 years ago · +7

    Thanks, you're a good presenter. Simple & Concise :)

  • @CuriousInventor · 11 years ago · +6

    Good clarification. I should have said irregular pattern instead of random. I wonder if the dot pattern is the same on each unit, or if they're all calibrated to their own pattern.

  • @CuriousInventor · 11 years ago · +2

    SketchUp, Bamboo Tablet, Serif DrawPlus, CamStudio. All the drawing scenes are usually sped up during the editing process.

  • @CuriousInventor · 11 years ago · +2

    Totally guessing here, but I suspect there's some variance in the manufacturing, and that each unit gets calibrated at the factory.

  • @tanajikamble13 · 9 years ago · +7

    Please let me know how the Kinect comes to know the angle of the speckle pattern?

  • @carlandj · 11 years ago · +1

    One way to use 2 Kinects is to have one Kinect shaking side to side while the other one is still. The dots of the moving camera will look stationary to that camera while the other camera's dots are blurred and vice versa. V Motion Project did this. They also used one computer for each camera.

  • @poltergeistish · 10 years ago · +1

    Very clear and concise. Great! Thanks!

  • @exnol · 11 years ago · +1

    Good one. Can you tell me what software you are using for the drawings?

  • @alaaabd2598 · 9 years ago · +1

    Good! Easy to understand the theory. Can you tell me how I can use it with MATLAB?

  • @iAnimationProduction · 10 years ago

    How does an IR sensor help to calculate the depth better compared to a secondary camera? Can you please explain that part again?

  • @bluemeat8299 · 10 years ago

    So does the camera recognise each part, i.e. the sectors in the red grid example, via unique speckle clusters?

  • @jayeshkurdekar126 · 2 years ago

    Thanks for your knowledge

  • @MaestroMinito · 11 years ago

    Good! Easy to understand the theory!!

  • @OptimusPrimeTime · 11 years ago

    Do you know if the Asus Xtion series of depth sensors work the same way? Would they have the same limitation of a single sensor in a room?

  • @WerewolfSlayer91 · 9 years ago

    But let's say I were to take the cameras out of the Kinect and set a different distance between them. That would affect the angle, right? So it wouldn't be able to recreate the image?

  • @fknrdcls · 11 years ago · +1

    Fantastic! I knew there was a reason I subscribed!

  • @sd4dfg2 · 11 years ago

    Do all units have the same fixed speckle pattern, or is it learned after it's created?

  • @Grazfather · 11 years ago

    So you're saying somehow the lights are randomized (the LEDs are moved somehow), and then that it's recalibrated? No. The pattern is predetermined. It might have been random at some point, but I doubt it.

  • @steamcastle · 11 years ago

    You can use multiple Kinects; the main problem is that the USB bandwidth is too high for two Kinects on one computer.

  • @andresleon8893 · 10 years ago

    Thank you!

  • @tsw_hussen3635 · 9 years ago

    Wonderful, thank you for the explanation.

  • @magicbuskey · 11 years ago

    Thank you! That was great!

  • @Heavenlydreamer · 8 years ago

    Thank you

  • @ma888u · 8 years ago

    Thank you very much! This was a very helpful video!!! ;-)

  • @naitB · 11 years ago

    subbed!

  • @Grazfather · 11 years ago

    It's not random. It appears random but the device has to be aware of the pattern it is casting.

  • @RandomDigits · 11 years ago

    Cool

  • @marbleshark6 · 11 years ago

    There is no such limitation with either...

  • @qinggengzhuang5950 · 8 years ago · +1

    I thought it was a time-of-flight lidar.

    • @CuriousInventor · 8 years ago · +4

      +Qinggeng Zhuang The new one is ToF; the old one uses triangulation.
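
For a back-of-the-envelope contrast between the two generations mentioned in this reply: the old Kinect triangulates a projected pattern (depth = f · b / disparity), while a time-of-flight sensor infers depth from the round trip of modulated IR light (depth = c · t / 2, with t derived from the phase of the modulation rather than a timed pulse). The numbers in the sketch are illustrative assumptions, not device specifications.

```python
# Compare the two depth relations with made-up, illustrative numbers.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s):
    """Time-of-flight: light travels out and back, so depth = c * t / 2."""
    return C * round_trip_s / 2.0

def triangulation_depth(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Structured light: depth = f * b / disparity (same relation as above)."""
    return focal_px * baseline_m / disparity_px

# ~13.3 ns round trip and ~22 px disparity both come out near 2 m here.
print(f"ToF:           {tof_depth(13.3e-9):.2f} m")
print(f"Triangulation: {triangulation_depth(22.0):.2f} m")
```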

  • @VoltzLiveYT · 10 years ago

    ***** That doesn't work; just like before, the two Kinects would confuse each other and wouldn't be able to triangulate points.

  • @diamony123 · 10 years ago

    What if I told you you don't need a depth sensor or any software... well, I just did... but will I tell you how? That's a billion-dollar answer... but I'll take a couple hundred million. My name is not 4D for no reason.

  • @tanajikamble13 · 9 years ago

    Please let me know how the Kinect comes to know the angle of the speckle pattern?

  • @robstorms · 11 years ago

    Very clear and concise. Great! Thanks!