HyperDepth: Learning Depth from Structured Light Without Matching

  • Published: 11 Sep 2024

Comments • 7

  • @RuofeiDu 8 years ago +2

    Bravo! This is awesome work on depth sensing from perceptiveIO and Microsoft Research! @Kinect #3D #Depth #PointCloud

  • @charliebaby7065 4 years ago

    Why is the v1 outperforming the v2 in most of those scans?

  • @madepaz76 8 years ago +2

    Nice video

  • @olivekid100 7 years ago

    This is amazing

  • @praseie 6 years ago

    Can this project be done DIY with any open-source code?

    • @charliebaby7065 4 years ago

      I'm attempting to reconstruct as much of this holoportation tech as I can, BUT I don't have the luxury of high-detail hardware; I'm stuck with a set of Kinect v1s. I'm using Processing 3 to wrangle the Java code needed to maximize my frame rate, rotate and glue skeletons (and meshes) together, isolate the model at the center of the XYZ coordinates, smooth the parts together with a Marching Cubes algorithm, and then send my joints over network sockets (a minimal sketch of that last step follows this thread).
      The BIGGEST problem is streaming the model's texture. I could vector-array a color layer on top of my point-cloud data, but point clouds require too much data; I'm better off with polygon representations, and then I need more texture information. There has to be some sort of video-streaming and projection-mapping solution, but I'm not there yet. Soon.
      So to answer your question: sure, but good luck. You'll need it.

    • @mattizzle81 3 years ago

      @charliebaby7065 Have you thought of streaming the RGBD feed directly? I personally know of two different, simple algorithms for streaming depth over standard H264, and they can be combined effectively (a sketch of one common approach follows this thread).
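
For @charliebaby7065's pipeline above, the joint-streaming step is the most self-contained piece. Below is a minimal Java sketch of pushing one skeleton frame over a TCP socket as raw floats; the host, port, 20-joint count (the Kinect v1 skeleton size), and wire layout are illustrative assumptions, not details taken from the comment.

```java
import java.io.DataOutputStream;
import java.net.Socket;

public class JointStreamer {
    private static final int JOINT_COUNT = 20; // Kinect v1 tracks 20 skeleton joints

    public static void main(String[] args) throws Exception {
        // Assumed receiver address; in a Processing sketch this would run once in setup().
        try (Socket socket = new Socket("127.0.0.1", 9000);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            // Placeholder frame: a real sketch would pull these from the
            // Kinect library on each draw() call.
            float[][] joints = new float[JOINT_COUNT][3];
            out.writeInt(JOINT_COUNT); // simple header so the receiver knows the frame size
            for (float[] j : joints) {
                out.writeFloat(j[0]); // x
                out.writeFloat(j[1]); // y
                out.writeFloat(j[2]); // z (depth)
            }
            out.flush();
        }
    }
}
```

At 20 joints × 3 floats per frame this is a few hundred bytes, which is why joints stream easily while full point clouds do not.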
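On @mattizzle81's point about depth over standard H264: the comment doesn't name the two algorithms, but one widely used family of tricks packs each 16-bit depth sample into 8-bit image data before encoding. The sketch below splits every sample into high and low bytes laid side by side in a double-width luma plane; this particular layout and its reassembly are assumptions for illustration, and published schemes (e.g. triangle-wave encodings in the style of Pece et al.) add redundancy because lossy compression corrupts the low byte.

```java
public class DepthPacker {

    // Pack one depth frame (16-bit samples, e.g. millimeters) into an
    // 8-bit luma plane twice as wide: [high bytes | low bytes].
    public static byte[] pack(short[] depth, int width, int height) {
        byte[] luma = new byte[width * 2 * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int d = depth[y * width + x] & 0xFFFF;
                luma[y * width * 2 + x]         = (byte) (d >> 8);   // high byte, left half
                luma[y * width * 2 + width + x] = (byte) (d & 0xFF); // low byte, right half
            }
        }
        return luma;
    }

    // Receiver side: reassemble 16-bit depth from the two byte halves.
    public static short[] unpack(byte[] luma, int width, int height) {
        short[] depth = new short[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int hi = luma[y * width * 2 + x] & 0xFF;
                int lo = luma[y * width * 2 + width + x] & 0xFF;
                depth[y * width + x] = (short) ((hi << 8) | lo);
            }
        }
        return depth;
    }
}
```

The weak spot of a naive split like this is that any codec error in the high-byte half shows up as a jump of 256 depth units, which is why the published schemes fold the two bytes into smoother channel mappings instead.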