The Light Field Stereoscope - SIGGRAPH 2015

  • Published: 8 Sep 2024
  • Light field display with focus cues for virtual reality near-eye displays.

Comments • 21

  • @TomCourtney • 9 years ago

    Clever concept, nice work.

  • @stevenclark2188 • 9 years ago +2

    I'd agree with Rota Kravits. It looks like the front layer isn't transparent enough to allow sharp focus on the background. It may not be that way for everyone, but my brain reacts pretty strongly to images not coming into focus, and my eyes water.
    On the other hand, I wonder whether bumping up the brightness until the scene is bright enough that the visual system expects all planes to be within the hyperfocal distance would reduce discomfort for similar reasons. It would probably be anything but power efficient, though.
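
    The mechanism behind the brightness suggestion is pupil constriction: a brighter scene shrinks the pupil, which deepens the eye's depth of field and pulls the hyperfocal distance closer. Below is a minimal sketch of that relationship using the standard thin-lens hyperfocal formula H ≈ f²/(N·c) + f; the eye focal length and the retinal circle of confusion are rough illustrative assumptions, not figures from the paper.

    ```python
    # Rough illustration (not from the paper): how pupil diameter affects the
    # hyperfocal distance of a simple thin-lens eye model.
    # H ~= f^2 / (N * c) + f, with f-number N = f / pupil_diameter.

    EYE_FOCAL_LENGTH_M = 0.017      # ~17 mm, typical reduced-eye focal length
    CIRCLE_OF_CONFUSION_M = 10e-6   # assumed tolerable retinal blur circle (~10 um)

    def hyperfocal_distance(pupil_diameter_m: float) -> float:
        """Distance beyond which everything appears acceptably sharp."""
        f = EYE_FOCAL_LENGTH_M
        n = f / pupil_diameter_m                 # effective f-number
        return f * f / (n * CIRCLE_OF_CONFUSION_M) + f

    for pupil_mm in (2.0, 4.0, 8.0):             # bright daylight ... dim room
        h = hyperfocal_distance(pupil_mm * 1e-3)
        print(f"pupil {pupil_mm:.0f} mm -> hyperfocal distance ~{h:.2f} m")
    ```

    With these assumed constants, a constricted 2 mm pupil pulls the hyperfocal distance down to a few metres, which is roughly the effect the comment is speculating about.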

  • @KamilCzerski • 8 years ago

    +Augure Zera "I don't really understand, how two stacked LCD panel with only two image allow for the eye to refocus accurately (given that there's not just two point of focus possible in scenes)."
    The technique is called depth fusion or depth filtering. A quote from the paper [1]: "One proposed solution is to present a sum of images at multiple focal planes and to vary focal depth continuously by distributing image intensity across planes - a technique referred to as depth filtering." Or see Figure 2 in paper [3], which gives an example of such an intensity distribution.
    The authors claim a large accommodation range [2]: "With the prototype developed for this project, we achieve a field of view of 87° × 91°, an accommodation range from approx. 0.2 m to 1.2 m (covering a large portion of the depth range where accommodative depth cues matter)".
    [1] Accommodation to multiple-focal-plane displays: Implications for improving stereoscopic displays and for accommodation control
    [2] www.computationalimaging.org/wp-content/uploads/2015/06/TheLightFieldStereoscope-SIGGRAPH2015.pdf
    [3] Achieving near-correct focus cues in a 3-D display using multiple image planes, bankslab.berkeley.edu/publications/Files/achieving_near-correct_focus_cues05.pdf
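
    As a rough illustration of the depth-filtering idea quoted above: a point's intensity is split between the near and far focal planes according to where it lies between them, typically linearly in dioptres (1/distance). The weighting below is a generic sketch of that principle using the prototype's reported 0.2 m and 1.2 m planes; it is not code from the paper, and the physical stereoscope combines its two layers multiplicatively rather than as a simple sum.

    ```python
    # Generic depth-filtering sketch (not from the paper): split a point's
    # intensity between a near and a far focal plane, linearly in dioptres,
    # so intermediate depths are rendered as a weighted mix of both planes.

    NEAR_PLANE_M = 0.2   # prototype's reported near accommodation limit
    FAR_PLANE_M = 1.2    # prototype's reported far accommodation limit

    def plane_weights(depth_m: float) -> tuple[float, float]:
        """Return (near_weight, far_weight) for a point at depth_m metres."""
        d_near = 1.0 / NEAR_PLANE_M            # 5.0 D
        d_far = 1.0 / FAR_PLANE_M              # ~0.83 D
        # Clamp to the displayable range, then interpolate linearly in dioptres.
        d = max(min(1.0 / depth_m, d_near), d_far)
        far_w = (d_near - d) / (d_near - d_far)
        return 1.0 - far_w, far_w

    for depth in (0.2, 0.35, 0.7, 1.2):
        near_w, far_w = plane_weights(depth)
        print(f"{depth:.2f} m -> near {near_w:.2f}, far {far_w:.2f}")
    ```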

  • @inceptional • 9 years ago

    Yeah, sounds like a pretty neat little solution.

    • @JohnFAlmqvist • 8 years ago

      +inceptional Little? It sounds like a rework of the entire concept of a screen to me :P

  • @szynkers • 10 months ago

    I wonder why nobody ever even attempted to use this commercially... it's the only solution that is simultaneously solid-state and doesn't seem to sacrifice resolution. I've heard from a live presentation that generating the two plane images was supposedly very computationally heavy for them, but I bet it could be optimized, e.g. by using the z-buffer for depth data instead of rendering multiple viewpoints.
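
    One way to read the z-buffer suggestion: instead of rendering many viewpoints and factorizing the resulting light field, render the scene once, keep the depth buffer, and distribute each pixel between the two layer images by its depth. The sketch below is a hypothetical shortcut along those lines, using an additive, dioptre-linear split rather than the paper's factorization; split_layers and the random NumPy buffers standing in for renderer output are illustrative only.

    ```python
    import numpy as np

    # Hypothetical shortcut (not the paper's method): build two layer images
    # from a single rendered frame plus its z-buffer, splitting each pixel's
    # intensity between the layers linearly in dioptres.

    NEAR_PLANE_M, FAR_PLANE_M = 0.2, 1.2

    def split_layers(color, depth_m):
        """color: (H, W, 3) floats in [0, 1]; depth_m: (H, W) depths in metres."""
        d = 1.0 / np.clip(depth_m, NEAR_PLANE_M, FAR_PLANE_M)    # dioptres
        d_near, d_far = 1.0 / NEAR_PLANE_M, 1.0 / FAR_PLANE_M
        far_w = ((d_near - d) / (d_near - d_far))[..., None]     # (H, W, 1)
        return color * (1.0 - far_w), color * far_w              # near, far layers

    # Toy usage with random buffers standing in for an actual renderer's output.
    rng = np.random.default_rng(0)
    color = rng.random((480, 640, 3))
    depth = rng.uniform(0.2, 1.2, size=(480, 640))
    near_layer, far_layer = split_layers(color, depth)
    print(near_layer.shape, far_layer.shape)
    ```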

  • @MikeTrieu • 9 years ago

    Since this technique uses a compressive display, perhaps you could also achieve super-resolution while accounting for the light field?

  • @jsymons1985 • 7 years ago +1

    So you only need two images to reconstruct a 4-D light field? Is this because of the magic of multiplicative 4-D factorization? How the hell does that work? Can someone explain it to me? With two displays you should have two planes of focus, not a range of focus... how does 4-D factorization make it possible to get a range of focal planes? Is anyone else scratching their noggins?

    • @tikiman098 • 7 years ago +4

      Sort-of not really - it's complicated. A light field display is one that can control light both in position and in angle. A conventional display can only control light by position (think pixel position, x,y, i.e. two dimensions). A light field display can control light by position (x,y) and also by emitted angle (a,b - you can call these angle coordinates whatever you want, but there are 2 coordinates), so that's the four-dimensional or 4D part of it. When you have a light field display and you have these four dimensions to play with, and one display for each eye, you can reproduce 3D scenes which, when viewed, behave exactly like the original 3D scene - meaning a range of depths are simultaneously present (as is the case with natural scenes - a range of depths are always there, you just focus on whatever you choose, and the rest blurs accordingly). Big breath.
      OK, so if we'll agree that once you have a light field display, you can display scenes which span a range of depths (and all depths are simultaneously present), the next question is - how do we make a light field display? One way to make a light field display is ... by stacking LCD panels. You can think of it like this: the first LCD panel controls light's x,y position, and the second LCD panel controls the light's a,b or emitted angle. Light from the backlight goes through BOTH panels before you see it. Each LCD panel kind-of acts like an old-school transparency film.
      The key piece to understand is that the two LCD panels are NOT displaying two flat images, one near, one far. Instead, they are together encoding light's POSITION AND ANGLE. Once you have control of position and angle, you can reproduce 3D scenes that behave like the natural world, in which MANY depths are SIMULTANEOUSLY present.
      Hope that helps.
      -Adam
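
      To make the "position and angle" idea concrete: in the simplified case where each ray is identified by one rear-panel pixel i and one front-panel pixel j, the target light field becomes a matrix L[i, j], and the two panel images a and b are chosen so that a[i]·b[j] ≈ L[i, j], i.e. a rank-1 nonnegative factorization. The toy below runs generic multiplicative NMF updates on a synthetic near-rank-1 target; the actual prototype solves the analogous nonnegative factorization over its real ray geometry, so treat this strictly as a sketch of the principle.

      ```python
      import numpy as np

      # Toy multiplicative two-layer factorization (a drastic simplification of
      # the paper's method): find nonnegative panel images a, b such that
      # a[i] * b[j] ~= L[i, j].

      rng = np.random.default_rng(1)
      # Synthetic near-rank-1 target so the toy has something it can actually fit.
      L = np.outer(rng.random(64), rng.random(64)) + 0.01 * rng.random((64, 64))

      a = np.ones(L.shape[0])   # rear-panel transmittances
      b = np.ones(L.shape[1])   # front-panel transmittances
      eps = 1e-9

      for _ in range(100):
          # Rank-1 multiplicative update rules (standard NMF specialised to
          # rank 1), which keep both panel images nonnegative at every step.
          a *= (L @ b) / (a * (b @ b) + eps)
          b *= (L.T @ a) / (b * (a @ a) + eps)

      err = np.linalg.norm(L - np.outer(a, b)) / np.linalg.norm(L)
      print(f"relative reconstruction error: {err:.3f}")
      ```

      The point the factorization captures is that the panels multiply: a ray's brightness depends on both the pixel it crosses in the rear layer and the pixel it crosses in the front layer, which is position-and-angle control rather than two fixed image planes.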

  • @Reticuli • 5 years ago

    What resolution and dynamic range have been achieved?

  • @antivanti • 9 years ago

    Neat idea, but I assume it only works with LCD, which means we lose some of the benefits of OLED displays such as low persistence. I guess you might be able to use prisms or semi-transparent mirrors to mix OLED screens.
    How significant are the benefits? I assume the eye-strain problem is mostly an issue with objects represented as closer than arm's length, and the differing focus planes quickly become less relevant just a few meters away, where focus approaches optical infinity.

    • @RobertSzasz • 8 years ago

      This looks like it's closer to a two-plane display than a full compressive light field display (there is a front focus and a rear focus, but not a continuous field). An emissive OLED rear panel with a spaced LCD overlay might allow for full compressive light field displays. I don't know if computing the light field, and the two correct 2D images to display to reconstruct said light field, at 120 Hz for a head-mounted display of non-trivial fields is possible at this point.

  • @AndrewAdamsturnkey • 8 years ago

    Sounds like Magic Leap.

  • @ezmo55 • 8 years ago

    So two ranges of focus would mean that there are only two organically layered perspectives (instead of the countless layers of depth we see in real life). I wonder how obvious this would be if properly optimized. Interesting...

  • @warpnuum • 9 years ago

    This is actually an old technology, and I'm not sure it will work in 3D games.

  • @PinchOfLuck • 8 years ago +1

    Oops, they forgot to mention how screwed they are regarding resolution, etc.

  • @ygalion • 8 years ago

    VR from NVIDIA should be the best, because there's no need to wait on a third party for GPU drivers; they make their own. So Oculus and my favorite, the Vive, should wait for now, until NVIDIA releases it in, I don't know, 2 years?

  • @NietonoNoShana243 • 8 years ago +1

    Sword Art Online gets closer.

  • @ameliabuns4058 • 3 years ago

    God, why doesn't technology advance faster! I love VR, but I hate how "flat" and fake things look, like you can't judge distance properly. I want my eyes to actually adjust like IRL!