Sony | Event-based Vision Sensor (EVS) to detect only changes in moving subjects -Full ver.-

  • Published: 15 Sep 2024
  • An Event-based Vision Sensor (EVS) is designed to mimic the mechanism of the human eye. It achieves high-speed data output with low latency by capturing the movement of the subject as changes in luminance.
    The range of possible applications may expand with Sony's Event-based Vision Sensor. This video provides a basic description and application examples. (A conceptual sketch of what this event output looks like follows the links below.)
    #SonySemiconductorSolutions #EventBasedVisionSensor #SonyEVS
    ◆Web - Sony's Image Sensors for Industry -
    www.sony.net/c...
    Click here to see the product lineup and technology information for Sony's EVS.
    ◆Web - Sony Semiconductor Solutions Group
    www.sony-semic...
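
    The description above says the sensor reports subject movement as changes in luminance rather than as full frames. As a rough illustration of what that output looks like, the Python sketch below emits (x, y, timestamp, polarity) events wherever the log intensity between two snapshots changes by at least a contrast threshold. The Event fields, the 0.2 threshold, and the frame-differencing shortcut are illustrative assumptions, not Sony's actual pixel circuit, which works asynchronously in analog hardware.

```python
# Conceptual sketch of how an event-based sensor reports changes in luminance.
# This is an illustration only, not Sony's implementation; the 0.2 contrast
# threshold and the Event fields are assumptions for the example.
from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond-scale in real sensors)
    polarity: int   # +1 brightness increased, -1 brightness decreased


def events_from_frames(prev_frame: np.ndarray, new_frame: np.ndarray,
                       t: float, threshold: float = 0.2) -> list[Event]:
    """Emit an event wherever log intensity changed by at least `threshold`."""
    log_prev = np.log(prev_frame.astype(np.float64) + 1e-6)
    log_new = np.log(new_frame.astype(np.float64) + 1e-6)
    diff = log_new - log_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    return [Event(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]


# Static pixels produce no events at all; only the pixel that changed is reported.
prev = np.full((4, 4), 100.0)
new = prev.copy()
new[1, 2] = 180.0                       # one pixel got brighter
print(events_from_frames(prev, new, t=0.001))
```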

Comments • 42

  • @tonyintieri
    @tonyintieri 3 years ago +4

    Sony + Prophesee = Great technology!

  • @electroncommerce
    @electroncommerce 3 years ago +4

    Congrats Sony!

  • @Tony-tr3di
    @Tony-tr3di 3 years ago

    This raises so many doubts. Let's wait for a hands-on preview. Hope for the best 💕💕💕

  • @Sey357
    @Sey357 3 years ago +3

    WOW 👑 SONY GOD FOREVER#1 ✌️ 👑

  • @dinahplacido5786
    @dinahplacido5786 1 year ago

    Way better than my old one. Perfect height

  • @hajime5486
    @hajime5486 3 years ago +4

    Sony, your biggest fan here. I live in Tokyo and I would love to work for you. Message me!

    • @UniversalIndian-fh6st
      @UniversalIndian-fh6st 3 years ago +1

      Me too. A Lifetime Sony Fan ❤️❤️❤️
      Love from India 🇮🇳 ❤️❤️❤️❤️❤️ 🙏🏻🙏🏻

  • @Александр-л8з3э
    @Александр-л8з3э 2 years ago +1

    This is the future of robotics vision

  • @arielatomguy
    @arielatomguy 3 years ago +2

    These sensors encapsulate so many possibilities, yet it is important to remember that the platform on which the camera/sensor is positioned must remain completely still. It would be very interesting if a next-gen version of these sensors could compensate for platform motion using either an IMU or additional sensors pointed in other directions. Having the sensor receive such motion as input could allow the ambient subtraction to be performed within the sensor itself.

    • @tobidelbruck
      @tobidelbruck 2 years ago

      It's true that a moving camera increases the data rate, but still our measurements with driving scenes show that in typical 20ms "frames" of accumulated brightness change events, less than 10% of the pixels are activated, making these activity-driven frames a nice fit to AI hardware that can exploit activation sparsity.
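
      As a back-of-the-envelope illustration of the accumulation described above, the sketch below collects the events that fall inside one 20 ms window into a boolean activity frame and reports what fraction of pixels fired at all. The sensor resolution, event count, and random timestamps are made-up assumptions, not measured data.

```python
# Rough sketch of the "accumulated event frame" idea from the comment above:
# collect all events inside a 20 ms window and check what fraction of pixels
# fired at all. Numbers and sensor size here are made up for illustration.
import numpy as np

WIDTH, HEIGHT = 640, 480
WINDOW_S = 0.020  # 20 ms accumulation window

rng = np.random.default_rng(0)
n_events = 50_000                                   # hypothetical event count
events_x = rng.integers(0, WIDTH, n_events)
events_y = rng.integers(0, HEIGHT, n_events)
events_t = rng.uniform(0.0, 0.1, n_events)          # timestamps in seconds

# Keep only events that fall inside the first 20 ms window.
in_window = events_t < WINDOW_S
frame = np.zeros((HEIGHT, WIDTH), dtype=bool)
frame[events_y[in_window], events_x[in_window]] = True

sparsity = frame.mean()   # fraction of pixels that produced at least one event
print(f"{sparsity:.1%} of pixels active in this 20 ms frame")
```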

  • @diegovelasco914
    @diegovelasco914 3 years ago +1

    I love you, Sony

  • @edygk20kstm64
    @edygk20kstm64 3 years ago +5

    This tech can be a great asset to paranormal research.

  • @stevenlk
    @stevenlk 6 months ago

    wow this is basically capturing ground truth optical flow

  • @vladodamjanovski
    @vladodamjanovski 3 years ago +1

    I am really trying to decipher the concept. Maybe you can help with a bit more elaboration. As far as I know, a video compression standard like HEVC already analyses moving subjects with great accuracy, admittedly after capture by the sensor, through the encoder. Is this a similar concept but within the sensor itself?

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +5

      No.
      HEVC is a codec; it needs frames to make a video.
      This sensor doesn't capture frames, but pure data.
      The "temporal resolution" of a video is its frame rate: a 30 fps video is limited because you can't see more than 30 images in a second, even if you play it in slow motion.
      With data you don't have that limitation, because there are no frames, just points that move from point A to point B on a canvas; you can slow the playback down as much as you like.

    • @JustinHunnicutt
      @JustinHunnicutt 2 years ago +1

      Think of it as a grid of analog brightness sensors, but instead of the measured brightness at each point, they output the derivative (or instantaneous change) of the brightness value. If that isn't right, someone please comment, because I know that's not exactly how it works, but I thought it might help grasp the concept.
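
      Following that analogy, here is a toy single-pixel model: it emits a +1 or -1 event whenever the log brightness has moved by a contrast threshold relative to the level at the previous event, and stays silent while the signal is constant. The 0.15 threshold and the sample trace are assumptions for illustration; real EVS pixels do this asynchronously in analog circuitry rather than on sampled values.

```python
# A toy single-pixel model of the "output the change, not the brightness" idea
# in the comment above. Real EVS pixels work on log intensity in analog
# circuitry; the 15% contrast threshold here is just an example value.
import math


def pixel_events(samples, threshold=0.15):
    """Yield (time, polarity) whenever log brightness moves by `threshold`
    relative to the level at the previous event."""
    (t0, b0) = samples[0]
    ref = math.log(b0)                     # brightness level at last event
    for t, b in samples[1:]:
        delta = math.log(b) - ref
        while abs(delta) >= threshold:     # big jumps can emit several events
            polarity = 1 if delta > 0 else -1
            yield (t, polarity)
            ref += polarity * threshold
            delta = math.log(b) - ref


# Brightness ramps up, holds steady, then drops: events appear only during
# the changes, and a constant signal produces nothing.
trace = [(0.000, 100), (0.001, 120), (0.002, 160), (0.003, 160), (0.004, 90)]
print(list(pixel_events(trace)))
```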

  • @NowyKurs
    @NowyKurs 3 years ago

    When would this sensor be used in cameras? Looks like decent quality.

  • @user-es8ty4ke2s
    @user-es8ty4ke2s 11 months ago

    Can self-driving cars be operated by these?

  • @bibeksutradhar3590
    @bibeksutradhar3590 3 years ago

    Awesome

  • @user-mf7li2eb1o
    @user-mf7li2eb1o 3 years ago

    Cool stuff

  • @vk2630
    @vk2630 3 years ago +3

    Trying to understand what the benefit of this technology is

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +1

      AI efficiency, surveillance, and obtaining data without the need to analyze the image in post-processing

    • @butinloris5756
      @butinloris5756 3 years ago

      2:30 ...

  • @kimberlytierney1369
    @kimberlytierney1369 3 years ago

    Amazing technology!

  • @sumandas9039
    @sumandas9039 3 years ago +1

    😍😍😍

  • @JustinHunnicutt
    @JustinHunnicutt 2 years ago

    Can someone explain to me why this is inherently better than a high-frame-rate sensor with some processing to focus on changes in luminance? Is it just the fact that it's equivalent to a super high frame rate? Or could someone give a specific case where this would work better than a high-frame-rate sensor plus the processing I mentioned, besides not having to do that processing? I'm not knocking the tech, I just don't see the benefit besides offloading the processing. I'm sure these examples exist or the product wouldn't exist.

    • @tobidelbruck
      @tobidelbruck 2 years ago +1

      It has the unique selling point that you can beat the latency-power tradeoff of frame cameras, plus you get really large dynamic range and minimal motion blur.

    • @kylebowles9820
      @kylebowles9820 5 months ago

      I know this is an old comment, but basically its pixels are asynchronous and very dense in time, so you get very high-resolution, continuous temporal gradients for things like fast motion tracking in robots, factories, and AR/VR. The dynamic range is also better than many so-called "night vision" cameras I've used, and it works just as well in daylight. Great for vehicles, robots, and outdoor AR/VR. It also has a few features like per-pixel hardware bandpass filters to either filter out or deliberately capture flickering / vibration for industrial applications and robotics. I have been playing with them for a few years now.
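
      As a rough software approximation of the flicker-frequency filtering mentioned above (the real feature is per-pixel hardware), the sketch below estimates each pixel's mean event rate from its timestamps and keeps only pixels whose rate falls inside a chosen frequency band. The band edges and the synthetic event streams are assumptions for illustration.

```python
# Software approximation of the "filter by flicker frequency" idea mentioned
# above. The real feature is per-pixel hardware; here we just estimate each
# pixel's event rate from timestamps and keep pixels inside a chosen band.
# Band edges and data are illustrative assumptions.
from collections import defaultdict

import numpy as np


def pixels_in_band(events, low_hz, high_hz):
    """events: iterable of (x, y, t). Returns pixels whose mean event rate
    (1 / mean inter-event interval) falls inside [low_hz, high_hz]."""
    per_pixel = defaultdict(list)
    for x, y, t in events:
        per_pixel[(x, y)].append(t)

    selected = []
    for pixel, times in per_pixel.items():
        if len(times) < 3:
            continue                       # too few events to estimate a rate
        intervals = np.diff(np.sort(times))
        rate = 1.0 / intervals.mean()
        if low_hz <= rate <= high_hz:
            selected.append(pixel)
    return selected


# A pixel lit by 100 Hz mains flicker vs. one seeing slow motion (~5 Hz).
flicker = [(10, 10, i * 0.010) for i in range(20)]     # events every 10 ms
slow = [(50, 50, i * 0.200) for i in range(20)]        # events every 200 ms
print(pixels_in_band(flicker + slow, low_hz=80, high_hz=120))   # [(10, 10)]
```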

  • @GreenishlyGreen
    @GreenishlyGreen 1 year ago

    Vr?

  • @churamontgomery6063
    @churamontgomery6063 1 year ago

    Wow

  • @BlackPrism100
    @BlackPrism100 3 years ago

    Sony, metahero and WDW. The future is coming.

  • @newdar-ff5bz
    @newdar-ff5bz 3 months ago

    Sounds like LCD vs OLED!!

  • @aidedflyer173
    @aidedflyer173 2 years ago

    Let me find out this is project Skynet..

  • @shivam627
    @shivam627 3 years ago

    👍

  • @jhonyhill1
    @jhonyhill1 3 years ago

    This I done when I got pass out... 🤨

  • @rcmoedas
    @rcmoedas 3 years ago

    👍🏽

  • @Blag_Cog
    @Blag_Cog 3 years ago

    How many frames per second? Or should I say states per second? How many hertz lol.

    • @ES-qy2ju
      @ES-qy2ju 3 years ago

      It probably depends on the shutter speed, because there are no frames.

    • @Blag_Cog
      @Blag_Cog 3 years ago

      @@ES-qy2ju Yeah, that's what I was thinking. How many "events" per second?

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +1

      @@Blag_Cog A typical, common sensor with an electronic shutter can capture up to 12,000 "images" a second.
      An event sensor would give the illusion of unlimited framerate.
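
      To illustrate that "unlimited framerate" point under stated assumptions: because each event carries its own timestamp, the same stream can be binned into frames at any rate you choose after the fact. The event stream, sensor size, and frame rates in the sketch below are synthetic examples, not real sensor output.

```python
# Toy illustration of the "unlimited framerate" point above: because each
# event carries its own timestamp, you can bin the same stream into frames
# at any rate you choose after the fact. The event stream here is synthetic.
import numpy as np

# (t, x, y) events: a dot sweeping across a row over 10 ms.
events = np.array([(i * 1e-4, i, 5) for i in range(100)])   # 100 events


def frames_at(events, fps, width=100, height=10):
    """Bin events into frames of duration 1/fps; returns per-frame count maps."""
    period = 1.0 / fps
    t_end = events[:, 0].max()
    n_frames = int(np.ceil((t_end + 1e-12) / period))
    frames = np.zeros((n_frames, height, width), dtype=int)
    for t, x, y in events:
        frames[min(int(t // period), n_frames - 1), int(y), int(x)] += 1
    return frames


# The same 10 ms of events viewed as 2 "frames" at 200 fps or 20 at 2000 fps.
print(frames_at(events, fps=200).shape, frames_at(events, fps=2000).shape)
```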

    • @Blag_Cog
      @Blag_Cog 3 years ago +1

      @@ES-qy2ju I could imagine this would be amazing for a wide variety of uses. It is going to be really great data for neural networks and interpolation technology.

  • @lucasn82_
    @lucasn82_ 3 years ago +1

    Sony Alien Company 😁