- 164 videos
- 75,779 views
Columbia Computer Vision
Added Mar 6, 2016
This channel shares research highlights and educational lectures created by the Computer Vision Laboratory at Columbia University in New York City.
Cricket: A Self-Powered Chirping Pixel
Project Page: cave.cs.columbia.edu/projects/categories/project?cid=Computational+Imaging&pid=Cricket+A+Self-Powered+Chirping+Pixel
We present a sensor that can measure light and wirelessly communicate the measurement, without the need for an external power source or a battery. Our sensor, called cricket, harvests energy from incident light. It is asleep for most of the time and transmits a short and strong radio frequency chirp when its harvested energy reaches a specific level. The carrier frequency of each cricket is fixed and reveals its identity, and the duration between consecutive chirps is a measure of the incident light level. We have characterized the radiometric response function...
9,087 views
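The abstract above describes the cricket's measurement principle: harvested energy accumulates to a fixed threshold, so the interval between chirps is inversely proportional to the incident light level. Here is a toy sketch of that relationship; all constants are hypothetical and not taken from the paper.

```python
# Toy model of a "cricket" pixel: the sensor charges from a photodiode
# and fires a chirp when the stored energy crosses a fixed threshold,
# so light level is inversely proportional to the inter-chirp interval.
# All numeric constants below are hypothetical illustrations.

E_CHIRP_J = 2e-6        # assumed energy needed per chirp (hypothetical)
HARVEST_EFF = 0.15      # assumed harvesting efficiency (hypothetical)
AREA_M2 = 4e-6          # assumed photodiode area (hypothetical)

def chirp_interval(irradiance_w_m2):
    """Seconds between chirps for a given incident irradiance."""
    harvested_power = HARVEST_EFF * AREA_M2 * irradiance_w_m2
    return E_CHIRP_J / harvested_power

def irradiance_from_interval(dt_s):
    """Invert the model: recover irradiance from a measured interval."""
    return E_CHIRP_J / (HARVEST_EFF * AREA_M2 * dt_s)

dt = chirp_interval(100.0)                     # brighter scene, shorter gap
print(irradiance_from_interval(dt))            # recovers the input level
```

The receiver only needs to timestamp chirps; inverting the timing model yields the radiometric measurement.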
Videos
Minimalist Vision with Freeform Pixels
1K views · 3 months ago
Project Page: cave.cs.columbia.edu/projects/categories/project?cid=Computational Imaging&pid=Minimalist Vision with Freeform Pixels A minimalist vision system uses the smallest number of pixels needed to solve a vision task. While traditional cameras use a large grid of square pixels, a minimalist camera uses freeform pixels that can take on arbitrary shapes to increase their information conten...
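The description above contrasts square pixels with freeform pixels of arbitrary shape. A freeform pixel can be thought of as the total light collected under an arbitrarily shaped mask; the toy code below is my own minimal illustration of that idea, not code from the project.

```python
# One freeform-pixel measurement: sum the scene radiance wherever the
# mask is open. Masks and scene values here are made-up toy data.

def freeform_pixel(scene, mask):
    """Total radiance collected where the mask is open."""
    return sum(s for s, m in zip(scene, mask) if m)

# Toy 4x4 scene flattened to a list, and two hand-made freeform masks.
scene = [0.1, 0.9, 0.2, 0.8,
         0.4, 0.6, 0.5, 0.5,
         0.3, 0.7, 0.1, 0.9,
         0.2, 0.8, 0.4, 0.6]
mask_left_half    = [1, 1, 0, 0] * 4   # hypothetical shape
mask_checkerboard = [1, 0, 0, 1] * 4   # hypothetical shape

readings = [freeform_pixel(scene, m)
            for m in (mask_left_half, mask_checkerboard)]
print(readings)   # two measurements instead of 16 pixel values
```

A minimalist camera would use a handful of such measurements, with mask shapes chosen to maximize the information they carry for the task.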
The Minimalist Camera - BMVC 2018
413 views · 5 years ago
Project page: www.cs.columbia.edu/CAVE/projects/mincam/ Title: The Minimalist Camera Authors: Parita Pooj, Michael Grossberg, Peter Belhumeur, Shree Nayar Conference: British Machine Vision Conference, 2018 Abstract: We present the minimalist camera (mincam), a design framework to capture the scene information with minimal resources and without constructing an image. The basic sensing unit of a...
Racing Auditory Display (RAD) Video Preview
377 views · 6 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/rad/ In this work we introduce the racing auditory display - the RAD for short - which is an audio system that players can hear through a standard pair of headphones. The RAD makes it possible for people who are blind to play the same types of racing games that sighted players can play with the same sense of control that sighted players have. Here...
Racing Auditory Display (RAD) Demo
2.3K views · 6 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/rad/ The racing auditory display (RAD) is an audio-based user interface that allows players who are blind to play racing games with a similar efficiency and sense of control as sighted players can. The RAD comprises two novel sonification techniques: the sound slider for understanding a car's speed and trajectory on a racetrack and the turn indica...
Photorealistic Rendering of Rain Streaks: Changing Sky Illumination
158 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/rain_ren/ Photorealistic rendering of rain streaks with lighting and viewpoint effects is a challenging problem. Raindrops undergo rapid shape distortions as they fall, a phenomenon referred to as oscillations. Due to these oscillations, the reflection of light by, and the refraction of light through, a falling raindrop produce complex brightness ...
Non-Single Viewpoint Imaging: Raxels and Caustics - Calibration of a Catadioptric Camera
123 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/non-single/ Conventional vision systems and algorithms assume the camera to have a single viewpoint. However, cameras need not always maintain a single viewpoint. For instance, an incorrectly aligned imaging system could cause non-single viewpoints. Also, systems could be designed on purpose to deviate from a single viewpoint to trade-off image ch...
Catadioptric Stereo: Planar and Curved Mirrors
298 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/cad_stereo/ Conventional stereo uses two or more cameras to compute three-dimensional scene structure. Catadioptric stereo enables the capture of multiple views of a scene using a single camera. In this project, we are exploring the use of planar as well as curved mirrors to develop catadioptric stereo systems. By placing planar mirrors in front o...
Shape from Focus: Leaf Structure
276 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/shape_focus/ Focus analysis provides a powerful means of recovering the shapes of visibly rough surfaces. These are surfaces with complex roughness and reflectance properties and are difficult, if not impossible, to recover using other vision techniques. In this project, we have developed a shape from focus method that uses different focus levels ...
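The description above says shape from focus recovers depth from different focus levels. The essence is: for each pixel, find the focus setting that maximizes a local sharpness measure; that setting indexes the depth. The modified-Laplacian measure below is a common choice in the focus-measure literature, not necessarily the one used in this project.

```python
# Sketch of shape from focus on a toy focal stack. The sharpest frame
# at a pixel indicates the focus setting (and hence depth) there.

def modified_laplacian(img, x, y):
    """Sum of absolute second differences in x and y at (x, y)."""
    return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) +
            abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

def depth_index(stack, x, y):
    """Index of the focus level with the highest sharpness at (x, y)."""
    scores = [modified_laplacian(img, x, y) for img in stack]
    return scores.index(max(scores))

# Toy 3x3 frames: the middle frame has the strongest contrast at (1, 1).
blurry = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
sharp  = [[1, 9, 1], [9, 1, 9], [1, 9, 1]]
print(depth_index([blurry, sharp, blurry], 1, 1))  # -> 1
```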
Shape from Focus: Silicon Wafer Example
379 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/shape_focus/ Focus analysis provides a powerful means of recovering the shapes of visibly rough surfaces. These are surfaces with complex roughness and reflectance properties and are difficult, if not impossible, to recover using other vision techniques. In this project, we have developed a shape from focus method that uses different focus levels ...
Appearance Matching: Parametric Eigenspace Representation
178 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/app_match/ In contrast to the traditional approach of recognizing objects based on their shapes, we formulate the recognition problem as one of matching appearances. For any given vision task, all possible appearance variations define its visual workspace. A set of images is obtained by coarsely sampling the workspace. The image set is compressed ...
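The description above compresses the sampled appearance workspace into a low-dimensional eigenspace and matches new images there. The one-dimensional sketch below is only my illustration of the matching step, with a pretend precomputed eigenvector; the real system learns a multi-dimensional eigenspace from the image set.

```python
# Schematic appearance matching: project each image onto a (pretend
# precomputed) basis vector and recognize by nearest projection.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(image, mean, basis):
    """Coordinate of an image along one eigenvector."""
    return dot([p - m for p, m in zip(image, mean)], basis)

def recognize(image, mean, basis, gallery):
    """Label of the training image whose projection is closest."""
    c = project(image, mean, basis)
    return min(gallery, key=lambda item: abs(item[1] - c))[0]

# Toy 4-pixel "images" of two objects; labels and values are made up.
train = {"cup": [9, 9, 1, 1], "ball": [1, 1, 9, 9]}
mean = [5, 5, 5, 5]
basis = [0.5, 0.5, -0.5, -0.5]          # pretend first eigenvector
gallery = [(lbl, project(img, mean, basis)) for lbl, img in train.items()]
print(recognize([8, 9, 2, 1], mean, basis, gallery))  # -> cup
```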
Adaptive Dynamic Range Imaging
143 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/adr_lcd/ This project is focused on the development of a new approach to imaging that significantly enhances the dynamic range of an imaging system. The key idea is to adapt the exposure of each pixel on the detector based on the radiance value of the corresponding scene point. This adaptation is done in optical domain, that is, during image forma...
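The description above adapts each pixel's exposure optically, based on the radiance of the corresponding scene point. The sketch below captures the idea with a simple proportional control law of my own invention (the project's actual controller may differ): an attenuator in front of the pixel is adjusted until the detected value is in range, and the radiance is recovered as detected value divided by transmittance.

```python
# Per-pixel adaptive exposure sketch. The control law is a stand-in.

SATURATION = 255.0
TARGET = 128.0

def detect(radiance, transmittance):
    """Sensor reading, clipped at saturation."""
    return min(radiance * transmittance, SATURATION)

def update_transmittance(t, detected):
    """Nudge transmittance toward the mid-range target."""
    return max(0.001, min(1.0, t * TARGET / max(detected, 1.0)))

radiance = 4000.0          # scene point far beyond sensor range
t = 1.0
for _ in range(10):        # iterate until the pixel is well exposed
    t = update_transmittance(t, detect(radiance, t))

print(detect(radiance, t) < SATURATION)   # pixel no longer saturated
print(detect(radiance, t) / t)            # recovered radiance ~4000
```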
Lighting Sensitive Display: Lighting Sensitive Display of David
181 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/lsd/ Although display devices have been used for decades, they have functioned without taking into account the illumination of their environment. In this project, an initial step has been taken towards addressing this limitation. We are exploring the concept of a lighting sensitive display (LSD), a display that measures the surrounding illumination...
Flexible Depth of Field Photography: Extended Depth of Field - Captured Video (f/1.4)
79 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/flexible_dof/ The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between DOF and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today’s cameras ha...
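The trade-off in the description above (opening the aperture raises SNR but shrinks DOF) can be seen numerically from the standard hyperfocal-distance formulas. This is textbook thin-lens optics, not code from the project.

```python
# Depth-of-field limits from the hyperfocal distance (thin-lens
# approximation). Opening the aperture (smaller f-number) shrinks DOF.

def dof_limits(f_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable focus, in millimeters."""
    h = f_mm ** 2 / (f_number * coc_mm) + f_mm      # hyperfocal distance
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    far = (subject_mm * (h - f_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

for n in (1.4, 8.0):
    near, far = dof_limits(f_mm=50, f_number=n, subject_mm=2000)
    print(f"f/{n}: depth of field = {far - near:.0f} mm")
```

For a 50 mm lens focused at 2 m, the DOF at f/1.4 is several times thinner than at f/8, which is exactly the constraint flexible-DOF imaging tries to escape.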
Flexible Depth of Field Photography: Extended Depth of Field - Computed EDOF Video
150 views · 8 years ago
Project Page: www.cs.columbia.edu/CAVE/projects/flexible_dof/ The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between DOF and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today’s cameras ha...
Flexible Depth of Field Photography: Extended Depth of Field - (f/1.4, Fixed Focus)
56 views · 8 years ago
Flexible Depth of Field Photography: Extended Depth of Field - (f/8, Fixed Focus)
38 views · 8 years ago
Appearance Matching: 100 Object Recognition System
183 views · 8 years ago
Appearance Matching: Robot Positioning
96 views · 8 years ago
Appearance Matching: Real-Time Robot Tracking
64 views · 8 years ago
Appearance Matching: Temporal Appearance Model of Scanned Object
69 views · 8 years ago
Appearance Matching: Temporal Inspection
67 views · 8 years ago
Photometric Invariants for Segmentation and Recognition: Cluttered Dynamic Scene
37 views · 8 years ago
Photometric Invariants for Segmentation and Recognition: Varying Illumination
40 views · 8 years ago
Photometric Invariants for Segmentation and Recognition: Reflectance Ratio Invariant
37 views · 8 years ago
Photometric Invariants for Segmentation and Recognition: Reflectance Based Recognition
30 views · 8 years ago
Photometric Invariants for Segmentation and Recognition: Region Reflectance Ratios
24 views · 8 years ago
Depth from Defocus: Magnification Variation in Conventional Lens Due to Focusing
260 views · 8 years ago
Worst project 😂😂 sorry but it's true, but I love the glasses
Next time make array of dicklets
It could be handy for detecting nuclear blasts
we already have listening stations and satellites for that
So very cool ❤
Neat project. Some technical details of the presented hardware version I looked up after watching the video: They're transmitting a constant frequency signal for 30us, generated using an integrated VCO (MAX2752), so technically not a chirp (which they acknowledge in the paper). Each unit has a different Rb value for a different VCO control voltage and therefore a different VCO frequency. The VCO is on for 40us, the RF switch is there only to cut out the first 10us while the VCO's output settles. They're currently using 2.04-2.11GHz (that is, the current hardware is designed to be set to a single frequency in this range), mostly to keep the antenna size reasonable. They're using an SDR for reception and checking a 2MHz band for each cricket, to account for drift in frequency. Current range is up to 40 feet (~12 meters, line of sight). Anyway, the basic idea of a self-powered light sensor working as a kind of voltage-to-frequency converter is neat and they've presented a bunch of potential applications.
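The numbers in the comment above (a 2.04-2.11 GHz usable band, one 2 MHz slot per cricket to absorb drift) suggest a simple back-of-the-envelope channel plan. The slot-assignment scheme below is my own illustration, not the scheme used in the paper.

```python
# Channel-plan arithmetic for crickets sharing one band. Each cricket
# gets a 2 MHz slot; the receiver maps a measured carrier to a slot ID.

BAND_LO_HZ = 2.04e9
BAND_HI_HZ = 2.11e9
SLOT_HZ = 2e6

def num_slots():
    """How many crickets fit in the band at one slot each."""
    return int((BAND_HI_HZ - BAND_LO_HZ) // SLOT_HZ)

def slot_center(i):
    """Nominal carrier frequency for cricket i."""
    return BAND_LO_HZ + (i + 0.5) * SLOT_HZ

def identify(measured_hz):
    """Map a measured (possibly drifted) carrier back to a slot ID."""
    return int((measured_hz - BAND_LO_HZ) // SLOT_HZ)

print(num_slots())                        # 35 slots in 70 MHz
print(identify(slot_center(7) + 0.4e6))   # drifted carrier still maps to 7
```

Scaling beyond a few dozen sensors would need narrower slots, tighter frequency stability, or a wider band.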
Hell yeah, cool asf. Depending on output power, these could be great for ham radio beacons / rescue beacons
yo this is awesome
The automation applications for this are interesting. I'll have to do some reading on what your signaling protocol looks like.
The glasses example may be fast to respond to intensity changes, but don't they always attenuate the light by at least half?
Cool, but I like turtles.
Nice
If those glasses can work while inside a car, count me in, because I stupidly paid for transition lenses once. They were great while I was outside, but as soon as I got into the car where the UV light was blocked, they did nothing.
Very cool! Reminded me of all the self-powered haptic work we used to do a decade back. Thank you for sharing, and the project page is excellent!
Oh man, I’ve been wanting fast changing transition lenses for years. Sign me up!
And with an override switch for the occasions when you need it to NOT transition rather than having to bring a second non-treated pair.
Nice
Oh man, how many inventions can you cram into a solar powered radio thingy?
wow. ultra low power stuff like this is so cool
3:40 so basically self dimming welding goggles?
A truck equiped with a radio passes your car and your sunglasses go black 😂
no? the glasses don't transmit or receive RF, they power the photochromic cell instead
Self-powered and light powered are not the same thing.
Bruh give me an example of something self powered that doesn't have an energy source 😂 Self-powered in this case means no battery or wires
I feel like in this case it could be argued so, because the sensor is always functional, given it senses light.
this is neat!
Consider me subscribed to this channel 🦗, awesome demo
Amazing idea. To scale it up, I'm curious how much bandwidth and selectivity you need/get with hundreds or thousands of sensors. Perhaps this circuit could be made in CMOS and miniaturized.
I don't think it would scale down. It's basically a solar panel.
@LittleRainGames they showed in this demo how miniature it could get, about four square millimeters; they just haven't actually built one that small yet. All you'd need is a base station able to handle that many signals.
The usual trouble with sensors is bringing them power. This is very fascinating
Cricket in the cricket ground.
Very cool.
3:50 is already a thing in automatic welding goggles, but they use a solar cell to provide power and an IR/UV sensor to control the attenuation. Their circuit is dumb, though, and usually is only able to turn the LC glass on/off, not attenuate continuously
Not all are solar powered, many take batteries. Also they go from dark to nearly black, whereas eyeglasses need to go from as clear as possible to only somewhat tinted, so I imagine the exact type of LCD used is important. Also the fact that these are based on a low power RF link means fewer wires in wearable devices if the sensor and what it controls are remotely located.
If I understand it correctly, based on the circuit diagrams shown, the glasses don't use an RF link; rather, they modified the Cricket circuit to drive an LC cell instead of generating a radio burst.
@chrismofer Tech that originally came from protecting bomber pilots from flash blindness in nuclear war. Helmets were designed to have very fast-acting light attenuators in their visors. Definitely beats wearing an eye-patch over one eye (so if you did get blinded you could switch to your other eye) or flying with flash curtains and only instruments.
Exciting prospects for lightweight designs 👍
Compelling stuff, well explained! Thinking further: if the camera perspective changes, perhaps movement and rotation vectors of the camera can be determined by the relative changes and movement of lighting values in the mask. My application is computer vision in a software space, but I'll still try experimenting with the gradient masks you showed for the prototype example. It seems like a good way to determine relative movement vectors within a static scene. By doing this in hardware, you've achieved great results. In particular, I am impressed by the self-powering versions described in the other video on this project. Thanks for sharing and documenting your inventions!
where is the code ? is it open ?
Would the next step be to use an e-ink display (or something similar that consumes almost no energy when it isn't being updated) to dynamically update the masks of the freeform pixels? Could the traditional camera use a different detection algorithm to effectively train the masks and the inference network at the same time?
Thank you for the high-quality research video! What kind of mask do you use for each pixel? Is it some kind of optical modulator or a prerecorded plate?
I have the same doubts. BTW, nice job!
this was in 2006!?!?
January 2006 :)
Is this something that could be added to games like Assetto Corsa, or IRacing?
Hello, I have a question. Hadamard codes are projected directly on the wall? Does this mean that the number of light sources is as many as 100 and the corresponding demultiplexed images have to be taken 100 times?
Hey, I am a student from Germany. Great work, thanks for sharing. I would like to verify your results; did you share some code on GitHub? Kind regards, Paul
What games have this technology?
So, we're making the floating face dude from power rangers now?
It makes sense why this showed up in my feed...
It makes sense why there was corn in my toilet this morning
open this and macintosh plus in two tabs next to each other at the same time lol
Absolutely brilliant
This is good, but if you're looking for the best power and computation method, why not just use the sun as the structured light source? At most it would take adding a small device to look at the sun. Then just something to move shadows.
what can make the shadows move, any idea?
@keiwang-nd2kg If you wanted the most control, then you would probably want a liquid like water in a clear container circled by speakers. That setup would take a lot of computation, so maybe just a hard, clear plate with black sand on it, attach something to make it vibrate in a bunch of places. You would have just about as much control but less computation cost. Then, there would be a list of less fancy ways, too.
how do I get the game? I'm blind and want to test/play the game if possible.
From what year is this video? it looks quite old (perhaps 80-90s?)
It says 2006 at the beginning of the video so i assume it is from about that time period.
The rolling Sphere looks like a Pokeball!
Good!
Sir, could you please send me the code for removing the snow in an image, MATLAB code
This is astonishing
We already have Gaussian blur, thanks
I am new to projection technology, please what do you mean
@emmanueloluga9770 just a joke, because Gaussian blur is the most standard and unadvanced blur in Photoshop
@h10hunter oh ok, thank you for the clarification