Me when I accidentally vote for Supreme Chancellor Palpatine because an insect flew through my voting display:
What do you mean accidentally? You vote for the only candidate on the ballot.
@AlucardNoir Write-in candidates are perfectly valid. @SupremeChancelorJarJar
@AlucardNoir Are you saying US elections are fake because there are no options on the ballot? :D
I did not expect you to have genuinely good applications for this type of display; I always thought it was a gimmick with no practical use.
Imagine combining this display with mid-air haptics like what Ultraleap did using an array of transducers, so you can feel when you touch it. You could even use Leap Motion for hand tracking to detect interaction, though that last part may be unnecessarily overcomplicated for the interaction tracking.
Thanks for the comment. Haptics is indeed an idea that I had already thought about.
Tactile interactions would definitely be great. Maybe I'll have a look at that.
I briefly looked at the Leap Motion solutions (hand sensor), but they were too expensive for me.
@makermac70 Look for used ones or the older models; you can get them for much less.
How about an air nozzle to lightly pulse air aimed at the fingertip, or use a sub-woofer/piston?
This is really SOMETHING! If any of our services are required, being a part of it is always a pleasure!😊
no way, big fan!
No way!!! You Guys are awesome!!!
The Legend has arrived
Damn. PCBWay actively looking for DIY creators. So awesome! Much support for PCBWay.
Secure input is actually a really good idea for this, especially if you combine it with a scramble pad where digits are in a randomised sequence
Although, a scramble pad on a regular screen is probably good enough for almost all applications.
@antonliakhovitch8306 Other than concerns about (1) hidden cameras or observers watching you use the pad (also a solved problem, just requiring some other method such as physical barriers or polarized filters to reduce the viewing angle to the intended one; this display may be less obtrusive than some of those, so preferable in some situations), or (2) a situation where the secure code entry issue is COMBINED with the sterilization issue. A scramble pad is still a touch screen, so it can be a vector for infection.
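For anyone curious how the randomized layout part could be generated, here is a minimal plain-C++ sketch of a scramble pad shuffle; the serial-style output and the idea that the host GUI draws the keys in this order are illustrative assumptions, not the author's actual implementation.

```cpp
// Sketch: generate a randomized keypad layout for a scramble pad.
// Plain C++; a host GUI would then draw the keys in this shuffled order.

#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

int main() {
  std::array<int, 10> digits = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};

  std::random_device rd;                            // entropy source for the shuffle
  std::mt19937 gen(rd());
  std::shuffle(digits.begin(), digits.end(), gen);  // Fisher-Yates under the hood

  for (int d : digits) std::printf("%d ", d);       // e.g. "3 7 0 5 ..."
  std::printf("\n");
  return 0;
}
```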
Fantastic video. Not only the result, but the research you went through to get to the final result. That's really useful for a project I am working on, thanks!
That's great. It's a better way than the mirrored parabolic "hemispheres" that give the illusion of a hologram. I like your solution. Thumbs up.
This would be excellent to combine with those sonic arrays that provide haptic feedback by making standing waves of low/high pressure in a 3d space. (look up sonic tweezers for a sense of them).
If you could create a sense of feeling at the location of the buttons, that would remove a lot of the annoyance people might have with imprecise entry that would discourage its use.
Now I just want to experience it in person, I'm sure it's on another level than seeing it on video
It's certainly different to see it right in front of you. And as with most of my projects, I have the problem of not being able to show it as it is in real life in a video.
The elevator at my office has been using this for its buttons since Covid, when people wanted every no-contact solution possible
Great project -- It's a heck of a lot bulkier than what they show in sci-fi movies ;)
Always is, until we get there. Imagine what they thought TVs would be like. Heck, before, people thought we'd still be watching proper TV far into the future. Now most people don't have any sort of cable package.
Same diff with Netflix and stuff, but still, very different from what they imagined.
It has a very Warhammer feel to it. Actually that’s why I clicked on it. It looks exactly like something the Adeptus Mechanicus would build.
@paulbunyangonewild7596 The "we miniaturized X, therefore we can miniaturize Y" logic is dangerous. There's subtlety.
CRTs set a limit to how thin TVs could be, until we came up with an entirely new technology to replace them. Floating displays would also require an entirely new technology to make the emitter smaller than the image.
You can't predict if/when a new technology will be developed. Maybe it'll be tomorrow, maybe in a few decades, and maybe we've already reached some fundamental limits.
(The exception is when the theoretical basis for the device is already there, but engineering work has to be done to figure out how to manufacture it. That's why we've been able to accurately predict advancement in computer processors.)
Built into a desk, man
The emitter can be smaller than the image if a laser array projector is used. It's brighter and more controllable, and it can also provide enough rapid movement for a 3D persistence-of-vision effect.
Pepper's Ghost always impresses. Here's an idea for tuning your sensors: Light up a box and record the sensor data as you repeatedly "push" it. Use that resulting data as your parameter range for that button/area, kind of turning that fine-tuning process inside out.
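A rough Arduino-style sketch of that calibration idea, assuming a placeholder readDistanceMm() stands in for whatever ToF read call the build actually uses: record the sensor while the lit button is repeatedly "pushed", then keep the observed min/max as that button's trigger range.

```cpp
// Sketch of the "record a push, keep the range" calibration idea (Arduino-style).
// readDistanceMm() is a placeholder; swap in the real ToF library call.

uint16_t readDistanceMm() {
  return analogRead(A0);                // dummy stand-in so the sketch compiles
}

const unsigned long RECORD_MS = 5000;   // record repeated pushes for 5 seconds
uint16_t minMm, maxMm;

void calibrateButton() {
  minMm = 0xFFFF;
  maxMm = 0;
  unsigned long start = millis();
  while (millis() - start < RECORD_MS) {
    uint16_t d = readDistanceMm();
    if (d < minMm) minMm = d;           // keep the observed extremes as the
    if (d > maxMm) maxMm = d;           // trigger range for this button/area
  }
}

bool buttonPressed(uint16_t d) {
  return d >= minMm && d <= maxMm;      // reading inside the recorded range = "pressed"
}

void setup() {
  Serial.begin(115200);
  Serial.println("Push the lit button repeatedly...");
  calibrateButton();
  Serial.print("range: "); Serial.print(minMm);
  Serial.print(" - ");     Serial.println(maxMm);
}

void loop() {
  if (buttonPressed(readDistanceMm())) Serial.println("press");
  delay(50);
}
```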
Even before the pandemic, I always wanted more touchless machines. Not only does it keep your hands clean, it adds a whole extra level of security. I feel like it needs to be refined a bit more, but this would be ideal for ATMs.
Does anyone still use ATMs nowadays? They feel like a thing from the early 2010s.
Around me there are already a bunch of cafés/shops where it's impossible to pay with cash, if only because there are no cashiers; they are automated and human-less (payment is possible only by card / QR / Face ID).
Hey, I tried building a vertical flying trackpad, and I simply used one of those cheap laser projection keyboards. You don't have to modify them; just lay them on their back and you can "type" in mid-air. Great video as always!
You could do a smaller display with optics to make the whole module smaller. Very cool!
Passive light position/movement sensing is prone to ambient light interference. But when a sensor detection field is instead illuminated with pulsed light and the signal processing synchronized to the pulse's phase, that effectively plunges a lit room into darkness. It's an extra dimension of signal processing.
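A minimal Arduino-style sketch of that pulsed-light idea: drive an IR emitter, sample the receiver during the "on" and "off" phases, and subtract the two to reject ambient light. The pin assignments and settle times are assumptions, not values from the actual build.

```cpp
// Minimal synchronous-detection sketch: pulse an IR emitter and subtract the
// "emitter off" reading from the "emitter on" reading to cancel ambient light.
// Pin assignments are illustrative.

const int EMITTER_PIN = 3;   // drives the IR LED
const int SENSOR_PIN  = A0;  // photodiode / phototransistor amplifier output

void setup() {
  pinMode(EMITTER_PIN, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  digitalWrite(EMITTER_PIN, HIGH);
  delayMicroseconds(200);                  // let the signal settle
  int onReading = analogRead(SENSOR_PIN);

  digitalWrite(EMITTER_PIN, LOW);
  delayMicroseconds(200);
  int offReading = analogRead(SENSOR_PIN); // ambient light only

  int signal = onReading - offReading;     // ambient-rejected signal
  Serial.println(signal);
  delay(10);
}
```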
I really enjoyed seeing the thought process and the program you wrote for testing the sensor array, well done!
Very cool, especially the integration of the sensors; what an effort, insane. I wish I knew my way around this as well, but programming is a closed book to me. All the more impressive.
I saw super neat - though static - holograms in the 90's in Berlin. It was pretty epic. The stuff you can do with holograms is pretty cool, and they've since been able to animate them.
Awesome work! It brings me so much nostalgia, by watching those movies before and now seeing this in real life. You rock!
This kind of interface would be great for medical equipment.
This is awesome! WOW!
0 contamination.
This channel is so underrated 🤯
Voting would be the most practical use in my opinion
I'm not quite sure there is too much of a practical use. At the end of the day it's still a flat image. Voting on this would be cool though.
I mean... we really should get back to hand-counting paper ballots. Electronics are not secure, and there's no transparency.
A pencil would be cheaper. The secure keypad is the thing that stands out as the most useful idea to me.
Ehh... Data can be edited. Paper is still King when it comes to a reproducible paper trail.
@giin97 Because destroying a sack of ballots and replacing it before the final count is so hard....
If I was to try the finger tracking, I would most likely try a 2D IR matrix sensor and some combination of TOF / distance sensor. The IR matrix should give the position on the X/Y axis, while the TOF would give the Z/touch distance.
Thanks for the suggestion. Yes, the finger tracking is really simple. For a more advanced floating display version with better resolution I would definitely need something like that. Then I might look into 3D time-of-flight cameras, which work in a way like you suggested: by illuminating the scene with a modulated light source and observing the reflected light.
Very nice design... the whole project is executed with great craftsmanship... great work! Of course it wouldn't be feasible without the necessary background knowledge, but it's great to see that it works. Thanks for your effort 👍
Greetings from Stefan in the U-Allgäu
Very cool concept. I think the touchless aspect is actually a really great point. It would be perfect for medical settings and other "preferably restrictive situations" like public kiosks and such. It would definitely reduce or eliminate cross contamination. Security, I think, is a secondary benefit: we've already got the kind of screens you were discussing for that, since standard privacy overlays prevent seeing anything unless you are directly in front of the display. All of them are completely viable applications regardless. I just might have to build something like this over the winter while I'm stuck indoors :)
Nice work, and I like that we heard how you got there and that it wasn't all smooth sailing. Most of all, the final product really did look cool.
It'd be cool if we could finally get a floating 3D map. It might be useful in the future when we have to track drone traffic, and hopefully air speeders one day.
Very well explained. And very cool!
You could try sensing positions via the VL53L8CX sensor. It is an array of 8x8 ToF sensor elements.
This is incredible, I've never seen anything like this before!
All my life... I've been looking for this
The applications for this that you mentioned are a pretty big market. I like it.
Ultrasonic parts are used pretty often for close-distance sensors; it's basically time-of-flight with sound waves.
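For reference, a bare-bones ultrasonic time-of-flight reading in the HC-SR04 style; the trigger/echo pins are assumptions, and this is a generic sketch rather than anything from this build.

```cpp
// Bare-bones ultrasonic time-of-flight (HC-SR04 style). Pin numbers are assumptions.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(115200);
}

void loop() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);            // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL); // timeout after 30 ms
  float distanceCm = echoUs * 0.0343f / 2.0f;              // speed of sound ~343 m/s
  Serial.println(distanceCm);
  delay(50);
}
```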
A mini version of this could be used in bathroom sinks, where everyone spins the handle to turn the sink on both before and after washing their hands, but nobody ever washes the spinny thing itself.
Could have a 2D array to select temperature and flow, and it would stay at that setting until you changed it or stepped away from the sink.
@@GordieGii the "stepped away from sink" part could be detected with door closed/opened sensor or any IR sensor that detects blockage on the door and resets temp, which is enabled only till person disables sink or passes thru the sensor
You should send samples of that reflective stuff to Breaking Taps and have him scan them with his crazy microscopes. I think he has a scanning electron microscope? It would be pretty sweet to see what the differences between the materials are at super tiny detail.
This is really cool! I think another cool use of this would be to combine it with a volumetric spherical rotating display. You could then physically put your finger inside a volumetrically displayed object. With the standard vanilla spherical display this isn't possible, as your finger would get whacked off :)
My thoughts exactly
I need to see more of this!
this is really awesome
It's so cool, and I am going to combine it with my volumetric display to realize a floating 3D display!!
I wonder if you could use an ultrasonic transducer for some degree of wireless haptic feedback
I agree, tactile feedback would definitely be great. Maybe I'll have a look at that.
FYI: if you see a double image, that is because of the glass. It needs to be special glass so the beam isn't reflected from both sides of the sheet.
Nice work. The secure input idea for this is a really good application. Now, just shrink it down small enough to be practical: the size of the average credit card machine. I actually believe that may be possible with our current tech, but it'd likely be expensive to produce/sell.
very nicely done! depending on how frequently those time of flight sensors update, you could take one of those rotary mirrors from a barcode reader to make it into a sort of single axis LiDAR, enabling better tracking provided the angle of the mirror is known (trivial task with a shaft encoder). it should be a fairly simple modification to make.
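A sketch of how such a scan could be turned into coordinates, with readEncoderAngleRad() and readToFmm() as stand-ins for the real encoder and ToF drivers (and ignoring the mirror's own offset from the sensor): each sample combines the mirror angle with the measured distance into a 2D point in the sensing plane.

```cpp
// Sketch of the scanning-mirror idea: combine the mirror angle (shaft encoder)
// with a ToF distance reading into a 2D point in the sensing plane.
// The two read functions are stubs for the real drivers.

float readEncoderAngleRad() { return 0.0f; }   // stub: replace with encoder code
float readToFmm()           { return 100.0f; } // stub: replace with ToF read

void setup() {
  Serial.begin(115200);
}

void loop() {
  float theta = readEncoderAngleRad();   // current mirror angle
  float d     = readToFmm();             // distance along the deflected beam
  float x = d * cos(theta);
  float y = d * sin(theta);
  Serial.print(x); Serial.print(',');
  Serial.println(y);
  delay(20);
}
```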
I have a theory: if you moved the real screen back and forth and had a sensor telling the CPU where the monitor was, you could create a cross-section of a 3D object on the screen to match that position. Then, with persistence of vision, the floating image would appear 3D. (Moving the screen precisely would be too much work; you'd just measure where the monitor is and update the image.)
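A sketch of the slicing logic that idea implies: map the measured screen position onto a stack of precomputed cross-sections and show the matching one. The ranges and slice count below are assumptions.

```cpp
// Map a measured screen position to the cross-section slice to display.
// NUM_SLICES and the Z range are illustrative assumptions.

#include <cstdio>

const int   NUM_SLICES = 64;     // precomputed cross-sections of the 3D model
const float Z_MIN_MM   = 0.0f;   // nearest measured screen position
const float Z_MAX_MM   = 80.0f;  // farthest measured screen position

int sliceForPosition(float zMm) {
  if (zMm < Z_MIN_MM) zMm = Z_MIN_MM;
  if (zMm > Z_MAX_MM) zMm = Z_MAX_MM;
  return (int)((zMm - Z_MIN_MM) / (Z_MAX_MM - Z_MIN_MM) * (NUM_SLICES - 1) + 0.5f);
}

int main() {
  std::printf("z = 40 mm -> slice %d of %d\n", sliceForPosition(40.0f), NUM_SLICES);
  return 0;
}
```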
using CAD disassembly for the animation is perfect. and a very complete project 👍
and the outro slaps
many thanks ;)
Concerning the use of time-of-flight sensors to track the user's finger: if the sensors are powered on separately, sampled, and then depowered to prevent the laser emissions from one interfering with the reading of the other, and a lens is used to fan out the beam, then the triangulation method of tracking the finger position would not only be viable, it could achieve far higher resolution. The beam would first need to be collimated, because the raw output from the laser diode appears to spread conically; then the beam could pass through an optic like a curved reflector or a cylindrical lens to fan it out.
Thanks for your thoughts on this. Sounds like it could work. However, it requires a bunch of special optical components for IR lasers, which seem expensive... but not sure. Anyway, would be a great solution if it's not more expensive than a Leap Motion hand gesture sensor.
You can probably make the optics yourself. You'd just need to take care that the materials used are transparent to the IR wavelength of your sensors. A collimating optical train diagram can be found with a little online searching, and a cylindrical lens is literally just a polished cylinder made of an optical material that refracts your wavelength. It's also worth mentioning that you're not doing holography or anything that requires the finest optics, so if a ready-made version of what you need isn't available for cheap, it can be homemade.
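A sketch of the triangulation math behind this suggestion: two sensors sit a known baseline apart and each reports a range to the fingertip, so intersecting the two range circles gives the (x, y) position. Plain C++, with purely illustrative numbers.

```cpp
// Triangulating a fingertip from two range readings (plain C++ sketch).
// Sensors sit on a baseline of length B: S1 at (0,0), S2 at (B,0).
// Intersecting the two range circles gives the fingertip position.

#include <cmath>
#include <cstdio>
#include <optional>

struct Point { double x, y; };

std::optional<Point> triangulate(double r1, double r2, double baseline) {
  double x  = (r1 * r1 - r2 * r2 + baseline * baseline) / (2.0 * baseline);
  double y2 = r1 * r1 - x * x;
  if (y2 < 0.0) return std::nullopt;   // readings inconsistent, no intersection
  return Point{x, std::sqrt(y2)};      // take the solution in front of the sensors
}

int main() {
  // Example: 120 mm baseline, ranges of 100 mm and 90 mm (illustrative numbers).
  if (auto p = triangulate(100.0, 90.0, 120.0)) {
    std::printf("finger at x=%.1f mm, y=%.1f mm\n", p->x, p->y);
  }
  return 0;
}
```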
Well Done, This is indeed incredible.
this blows my mind, why is this not all over the place?
Probably not practical or cheap enough to get consistent results to be worth it. Note that companies want profit above all. If there's an easier, cheaper method that's less likely to cost more in repairs over time, they'll choose that.
Capitalism pretty much always destroys creativity, because creativity is diverse and requires more effort and time. Honestly amazing we make any progress at all.
This sort of thing already exists in a different way... it just uses glass or transparent plastic at an angle with a display somewhere along the target direction. It's used to make HUDs in cars and fighter jets, and, of all things, to make Hatsune Miku appear on a stage.
But these generally appear further away, not closer.
@marcasrealaccount Hatsune Miku concerts work by using a projector shining light onto the back of a special screen. You constantly see the light coming from the projectors shining onto it, and sometimes directly at the camera.
@Hachi_Shibaru The projector is under the stage and shines onto a glass/mirror rotated 45 degrees; at least that's what they used to do very early on.
Very nice project! 🙂
Mad skills, buddy! You have my respect!! Thank you for sharing this in such detail. I'm looking forward to sci-fi up my whole damn flat xD Ok, maybe not the flat, but my 3D Printer CGSD (Center of Getting Shit Done).
Thank you once more!!!!
Take care and have a great new year!
Could be useful in space stations with large numbers of inhabitants. I heard that cleaning is one of the most tedious chores on the space station, since everyone is touching everything. One more thing you don't have to clean would be nice.
it also needs an ultrasound emitter, to reproduce the touch of a button
Wow congratulations!
11:07 The LattePanda has a built-in Arduino Leonardo that can also act as a Human Interface Device (HID), which you could program directly to enter values like a keyboard or to behave like a mouse.
Thanks - yes, the Leonardo could be used as HID to act like an external keyboard. I tried to use it for basic sensor processing at first, but it somehow did not work at all. After some unsuccessful tweaking, I decided to use the external Arduino Nano.
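A minimal sketch of the HID idea using the standard Arduino Keyboard library on a Leonardo-class board; detectZone() is a placeholder for the actual sensor evaluation, which isn't shown here.

```cpp
// Minimal HID sketch for a Leonardo-class board: when a "button zone" triggers,
// type the corresponding digit as if it came from a USB keyboard.
// detectZone() is a stub for the real sensor evaluation.

#include <Keyboard.h>

int detectZone() { return -1; }     // stub: return 0-9 when a zone is "pressed"

int lastZone = -1;

void setup() {
  Keyboard.begin();
}

void loop() {
  int zone = detectZone();
  if (zone >= 0 && zone != lastZone) {
    Keyboard.print(zone);           // sends the digit to the host as a keystroke
  }
  lastZone = zone;
  delay(20);
}
```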
This is a phenomenal video. I hope you don't wait 2 more years to make another.
use a directional ultrasonic speaker to get haptic touch.
I have two silly questions:
First, how do you adjust focus? Can you control the visible position of your virtual screen?
Second, why do you need the reflector? Could you simply place the LCD directly where the reflector is? In that case the picture would be brighter. Or does it have something to do with focus?
1. In contrast to an optical system based on a lens, there is no real "focus" control due to the principle described above.
However, the quality of the reflector material greatly affects the sharpness of the formed image.
2. The reflector itself is required as a passive optical element that converges diverging light rays from the light source display to the floating image.
For the sensing you could use one of the 3D ToF sensors from Sipeed.
This would make an awesome way to unlock a door or safe.
Two mirrors at 90 degrees might work as a retroreflector. Of course, this would only retroreflect along one axis. Might be worth trying?
Three mirrors at 90-degree angles might work as a retroreflector reflecting at any angle, but they would be just so bulky.
Beautiful work! CONGRATS!!! Where can we admire it in real life? 😉
Incredibly cool🔥❤, you got a new fan! ))
I would try a lens to increase the angle of sensing in two axes before going with three sensors.
For a more proper touch setup, look into CIS sensors. They are normally used to make touch screens that work without touching the screen.
Also, would it be possible to use some fog trick to make it visible for everyone? :)
Can you put a mirror as the reflector film? That way it would be way sharper.
A mirror creates a virtual image *behind* it. The retroreflective material is what makes the "real" image in front of the device. The sharpness depends on the resolution and precision of the retroreflector cells.
*Nicely done!*
Fantastic! 100% YUM!
I may not be fully understanding this, but could you swap out tablet display for your volumetric one? Would that not create multiple in-air focal planes, and thus, have a stereoscopic effect for both your eyes?
So, you mean take a holographic display where objects appear to be inside it and use this technique to reposition them in mid air? I suspect the focal distance might be inverted, though, and it would basically render inside-out. Might make more sense to create a new hybrid display.
I'm surprised with the choice of the SBC for this project.
LattePanda Sigma is a very impressive machine, and using it to simply drive a display feels about as wasteful as using a Raspberry Pi to blink some LEDs.
Other than that - really cool project.
Could see potential other sensor types being useful - but these would surely require more than just connecting to SPI to function, so probably outside the scope of this thing.
Thanks. Yes, it may be that the SBC is a bit oversized, but I needed something with 2 HDMI outputs that could show both static and animated graphics on both screens, and as a Windows system the GUI was easy to program. What are the alternatives for something like this (other than the Pi, which doesn't run Windows)?
I wonder what would happen if you used a lenticular display as the light source. Are we closer to that Princess Leia Star Wars moment? Since one projection has a narrow field of view, what happens if you put multiple projections into the same space using different camera angles of a 3D image? Pseudo 3D?
The image of the princess is wishful thinking on the part of the idea's originators and of many children without a complete education. You can't project an image into empty space - you have to use water vapor or a laser that ionizes nitrogen atoms in the air. Lenticular film only deflects light separately to the right and left eye (stereoscopic photography). A hologram is the result of interference and diffraction. Both of these can only be viewed through the "window" that is the source of the image. They cannot be viewed from the side, like the princess, unless you can see the "window" in which they appear. Many children and adults believe in holograms à la Star Wars, but this is the result of poor education.
Would there be a way to use some form of lenticular material to create a kind of 3D version? I feel like it should be possible but not sure how you’d specifically go about it. Perhaps even just using a 3DS-type lenticular display would do the trick.
Thanks! This was very nice indeed!
Not touching anything, for security, has the extra advantage that it makes this more difficult to put skimmers on it.
This could be cool for those bathroom quality-rating screens. I don't wanna touch them, so this would be super cool.
Hi there, are you using 3M Scotchlite 7610 High Gain Reflective Sheeting for the retroreflector? Update: the website does not ship to New Zealand; is there any other place to get it?
I am using Oralite3010 (see the Hackster project description)
One thing that I am missing from the "how it works" is what changes the beam of light into something that you can see as a point. If you shine a laser, you see nothing until the beam hits an object. As you focus a light source to a point, you see nothing at the point that the light converges.
With both this and your volumetric display, what is it that causes the light to appear as a point in space?
Perhaps it will help to imagine the following: If you replaced the floating image with a real display in this position, the rays would emerge from the pixels in exactly the same way.
In both cases, the eye would see rays "originating" from a source. That is why we see a real image in the air.
A laser, on the other hand, simply generates a single straight beam of light. To "see" the laser beam requires a scattering of particles such as fog or mist.
@makermac70 I'm still missing something. As I understand your description, the light rays from the display are getting bounced and go through the points in space where the display should appear. But a light source at that point would radiate its light in every direction, while a light ray passing through that point is only visible if you are directly in line with that ray.
I can get the light rays from the display to pass through that point in space at that angle with just a mirror, no need for the beamsplitter or fancy reflective surface.
But looking at that display through the mirror won't have it appear to float in space and the viewing angle would be very narrow.
The fact that the camera can pick up the display from the side shows that something very different is happening.
What I'm missing is what causes the rays to fan out from the point in space where the display is. Why do they fan out at all, and why not from some point closer to or further from the display?
A display with a normal mirror would not cause rays to converge on a point for the eye to interpret as outgoing rays. This is only possible with this principle, and you need the components discussed.
The fact that the image can only be viewed at a narrow angle is a side effect that can be improved with complex techniques. For a better understanding of the principle, it would be recommended to read the linked scientific papers. See my project description.
@davidelang The fact is, in layman's terms, that the direct and the reflected "beam" from one source are projected onto a common glass surface (that's the top glass pane), and each image has its own slightly offset perspective. Your brain evaluates these two different images as a single image with "depth", i.e. as a "levitating" 2D object, which of course changes its perspective if you change the position of your eyes.
@DL-kc8fc So that sounds to me like the glass must be in the field of view of the viewer (something true in all the videos I've seen, but not something ever explicitly mentioned).
Am I understanding correctly?
Well done mate
nice build
Very awesome project! Truly a device of the future! Now I want holographic displays all over the place! I wonder if you can widen the FOV at all. What would a slightly curved retroreflective surface look like? What would a really curved surface look like?
Thanks for the question. Interestingly, curving or generally modifying the surface of the retroreflective layer has no effect at all. You could even take a wave shape and stick the retroreflective film on it - the result would always be the same. This is due to the unusual property of reflecting light rays in exactly the same direction.
@makermac70 That is so cool! I didn't think of that!
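A small numeric illustration of why the film's shape doesn't matter: an ordinary mirror's reflection depends on the local surface normal, while an ideal corner-cube retroreflector sends the ray straight back regardless of how the element is tilted. This is a plain-C++ sketch of the geometry, not anything from the build itself.

```cpp
// Why curving the retroreflective film changes nothing: a mirror's reflection
// depends on the local surface normal n, while an ideal corner-cube
// retroreflector returns the ray along -d no matter how the element is tilted.

#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 mirrorReflect(Vec3 d, Vec3 n) {           // n must be unit length
  double k = 2.0 * dot(d, n);
  return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

Vec3 retroReflect(Vec3 d) {                    // independent of the local normal
  return {-d.x, -d.y, -d.z};
}

int main() {
  Vec3 d  = {0.0, -0.6, -0.8};                 // incoming ray direction
  Vec3 n1 = {0.0, 0.0, 1.0};                   // flat film
  Vec3 n2 = {0.0, 0.38268, 0.92388};           // film tilted by ~22.5 degrees

  Vec3 m1 = mirrorReflect(d, n1), m2 = mirrorReflect(d, n2);
  Vec3 r  = retroReflect(d);

  std::printf("mirror, flat : %.2f %.2f %.2f\n", m1.x, m1.y, m1.z);
  std::printf("mirror, tilt : %.2f %.2f %.2f\n", m2.x, m2.y, m2.z);
  std::printf("retroreflect : %.2f %.2f %.2f (same for any tilt)\n", r.x, r.y, r.z);
  return 0;
}
```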
I wonder whether (and by how much) you could make the device more accurate by wrapping a finger in reflective foil?
Great project. I am wondering if you considered the multi-point capable VL53 variants. I know (a year or two ago) they were available, though not always for Arduino as they needed more processing power, but they did support gestures and multiple finger points.
Thanks... Yes, I looked at the VL53L8CX, which can create a 64-zone mini depth map. Very cool product and definitely worth a try if I want to improve the machine.
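A sketch of how such a 64-zone depth frame could be reduced to a single fingertip estimate: take the centroid of all zones closer than a threshold. The frame layout, threshold, and the idea that the multizone driver fills the array are assumptions; the actual sensor API is not shown.

```cpp
// Reduce one 8x8 multizone ToF frame (row-major distances in mm, 0 = invalid)
// to a fingertip position: centroid of all zones closer than a threshold.

#include <cstdint>
#include <cstdio>

const uint16_t TOUCH_THRESHOLD_MM = 150;  // zones closer than this count as "finger"

bool fingertipFromFrame(const uint16_t d[64], float &col, float &row) {
  int count = 0;
  float sumCol = 0.0f, sumRow = 0.0f;
  for (int i = 0; i < 64; i++) {
    if (d[i] > 0 && d[i] < TOUCH_THRESHOLD_MM) {
      sumCol += i % 8;                    // column index of this zone
      sumRow += i / 8;                    // row index of this zone
      count++;
    }
  }
  if (count == 0) return false;
  col = sumCol / count;
  row = sumRow / count;
  return true;
}

int main() {
  uint16_t frame[64] = {0};   // in the real device, filled by the multizone driver
  frame[27] = 120;            // pretend a finger shows up near the middle
  frame[28] = 130;
  float col, row;
  if (fingertipFromFrame(frame, col, row)) {
    std::printf("finger near column %.1f, row %.1f\n", col, row);
  }
  return 0;
}
```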
You should have tried putting the sensors in a triangular arrangement with the emitters as the vertices, all pointing towards the same spot, and used them to triangulate the position of the finger; you would have gotten so much more resolution out of them.
Nicely done.
Awesome!
Now you just need to miniaturize it so you only see a tiny "camera eye" in the wall or surface with a large floating display.
The optics of a retroreflective system like this mean that you can only see the image when you can see the retroreflector "through" it. It can't be made any smaller than the display.
@@alananderson2616 Yeah, you're right! It's not possible to make an image float in "mid air" like in science fiction. Light just doesn't work that way.
If you have the LattePanda, just get a Leap Motion thing for the hand stuff.
Thanks for the suggestion. I actually looked at the controller first, but at over 200€ I felt it was a bit too expensive for my DIY project.
Wouldn't it be really easy to "shoulder surf" these displays? If you know where all the buttons are you know what's being pressed. I guess you could randomize the layout but that would hurt it in situations where muscle memory is important.
I'm wondering if you could actually make a lenticular aerial display, similar to the Looking Glass Factory ones, with this sort of technology. I'm just not sure if the material would give you enough brightness for it.
Thanks for the idea. I also thought of it once and tried displaying a simple lenticular picture in the air some time ago. Unfortunately, the effect is barely visible in my case. The resolution is probably too poor, and due to the limited viewing angle only a few partial images are recognizable, which does not allow the 3D effect to come through. This is a real pity, because I had hoped to be able to use the aerial image as a 3D display.
He's done it again :) Still putting together my Nipkow display.
Three mirrors in a cube-corner formation reflect the light back to where it is coming from. Could it be that your weird reflector surface can be replaced with 3 mirrors?
How does it compare to using a concave mirror to produce an image mid-air?
What if you placed the display flat?
Then have the retroreflectors in a cone shape holding the display on all sides (360°), and above it a bowl-shaped mirror whose underside is a beam splitter.
Wouldn't that give you a full 360° view, like a half dome?
Thanks for the suggestion. It is quite interesting, however, that forming a cone - or generally modifying the surface of the retroreflective film - has no effect at all. You could even take a wave shape and stick the retroreflective film on it. The result would always be the same. This is because it reflects light in exactly the same direction. But because of the ray optics behavior we're used to, it's a little bit difficult to imagine...
This is soo awesome!!
Wow. That's like totally nuts :D
Can you make a handheld version?
There are ATMs that use pretty much this exact setup.
This is so cool. If you add more sensors, you can make a full range of input.
I'm wondering, why is the light not projected onto the ceiling? Light will keep on traveling until it meets something; does the beam splitter slow down the photons so they stop at a certain point?
Or is it intersecting light particles? I don't really get how this works if the beam splitter just splits the light in half...
If it does split it in half, then why doesn't it appear on the wall next to it?
The second question I have: can this be made more compact?
If it's basically light stopping at a certain point, can you make a tiny display and then make it appear larger with a lens?
Sort of like AR glasses where you can interact with it, but instead it's physical.
If this can be made as compact as those new glasses (I'm talking about the Apple Vision Pro or glasses from Meta),
and the input is reliable enough, it could be used commercially!
Thank you for your questions. The reason why there is no image on the wall is because it is not a projection principle.
Rather, the impression of an image in space is created by the convergence of light rays. To understand this better, you might imagine that a real display would emit the same light rays at exactly this point.
To make the application more compact, there are already concepts with optical elements that combine the beam splitter and the retroreflector. The design becomes much more compact, but still a box, and not really minimal.
This is ingenious! Where did you learn to use all these different components? Trial and error? YouTube?
Thank you. I have read the scientific papers on this after learning the general principle.
I think it'll look so much better in real life.
Yes it really is quite impressive when you see it in real life. It's not very easy to show it in a video, but I've had that problem with all of my projects so far.