the naked minion streaking through at 3:44 is the best part
Yeah, this is genuinely some next level stuff. In terms of game graphic / interaction capabilities and MR merging/blending. Have not seen the term 'visuotactile' before. Nice work.
Incredible! And with Gaussian Splats, the illusion will only get even better!! I love this research! Such astonishing work!
The safety challenges around this are staggering
Such a cool research. Great work!
This is great! Excited to experience something like this some day
🤯🤯🤯🤯 mind blown. This would make for some real trippy games, literally. 🤔 Maybe even dangerous if the objects don't re-appear fast enough while moving towards it.
This is so freaking cool this is exactly what I want to do for work is develop this technology
I love the naked minion at 3:45
Simply amazing
This is Mind-boggling.
2:44 How does the scanning work here? Do you need an iPhone?
Finally some amazing AR AI integrations. Well done!!
Wow this cool! I hadn't really thought of this before but it makes sense to do for certain applications. I liked the elevator one...would be good to have an AI stable diffusion built in that could help blend and augment the 2 together...
That's amazing!
In real-world vehicle accident investigations, a common "defense" after a collision is "Officer, I did not see them/it."
Usually this means the object was in view but the driver's ATTENTION WAS FOCUSED ELSEWHERE.
That's the key. And a major problem.
The developers of this attempt to address the safety concern by removing the mask (making the real object that was invisible visible again if one gets too close, in an effort to avoid collision).
But it doesn't always work that way.
In real life, a human's visual attention can be focused on something else even if the dangerous object reappears in their field of view.
The same goes for our other senses.
This happens all the time in real life, and it WILL happen with this technology, even if an automatic "alarm noise" is introduced. Think "Officer, I did not see them and I did not hear a horn."
My DJI drone, for example, has proximity detection and both visual and audio warnings.
I can't tell you how many times I failed to even notice they were on: my ATTENTION WAS FOCUSED ELSEWHERE.
Be careful. This tech could most definitely result in unintended injuries.
You make real life seem much more dangerous. At least the tech can add some tools for prevention; you can't account for those who choose to ignore it.
Wooh😮
3:44 hmmm
yup, that's it 😭
Lmao
This is incredible 🤯 OMG future is exciting
This is actually very insane. It looks so good
From a technical perspective, these are incredibly amazing times we're living in 🤯
This is awesome and mind-bending indeed. I'm able to follow, but just barely lol...
All that said: outside of real-time mapping of your physical space and reprojecting a photorealistic/static version laid over your actual space, what is the actual use case for such an application?
Can my Meta Quest 3 do this?
Camera access is completely blocked off for third-party developers (outside of maybe some secret program hidden behind seven NDAs). So unless Meta releases advanced AR apps itself, they are not coming to the Quest 3.
When can we try this on our own MR devices?
So fun to consider adding robotics as little stage makers :)
If you forget to take the VR headset off and clean your room, you'll be disappointed when you take the headset off 😂
Aside from that... Amazing stuff!!
Who else saw a nude minion at 3:45?
This is amazing! I've always wanted to work with VR/AR and neural interfacing. Seeing demos like these is really inspiring. I'm curious how objects like the real cart are segmented and modeled, especially since you're still manually superimposing a model of it. The occlusion seems like a very solvable problem though, given how the current build effectively removes IRL objects in space.
Unity is great :) Would love to continue the convo
shoaib
"hey glasses. Clean my office please." .. 🎉
2:40 Depending on the scene, I would leave the red outline of the object.
Mostly because I'm worried I'll stub my toe on a coffee table that is hidden.
In terms of technological advancement, this is crazy stuff. Using AI to make items disappear that are actually in your real surroundings is a terrible idea though, no way the computer is going to be able to keep up all the time and keep you from whacking stuff
This is meant for cognitive and behavioural therapy. In a therapeutic setup the computer is most likely to keep up, due to the limited number of objects and pathways.
Think of this like a real-time holodeck where, behind the scenes, tiny robots rearrange the room to make the illusion seamless. All you would need is clever programming. Is the patient able to see through the matrix? Maybe... but that's the beauty of a therapeutic environment: patients choose to be in the illusion, they WANT to be tricked.
Cool - but honestly it looks scary… feels like a fever dream
This is fun
Roger Rabbit? Is that you!?
(Seriously, THIS IS EXACTLY MY TYPE OF XR SOFTWARE! I hope this can be accessible to more devices, companies, and hopefully, have an open source variant of it as well!)
Very cool concept, but also still very far away from working as seamlessly in the real world as the video seems to depict. The objects still need to be scanned and uploaded manually, the environment still needs to be scanned and rendered in unity, the virtual characters and objects still need to be rigged and physics manually created and most of all, the tedious programming needs to be done by hand to make any of this all work together. The video seems to suggest that one day, this can be done with advanced machine learning to infer digital environments automatically. That's the key word with all this "groundbreaking" technology though... we'll have it "one day", aka not any time soon 😅
Sooner than you might think 😉 The scanning will be taken care of just by looking around with the headset; then the uploading and rendering will happen automatically, almost in real time.
Cool stuff :D
So you can remove some people from your sight, next level blocking
I hope you take care and don't trip over an actual chair in front of your stairs that was occluded. 😊
I think they mentioned that when you are approaching or reaching towards an invisible object, the virtual copy automatically moves back to that spot and makes it visible again. I know it’s not perfect, but it’s still pretty cool!
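For anyone curious how that safeguard might work under the hood: it could be as simple as a distance-gated opacity ramp on the hidden object's mask. This Python sketch is purely illustrative; the function names, radii, and linear fade are my own assumptions, not the actual implementation from the video.

```python
import math

# Illustrative radii (metres); the real system's thresholds are unknown.
REVEAL_RADIUS = 0.75   # closer than this, the hidden object starts fading back in
SOLID_RADIUS = 0.40    # closer than this, it is fully visible again

def distance(a, b):
    """Euclidean distance between two 3-D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def reveal_opacity(user_pos, object_pos):
    """Opacity of the 'diminished' real object: 0.0 = hidden, 1.0 = visible.

    Linear ramp between the two radii, so the object fades back into view
    as the user approaches rather than popping in at the last moment.
    """
    d = distance(user_pos, object_pos)
    if d >= REVEAL_RADIUS:
        return 0.0
    if d <= SOLID_RADIUS:
        return 1.0
    return (REVEAL_RADIUS - d) / (REVEAL_RADIUS - SOLID_RADIUS)

# A hidden obstacle 0.5 m away is already partially faded back in:
print(reveal_opacity((0, 0, 0), (0.5, 0, 0)))
```

Per frame, the headset would evaluate this against head (and ideally hand) position for every diminished object and drive the mask's alpha with the result, which matches the "it reappears as you approach" behaviour described above.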
3:44 if you are coming from techlinked
I feel like I am seeing what will be ubiquitous tech in the next 10 years. Combine this with Apple's "PASSTHROUGH IS EVERYTHING" idea and baby, you got a stew going.
American Lawsuit Simulator VR™
a very cool gimmick.
This has so many blind spots and things you didn't think about. Be careful just meshing... around.
So AI is the way to become a superhero after all. We kept thinking it was through DNA manipulation hahaha...
Wow, they have achieved the feat of causing nausea and headaches without a VR headset!
we must inform them that unreal 5 has been released!
INFUCKINGSANE
You suggest?
I suggest everyone eat a vegetarian diet.
Wow, schizophrenia? Is that you? 😅