Looks like a great improvement. You are getting much better frame rates, if you can get the hardware encoding working it will be quite a system.
I'm already eyeing the Orin NX, which has hardware encoders and even more performance.
You're my hero!!!! Following in your footsteps!!!!!
Thank you for watching! All the best!
One idea that might look really cool instead of, or next to, object detection: having real-time 3D mapping of your environment like indoor drones use, and then having it overlaid, or shown as a rotating 3D mini-map in a corner 🤔
Not sure if that can be achieved via direct 3D motion tracking or if those drones require lidar sensors.
All of it is possible and you can't imagine how many pieces of code I've started as proofs of concept. So, I assure you that these are either on the list or have already been started.
I absolutely love your work on this!
Would love to build a simpler version of this for cosplay helmets, just to let some of those in impossible-to-see-out-of costumes see the light!
Thanks! I think when I release the source, it'll be easy to see how you can build one with fewer features. I've been designing it that way from the beginning.
I'm going to have to keep an eye on this project a little more. Do you think NVidia will create an update to fix the crashing issue?
Which crashing issue? The one with the HW encoder? Turns out that's not supported on the Orin Nano. Only software encode is supported.
With the recent explosion in AI, you might be able to use OpenAI's Whisper to feed your words into GPT-4, which could be instructed to "pretend" to be JARVIS. This might give you the most realistic AI assistant without the additional computational baggage, so you can focus on all the other things you need to dedicate processing power to.
You can get GPT-4 to pretend it is JARVIS and instruct it to act a little snarky too, and it would be nearly indistinguishable from the movie version. Now, if you are able to get GPT-4 to actually 'act' on the voice commands you provide it, then you essentially have the real JARVIS.
If the main limit constraining you is thermals, I wonder if you could get away with water cooling a more powerful card, which could give you additional headroom. You could even use a passive pumping system that uses the movement of your muscles and changes in pressure to keep the cooling fluid moving, so you don't have to run a pump as well. If you want more information, the classic example of this principle in the human body is the venous return vasculature in the calf. If you are ever interested, feel free to reach out; I would love to help design the "circulatory" cooling system to fit the heat output of your system.
Anyway, I have loved following your work so far and I am interested to see where it goes from here! Best of luck!
Thanks for the feedback Daniel! To be honest, you basically have described work that I've recently started and am making really good progress on in reference to Jarvis. I guess you'll see how spot on you were in the future.
Thermal issues have not been a constraint thus far, but they will become more of an issue as I move into the full suit. I'm intrigued by your cooling ideas. Please hit me up as I get further down the road and into the suit.
Does Google's API give you access to their points of interest? If so, it would be a nice touch to add them into your object detection as a little extra flair. I'm thinking along the lines of scanning past a town monument or such and having some flavor text from Google Maps about it.
Cool project though, I'm looking forward to seeing where you take it!
Thanks! I'm going to work at enhancing the object detection a ton in future versions. There's still a lot to do there.
Can you do a power consumption comparison for HW encoding on your other Nvidia board vs. software encoding on the Orin Nano? I guess the software encoding will not be as good as hardware-accelerated encoding for battery-powered applications, but I'm ready to be proven wrong! You could also do a SW encoding power measurement on the board that has both HW and SW as a reference.
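One way to make the comparison the comment asks for concrete: sample board power during an encode run, integrate it over time, and divide by frames encoded to get joules per frame. This is an illustrative sketch, not measurement code from the project; the sample values, intervals, and frame counts below are hypothetical.

```python
# Illustrative sketch: comparing encoder efficiency by integrating sampled
# power draw (watts) over an encode run. All numbers here are hypothetical.

def energy_joules(power_samples_w, interval_s):
    """Trapezoidal integration of power sampled at a fixed interval."""
    if len(power_samples_w) < 2:
        return 0.0
    total = 0.0
    for a, b in zip(power_samples_w, power_samples_w[1:]):
        total += (a + b) / 2.0 * interval_s
    return total

def joules_per_frame(power_samples_w, interval_s, frames_encoded):
    """Energy cost per encoded frame: the battery-relevant metric."""
    return energy_joules(power_samples_w, interval_s) / frames_encoded

# Hypothetical runs over the same 180 frames:
hw = joules_per_frame([5.0, 5.2, 5.1, 5.0], 1.0, 180)   # HW encoder board
sw = joules_per_frame([9.8, 10.1, 10.0, 9.9], 1.0, 180)  # SW encode on Orin Nano
print(f"HW: {hw:.3f} J/frame, SW: {sw:.3f} J/frame")
```

With a USB power meter or the Jetson's onboard rails as the sample source, this reduces the HW-vs-SW question to a single number per board.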
Aren't your cameras only 30 FPS? Why is the object detection running at 100 FPS?
Super cool helmet :D
Cameras are 60FPS. UI can go a lot faster and the object detection has to happen per frame. Thanks!
@@kerseyfabs Then I'm still confused: so you run the object detection on every UI update, not on every new camera frame?
So this is going to turn out to be a bit of a complicated answer.
- While I can run the UI at 100FPS+, typically it's running at closer to 60FPS when not recording or 30FPS when recording due to system limitations. So I'm not typically doing more detection than I should.
- Doing re-detection isn't a problem since it should return the same results, it's just a waste of resources.
- The object detection code was one of the last things I finished before making the initial video, so it's not done. I had planned to use a different solution than the initial one I used, so I didn't spend a lot of time optimizing it or cleaning it up.
- With that being said, your comment made me dig back into that code and I found a bug that may account for some of my flicker in the initial video.
- Now I need to get object detection working again so that I can test the fix!
Thanks for your questions!
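The "re-detection returns the same results, it's just a waste of resources" point above suggests a simple guard: skip the detector when the frame hasn't changed. A minimal sketch of that idea, using a hypothetical stand-in detector (this is not the helmet's actual code):

```python
import hashlib

# Sketch of skipping re-detection on an unchanged frame. detect_fn is a
# hypothetical stand-in for a real object detector; frames are byte buffers.

class CachedDetector:
    def __init__(self, detect_fn):
        self.detect_fn = detect_fn
        self.last_digest = None
        self.last_result = None
        self.detections_run = 0

    def detect(self, frame_bytes):
        digest = hashlib.sha256(frame_bytes).digest()
        if digest != self.last_digest:  # only run on a genuinely new frame
            self.last_result = self.detect_fn(frame_bytes)
            self.last_digest = digest
            self.detections_run += 1
        return self.last_result  # repeated UI updates reuse the cached result

det = CachedDetector(lambda frame: ["helmet"])  # dummy detector
det.detect(b"frame-1")
det.detect(b"frame-1")  # second UI update on the same frame: cache hit
det.detect(b"frame-2")
print(det.detections_run)  # 2
```

In practice you'd key on a frame sequence number from the capture pipeline rather than hashing pixels, but the structure is the same.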
Hey!
Which method did you use for the HUD overlay? How are the graphics created and updated ?
It's a custom piece of software I wrote under Linux. The HUD is completely configurable through my software though. It's all controlled now from a configuration file with custom graphics. Thanks!
@@kerseyfabs hey, i mean how did you merge the live video and the hud graphics? Did you use CUDA overlays?
Gotcha. I wrote the graphics engine in SDL. I feed the video into SDL via Gstreamer (gstappsink) then layer it in. SDL uses OpenGL under the hood.
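SDL/GStreamer plumbing aside, the core "layer it in" step is standard alpha compositing: HUD pixels are blended over the video frame using the "over" operator. A minimal pure-Python sketch of that operation (illustrative only; the actual project renders this through SDL/OpenGL):

```python
# Illustrative per-pixel "over" compositing: an RGBA HUD pixel blended
# onto an opaque RGB video pixel. Not the project's actual render path.

def blend_over(hud_px, video_px):
    """Composite an RGBA HUD pixel over an opaque RGB video pixel."""
    r, g, b, a = hud_px
    alpha = a / 255.0
    return tuple(
        round(alpha * h + (1.0 - alpha) * v)
        for h, v in zip((r, g, b), video_px)
    )

# A fully opaque HUD pixel replaces the video pixel...
assert blend_over((255, 0, 0, 255), (10, 20, 30)) == (255, 0, 0)
# ...and a fully transparent one leaves the video untouched.
assert blend_over((255, 0, 0, 0), (10, 20, 30)) == (10, 20, 30)
```

In SDL this is what `SDL_BLENDMODE_BLEND` on the HUD texture does per pixel on the GPU.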
Ok, but does the face plate open? It's definitely possible; would love to see this. PS: I haven't seen the whole vid yet, so I don't know if it does or not.
Not yet but I do have software support for it. I'm working on a version 2 that will definitely open.
@@kerseyfabs Ur lying, with the internal screen and everything? OMG, I would love to see that. After that you just need the full suit.
@@RealSyncFN It's all coming but I have a lot I'm working on. 😂
What is the weight? (Dev Kit)
Thanks for this video!
Come on, Chris. Tell us your REAL name! It's Tony Stark isn't it? Stop lying, man.
Your videos are very inspiring. Keep building. Keep learning.
Thank you Dave! When I'm flying I'll check into that name change. 😆
Good evening Kersey, how difficult would it be to set this up without all the on-screen stuff needing to be coded? I would like to do this for other cosplay characters, but can I just buy the screens, cameras, and Jetson with minimal coding?
So you want all of the hardware but no software overlays?
@@kerseyfabs Yes, I just want to be able to see what's going on around me.
@@timberfire9242 That's not too hard, but you do need to be pretty savvy with a computer, especially Linux. I'm working on getting my software and some guides released this year.
@@kerseyfabs I would love to run this for a Mando helmet; it would be epic to have thermal and low light, etc. Wonder if you could do a rear camera as well.
This would be great for a Mando (or at least the V2 I'm working on). You could do all of that as long as you can find somewhere to mount it.
Hit me if I'm wrong, but isn't there already an AI with 3D object rendering and motion tracking? Why not use that? It'd be a few steps less than programming it on your own.
A perfection has been perfected!
🔥
Sir, you may want to reach out to some body armor manufacturers and show them this tech. I could totally see a ventilated totally encapsulated Kevlar helmet being a thing.
I think the application is very interesting. When I get further down the road to production quality code, we'll see who's interested in it. Thanks!
Cool helmet! I also have a Jetson Orin Nano dev kit and Arducam imx477 cameras. But I can't find a driver and my Orin doesn't see cameras. How did you get it? Did you write your own driver?
Thanks! The driver is actually built into the distro now and you don't need to do anything but configure it. I may try to make a short video on it when I do my next install in a couple of weeks.
The secret is the tool "/opt/nvidia/jetson-io/jetson-io.py"
Enlarge your console, run that, give it a minute to come up, select "Configure Jetson Nano CSI Connector," then "Configure for compatible hardware," then pick which camera(s) you have. You can ignore the "dual" part if you only have a single camera; I think it will still work.
Make sure you save on the way out of the utility and reboot. It should work now! Let me know!
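For reference, the steps above boil down to roughly the following. The tool path is from the reply; the exact menu labels can vary slightly between JetPack releases.

```shell
# Run NVIDIA's pin-configuration tool (needs root):
sudo /opt/nvidia/jetson-io/jetson-io.py

# In its menus:
#   1. "Configure Jetson Nano CSI Connector"
#   2. "Configure for compatible hardware"
#   3. Pick your camera(s) from the list
#   4. Save the configuration on exit

# Then reboot so the new device tree takes effect:
sudo reboot
```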
@@kerseyfabs Thanks a lot for the advice! The script helped me and it works! Now I can work with the cameras. Good luck with your project!
@@oleglukyanenko8033 Thanks! I'm really glad that worked for you.
I would love to see the code behind this
This is going to be open source! I'm getting closer to releasing the source but I've decided to wait for version 2 of a lot of hardware and software. It will make a lot of people who want to copy my work happier. Copying version 1 isn't always the best idea.
I would love to see this tech in a Spider-Man face shell with a mask over it
It would have to be a more limited version since there's not really room for a display but maybe we could get "Karen" implemented!
Are you ever going to get back to me about my special project I emailed to you a while ago?
Done! Sorry about the delay!
Damn this is nice asf
I appreciate it! Take it easy!
@@kerseyfabs i really love this project. You are taking iron man cosplaying to the next level. Thank you
@@yan3748 I appreciate it! I'll have a couple more videos soon!
@@kerseyfabs You could maybe try the LattePanda 864s board; I know it's powerful, can easily run Windows, and it's smaller than your Nvidia PC.
Thanks for the suggestion!
Technically, breaking a library function counts as a release foul...
Yeah! I'm not sure how it's broken yet. No calls fail to compile; it just doesn't like passing around the memory anymore. I'll figure it out when I have some time.
Damnit, I've needed an Orin Nano forever. I used to buy Jetsons 50 at a time for 60 bucks, ughhh.
From my 24 hours of usage, I'm really liking it. Familiar platform, awesome performance!
I think you should add JARVIS. I made a JARVIS AI and it was really not that hard; it only took maybe 2 months and it functions very well.
I'm working on it. I'll be sure to let people know when it's ready. It's not enough to just make one. I have to make it as good as it can be.
I would recommend using Python to make the AI, though.
Only if I have to. It's not off the table but it is a last resort.
oke
Ngl, the HUD for your helmet is amazing.
Only thing missing is Jarvis
Check out my new channel on AI: ruclips.net/video/vR6_FkFsi3Y/видео.html
Did you ever move that printer for your poor wife? Happy wife, happy life!
Mmmm... maybe in a couple of weeks I'll do a live stream on it.
@@kerseyfabs Just give me a warning, and my Thursday PMs are spent on the Why Files stream...
Not a bad speedup, especially since you're limited to software H.264 encoding (like, how? Even the Raspberry Pi 4 has hardware H.264, lol)...
Looks like the upgrade wasn't that difficult either. Hopefully they can send you an Orin NX :P
I would love to go with an Orin NX! To your point, I'm scratching my head on the HW encoder being pulled out. I know they look for ways to differentiate them but that seems really fundamental.
Just need Jarvis and it would be perfect
How's this: ruclips.net/video/vR6_FkFsi3Y/видео.html
Hey Kersey maybe you should send your helmet to Ukraine so it can help to win the war.😉😎🤘
I'd love to help out, but I don't think the helmet's quite ready to be useful in combat.
Can I be an embedded engineer at home? Please give us a roadmap ❤️❤️❤️
Thanks for the interest! Let me link you to how I would start (not sponsored): amzn.to/3FKipvk
Get an Arduino starter kit that comes with sample code and hardware to play with. Then get it hooked up and working, study the code, and figure out how it works. Maybe I'll do a video on one of these in the future.
Thank you i am a huge fan ❤️❤️❤️
Really nice 🤌
Thanks!