The thing about AI is that it's not creating an algorithm to compare the current image to the ones you fed it before, but instead creating a model that can generalize what "blocked" and "free" mean, even for images or places it has never seen before. That's the awesome thing about AI.
Well explained :-)
What I'd expect from an AI bot is that it recognizes what blocked and free mean based on certain parameters, rather than a pattern in an image associated with blocked or free (in this case the leg of the chair might mean blocked). So I assume using it in another room with different walls or chairs etc. would completely confuse the bot. Did I get the concept right?
@@BlackXeno not necessarily, if you feed it with tens or even hundreds of thousands of images from houses, it would pretty much work in any room. In this case, if the other room has similar walls and general traits it might still work, just not as well as in the original room.
@@BlackXeno something like that, let me explain further.
Old image recognition methods had the problem you mentioned, because they were based on examples of concepts, not the concepts themselves.
Imagine that you go to a new place and there's a chair there. You have never seen that chair before, nor that kind of chair, but you know what a chair looks like, so you don't have to be told what it is. AI is the same: you teach the AI the essence of your problem, object, etc.
Imagine that you want your AI to tell whether something is a chair or not, so you feed it 5 images of red chairs and 5 images of green "not chair" objects. Your data will not be varied enough to tell what a chair is, so the AI will guess from very limited patterns; for example, because it only ever sees photos of red chairs, it will assume that all chairs are red, and green chairs will not be recognized.
The key is in training your AI with very varied and clear data, so it starts to recognize your object by its core concept, and not by what differentiates two similar kinds of objects.
In conclusion, well-trained AIs will generalize, which means they understand your problem well enough to recognize patterns in data they have never seen before, just like we do.
I hope I was clear enough! :)
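To make the "varied data" point concrete, here is a minimal PyTorch sketch (my own illustration, not code from the video) of how random augmentations are usually used to show a network many versions of each photo, so it cannot latch onto one accidental pattern like "all chairs are red". The folder layout ('blocked'/'free' subfolders) and the image size are assumptions.

```python
# Minimal sketch, assuming a dataset/ folder with 'blocked' and 'free' subfolders.
import torch
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),           # mirror the scene
    transforms.ColorJitter(0.3, 0.3, 0.3, 0.1),  # vary brightness/colour so colour alone can't decide
    transforms.ToTensor(),
])

# ImageFolder labels each image by the folder it sits in, e.g. dataset/blocked, dataset/free
dataset = datasets.ImageFolder("dataset", transform=train_transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)
```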
Thanks to all. Is there a way to ask the network whether it got the right concept? I once read about a tank-recognition AI that failed because it associated the background grass with the enemy, which proves the need for truly varied data. I wonder if it's possible to verify this ahead of time, analytically...
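There is no perfect analytical check, but one rough, practical way to "ask" the network what it keys on is an occlusion test, sketched below under the assumption of a trained two-class classifier `model` and a preprocessed image tensor `img` of shape (1, 3, H, W). If the "blocked" score only moves when you cover the floor (or the grass), the model has learned the background: exactly the tank problem.

```python
# Occlusion sensitivity sketch: slide a grey patch over the image and watch the score.
import torch

def occlusion_map(model, img, patch=32, stride=16, target=0):
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(img), dim=1)[0, target].item()
        _, _, h, w = img.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                covered = img.clone()
                covered[:, :, y:y + patch, x:x + patch] = 0.5  # grey patch
                score = torch.softmax(model(covered), dim=1)[0, target].item()
                heat[i, j] = base - score  # big drop means this region mattered
    return heat
```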
Never clicked faster before on a great Scott video
Same
Sammee
Awesome :-) Thanks
Same my dude ;D
NEXT EPISODE: HEART PACEMAKER 'DIY' or 'BUY'?
🤪
Yes, and of course, with an arduino nano.
sounds more like 'DIE' or 'BUY'
@@joewulf7378 it's fine electronics but no nanoscience haha
Yes
Wow, this is incredible! Building an AI robot with an NVIDIA single-board computer is a game-changer, especially for students interested in practical, hands-on learning in AI and robotics. My child’s experience with Moonpreneur introduced them to robotics concepts that make content like this even more exciting. Moonpreneur’s programs emphasize not only the technical skills but also the creativity and problem-solving needed to bring robots to life. Seeing how AI can be trained here really shows the potential for what kids can achieve with the right guidance. Keep up the awesome work!
This could be a really neat unsupervised learning project!
Actually it is supervised learning, since a human manually sorts the images into different classes, in this case either obstacle or no obstacle. Unsupervised would be if images of both classes were shot randomly and not sorted by a human into separate folders, and the AI model then had to figure out a pattern by itself. For example, a VAE would do this.
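A tiny sketch of that distinction (my own illustration, not from the video): with supervised learning the folder names supply the labels, while an unsupervised model, here a small autoencoder standing in for the VAE idea, gets no labels at all and just learns to reconstruct the images, finding structure by itself.

```python
# Unsupervised sketch: the loss only compares input to reconstruction, no labels needed.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(32, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)                    # compressed "pattern" the model found on its own
        return self.decoder(z).view(x.shape)   # reconstruction of the input

model = TinyAutoencoder()
loss_fn = nn.MSELoss()
images = torch.rand(8, 3, 64, 64)              # a batch of unlabelled photos
loss = loss_fn(model(images), images)
loss.backward()
```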
I think he just meant it could be. That's what I was thinking too: do it with reinforcement learning instead and let it bump into a few walls. You could probably get away without adding another sensor if you can detect that you're trying to move forward but aren't, but another sensor would definitely make it better at detecting when it hits a wall.
@@georgemazzeo7226 could be something to try. But reinforcement learning requires a lot of trial and error. I am talking about a couple thousand tries to get something driving, let alone advanced navigation. So you would need to physically put the car back at the start a couple thousand times, which is not really doable.
@@sieuweelferink6852 I was thinking you could add a bump sensor on the front; if the car hits something, it could automatically back up and maybe choose a random direction to turn (perhaps weighted by its confidence in the blockedness of that direction).
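A rough sketch of that bump-and-retreat loop; everything here (`read_bumper`, `drive`, `turn`, the left/right camera helpers) is a hypothetical robot API, and `model()` stands for the trained blocked-vs-free classifier returning a 0..1 confidence. Only the structure of the loop is the point.

```python
# Hypothetical control loop: back up on a bump, then turn towards the side that looks more free.
import random

def avoidance_step(model, camera, robot):
    left_img, right_img = camera.look_left(), camera.look_right()   # hypothetical helpers
    p_blocked_left = model(left_img)          # confidence that the left is blocked (0..1)
    p_blocked_right = model(right_img)

    if robot.read_bumper():                   # we actually hit something
        robot.drive(speed=-0.3, seconds=1.0)  # back up
        # random direction, weighted towards whichever side looks less blocked
        go_left = random.random() < (1 - p_blocked_left) / ((1 - p_blocked_left) + (1 - p_blocked_right) + 1e-6)
        robot.turn(left=go_left)
    else:
        robot.drive(speed=0.3, seconds=0.1)
```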
Cool! It's amazing that people from all domains get interested in Machine Learning. Every domain of activity can benefit greatly from using these algorithms, but sometimes we underestimate the domain-specific knowledge needed to understand and solve the problem.
Absolutely!
From his first video to all of his videos, it's always a joy to watch and learn something new. Keep up the awesome work you do, Great Scott.
Thanks, will do!
A cool way to continue the project is to implement a pathfinding algorithm. Use the obstacle avoidance to have the robot navigate to a specific location within your house.
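For the planning half, a hedged sketch of classic A* over a hand-made occupancy grid of the house (0 = free, 1 = blocked); the obstacle-avoidance model would then only handle local surprises while A* plans the overall route. The grid and coordinates are made up for illustration.

```python
# A* over a small occupancy grid (0 = free, 1 = blocked).
import heapq

def astar(grid, start, goal):
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance heuristic
    open_set = [(h(start, goal), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y, x = node[0] + dy, node[1] + dx
            if 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == 0:
                heapq.heappush(open_set, (cost + 1 + h((y, x), goal), cost + 1, (y, x), path + [(y, x)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # route around the blocked middle row
```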
This channel is like my second school because I've learned a lot about electronics and stuff from here. You're awesome!
Maybe the next step could be adding another camera for 3D vision.. Could the Jetson board handle that?
Should be able to pull it off :-)
what about a ToF(time of flight) camera
@@greatscottlab Or lidar for 3d vision
Can attest to this, yes it can, as long as the stream is 720p@30fps, but even then your frame rate will be close to 1 FPS or maybe a bit higher
Edit: Just to add to this, the board is not powerful enough to handle ML based inference and 3d vision at the same time
You could look into the Intel RealSense Depth and SLAM(location) Cameras. They do some of the computation on the camera themselves so it isn't as taxing on the Jetson nano. JetsonHacks is the best channel for that. He's made lots of videos specifically about Jetson + RealSense cameras and even manages a few git repositories, really good stuff.
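An untested sketch based on the pyrealsense2 Python API, just to show why this offloads work: the depth computation happens on the camera, and the host only reads ready-made depth frames. The stream settings are plausible defaults, not a recommendation.

```python
# Read one depth frame from a RealSense camera and print the distance straight ahead.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    print(depth.get_distance(320, 240))   # distance in metres at the image centre
finally:
    pipeline.stop()
```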
Explaining computers and Great Scott videos uploaded on the same day, what a treat!
I like the fact that you point the camera at yourself when doing the intro
Thanks :-)
Cool! Note that the Raspberry Pi, especially the Pi 4, can do deep learning tasks: Adafruit has a bunch of demos showing it off. But I think they all use TensorFlow Lite, not PyTorch.
If the robot had an ultrasonic sensor, it seems as though it could take its own pictures and learn on its own without having to be explicitly trained.
I just got into PyTorch and saw this. Perfect timing. Thanks.
My friend gave me his Jetson Nano several months ago and I never had an idea of what I wanted to do with it. I'm going to try this project out. Thanks for the vid.
Good luck!
Last week I thought of suggesting an AI-based robot to you sir, and to my surprise it actually came true 😂😂 Keep making this kind of video 👍🏻🔥
Also try using ROS (Robot Operating System), sir.
Thank you so much 😀
You never disappoint me, you're the best 😁😁😁
You actually made a product video but I must admit a very good one :D
Adding a proximity sensor and a motor to the camera hinge so that it can look around; also add a servo and hand to move small objects from the path, change the tyres to roller treads, add suspension, and add GPS so it can navigate outside on its own.
Wow! Such a cool video! Thanks for another awesome one!
Glad you liked it!
love you greatscott. . and also your voice
I just got the Jetson Nano up and running yesterday! What gr8 timing!
Nice!!
Thanks! I'm training my AI to assemble wheeled robots with mechanized hands from 3D-printed plastic parts. Hopefully I'll have a bunch of them cleaning my driveway and painting my walls.
Deep learning is totally possible on a Raspberry Pi 3 or 4 (even convolutional neural nets) when the training runs on a PC -> just search for Donkey Car and you will find everything you need to know.
Expensive training needs a GPU; just running the network doesn't.
I think an ultrasonic sensor could be used alongside the camera to make the robot notice objects and obstacles more efficiently.
Why can't i like this video more than once?
this is one of your best videos ever, thanks a lot.
Great way to start! Keep going. AI engineer from de montfort university Leicester UK
1:09 "automonous", 1:47 "automonously", lmao
I recommend the courses from Coursera on Deep Learning. They are really helpful!
At first I did not understand how it worked, but with the courses I realized the tremendous potential it has for many applications. It merges nicely with mechatronics and electronics too.
Do you recommend a specific one? I just started looking into AI CV and have some python experience
@@iAmCbasBoy You can take the Deep Learning courses by Andrew Ng. Those use Python.
But if you want to know the details about machine learning and neural networks, you can take the Machine Learning course by Andrew Ng from Stanford... that last one is free.
Try using reinforcement learning instead, by adding a proximity sensor / any kind of sensor for measuring distance or bumps.
Using that sensor value, code the reward the policy network is trained against.
And bam, you've got a reinforcement learning robot that can handle any terrain.
(Sounds easy, hard to implement though.)
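A loose sketch of what "code the reward from the sensor value" could mean in practice; `read_distance_cm` and `read_bumper` are hypothetical sensor functions and the numbers are arbitrary, but this is the kind of signal a reinforcement learning agent would maximise.

```python
# Reward shaping sketch: forward progress is good, getting close is bad, crashing is very bad.
def compute_reward(read_distance_cm, read_bumper, forward_speed):
    if read_bumper():                 # we crashed: big penalty, episode should end
        return -10.0
    distance = read_distance_cm()
    reward = forward_speed            # encourage actually driving forward
    if distance < 20:                 # getting uncomfortably close to an obstacle
        reward -= (20 - distance) * 0.1
    return reward
```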
Now this is quality youtube.
How to build an AI robot:
Buy a kit that does it for you.
Fūcking phenomenal m8.
AI actually isn't complicated, but complex. It's a number of formulas, which need to run in a certain order.
If you are familiar with neural networks, and have already built a single cell, then you are on the right path.
Creating and training the model/neural network usually takes quite a bit more computing power than running the network once it's deployed.
I wonder if it's possible to train on a computer with a powerful GPU and then deploy the model on something rather weak like a Raspberry Pi.
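That is indeed the usual workflow, and a minimal PyTorch sketch of it looks something like this (model choice and file names are placeholders): train where the GPU is, save only the weights, then load them on the small board with map_location forcing CPU.

```python
# Train on the desktop GPU, deploy the saved weights on a CPU-only board.
import torch
from torchvision import models

# --- on the desktop with the GPU ---
model = models.resnet18(num_classes=2)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
# ... training loop runs here ...
torch.save(model.state_dict(), "blocked_free.pth")

# --- later, on the Raspberry Pi / Jetson (CPU is fine for inference) ---
small_model = models.resnet18(num_classes=2)
small_model.load_state_dict(torch.load("blocked_free.pth", map_location="cpu"))
small_model.eval()
```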
Humans: *AI exists* we are going to die
AI: if picture equals to "blocked" turn, else drive
After reading your comment,
GPT-3 laughed in ML
This is interesting. I literally came across the Nvidia SBC while researching for a similar video for this channel last week.
What a coincidence. You're a smart dude, so I guess it's a no-brainer why you're always on top of your game, Mr. Scott.
Glad I could help!
@@greatscottlab I love your work. Keep pushing.
For just making an obstacle-detecting bot, why can't we use ultrasonic or IR? It would do the same thing but in an easier way, I guess...
Great project, congratulations😎😎😎
Another awesome video by great scott 😘😘😘
Glad you enjoyed it
Finally... what I've been waiting for !
So many great ideas
Good explanation
Thank you
Glad you liked it!
Excellent
Almost unbelievable how many ways today's electrical hobbyists have to create fun projects.
Btw, where do you get the jumper wires for your perfboard projects?
It's called silvered copper wire
@@electronicguy4550 thank you for the answer, but I also mean the wire he showed in his first essential tools video; he showed pliers that can bend this wire across two holes in perfboard. Thick, straight wire. (I don't know how to describe it better, I'm from Czechia.)
Great video, as the name says!! Surely an informative and interesting project! Looking forward to a new video. And thanks for such great knowledge and content. MUCH LOVE FROM INDIA😊😊
Nice, takes all the frustration out of trying to build one by trial and error
Waited for such a video for a long time... Thank you sir
That's brilliant topic to cover!
I'm your very first viewer of this video; btw, you really are a great Scott
Hey, thanks!
@@greatscottlab Sir you are my inspiration to learn about electronics (me = 13 y/o) and you are welcome
Yo make more videos like this, love this video
Finally diy AI BY GREAT SCOT COOL STUFF
Great video. I have been looking at getting a Vector robot to play with the AI, but I'm finding it may not be up to speed. I'm simply looking for something like a Vector robot that can explore the area, charge itself, and, where the AI really comes in, collect small objects and take them home. That's why I picked the Vector, since the base design is perfect, yet it has too many limitations. A perfect robotic mouse, and I am shocked I can't find anything close to it.
Thank you for this honest overview of the Jetson. It has shown no AI at all. What a pity. It only shows that the definition of what we call AI has seriously lowered its expectations.
I love how Jeremy's German accent causes him to say *vikipedia*
Never knew his name was Jeremy. Assumed it was Scott all these years...
@@revealingfacts4all LOL. same. Who’s Scott then? Scott must be Jeremy’s mentor, since apparently Scott is great! :-)
@@gregclare ruclips.net/video/yX4ZguOZjhs/видео.html&ab_channel=ManticoreEscapee
How about creating an algorithm that creates an image labelled "blocked", and then later an image labelled "free", to balance out the dataset, and after that retrains the model with the new data? This way our AI can learn from its mistakes. This could be useful in, say, another room, where the bot has learned enough to not collide often but still collides sometimes.
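A sketch of how that self-correcting loop might look; this is my interpretation rather than anything from the video, and `robot`, `camera`, `save_image`, and `retrain` are all hypothetical helpers. The idea is simply: collision -> save the frame as "blocked", clean driving -> save a "free" frame, and periodically retrain on the grown dataset.

```python
# Hypothetical self-labelling loop so the robot can learn from its own mistakes.
import time

def self_labelling_run(robot, camera, save_image, retrain, retrain_every=50):
    new_examples = 0
    while True:
        frame = camera.capture()
        if robot.bumper_pressed():
            save_image(frame, label="blocked")    # the model got this one wrong
            robot.back_up()
            new_examples += 1
        elif robot.drove_cleanly_for(seconds=2):
            save_image(frame, label="free")       # keep the dataset balanced
            new_examples += 1
        if new_examples and new_examples % retrain_every == 0:
            retrain()                             # fold the mistakes back into the model
        time.sleep(0.1)
```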
Please prepare a video on the 'Multi-Modal Mobility Morphobot' project robot
Would love to see a lidar connected to this bot, hopefully running ROS; then we would be able to map the surroundings.
You make it look easy, man. Thank you
Can we implement the same AI training model for collision avoidance in drones, too?
Great Scott: building an AI robot
Me: learns a lot from his video
but Breaks an rc car just to get a motor
But the videos are very knowledgable
Thanks mate :-)
@@greatscottlab
Do you have a subreddit like ElectroBoom has?
I just finished making notes for GA and to top it off I got this video in my feed
So lucky 😂😂
Amazing!
In my understanding of AI, it should work like this: you set the robot free to roam around, and as it goes it takes its own pictures and classifies them as "free = go" (didn't hit anything) or "blocked = no go" (hit something), learning its way like a baby learns, with a lot of bumps and falls. The stored photos can also be used later in different "unknown" environments as a baseline, and learning new places will go faster and faster as its "experience" grows.
Cellbots is another interesting approach.
Any plans to delve deeper into these A.I. subjects?
This is interesting, but if all you are doing is avoiding obstacles, a set of ultrasonic sensors, infrared sensors, or just a set of bumper switches could do the same thing, and they are ultra simple to implement.
Using sensors was not the point of the video.
If I added 2 wheels on the back that aren't controlled by any components, would that work or would it mess up the original build?
Hi there! I'm just wondering where to store these pictures? Are these directly stored on the robot's computer or can I put them on a 3rd party service?
Great video sir, but I have a doubt... The same can be made with an HC-SR04 ultrasonic sensor and an Arduino, which detects obstacles, right? So what's the difference? Sorry, I'm not teasing you, I just want to know.
Is it possible for the robot to remember the spatial layout of an area and be taught to move around the area based on this remembered layout? E.g. if it mapped your house, it could move autonomously to the toilet if simply instructed to "go to the toilet".
Dude! You really need to get deeper into neural networks, they're amazing! I've seen them do crazy stuff, like generating music from the ground up from nothing (it generates waveforms, which is a really complex task) and generating images!
Nice video! I love this subject... It would be amazing if you did more about this robot or AI!
The writing hand has a face also 😀👍
Great video. How do you power the board? As far as I know it draws around 5 A, which most "normal" power banks are not capable of delivering?
That's cool. You could set up your lawnmower to go exactly around your yard and miss all of the bad areas. I'm doing it lol
Brother, is the one you made in this video the one with the Jetson Nano or without the Jetson Nano?
I'm considering this topic as my uni project, so please help
Instant 👍 after reading the title
Hope you enjoyed it!
Super bro 🙏
Hey, thanks for the video. Can this be used by a total beginner?
You should do a video on the PlatformIO IDE discussing its benefits and drawbacks compared to the original Arduino IDE, then explain why you would use one over the other.
"Python... the language used by Raspberry Pi"... Sorry for being pedantic, but the Raspberry Pi can run any compiled or interpreted language that has an ARM compiler just fine. C, C++, Java, Go, Python, Lua, Perl, Nim and many more run just fine.
can you make a series about jetson nano?
I saw that you tried to create one of these for a robot vacuum using LiDAR . Maybe this would work better.
Oooh, first a new Andreas Spiess video and now you too. Nice
No problem ;-)
Really interesting indeed! 😃
Too bad it can't learn for itself... I mean, trying, colliding and taking pictures of where not to go... You know?
But other than that, it's a pretty impressive little kit! 😃
Stay safe and creative there! 🖖😊
From the thumbnail i thought it was also a self balancing robot😂
Not quite ;-)
@@greatscottlab but with AI, what couldn't you do with a self-balancing robot? That would be a good idea.
When you realise a single board computer is more powerful than your pc.
I am totally going to replicate this project and share my results!
To be completely fair, you should let it run in a place other than the corridor, because if you let it run just in the corridor you won't take advantage of any AI; it would just be remembering photos that were already taken, not being tested on new ones.
Robot: I'd like to play that guitar. Can I have a few fingers?
Thanks. The Elektor issue has been ordered :)
How about using a ToF(Time of Flight) Camera?
Dear Sir, which camera is used in this project? Can the Pi Camera (V2) work?
The pi cam v2 works with the jetson nano. There is another youtuber called JetsonHacks who has more info on how to set the camera up.
@@not_ever OK, I will try it.
The one you bought from SparkFun, are they using their own algorithm or an AI/ML library such as TensorFlow/OpenAI/etc.?
You know!?
You should make a car design smarter than Tesla's,
one that stops whenever it needs to stop
and so on!!
I am lazy right now as it's 9:43 at night, so there are grammatical mistakes, just ignore them ;)
U sure u r not also drunk.
@@xboxgamer9216 idk
I was just lazy :)
So i wrote what came to my mind
You can actually use a Raspberry Pi with TensorFlow Lite
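For reference, a hedged sketch of running an already-converted .tflite model on a Raspberry Pi with the tflite-runtime package; the model path, input shape, and float32 input type are placeholders, and the model itself would be trained and converted elsewhere.

```python
# Run inference with a .tflite model using the lightweight tflite-runtime package.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="blocked_free.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=np.float32)   # stand-in for a camera frame
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))        # e.g. scores for blocked vs free
```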
Good to know ;-)
I don't understand from where u started using that pad sir
Did this for my roomba, but I did all the training on my desktop and exported for the ONNX runtime 'cause it was easier. Still PyTorch.
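Roughly what that export path looks like, as a generic sketch rather than the commenter's actual code: build/train in PyTorch, export a fixed-shape ONNX graph, then run it with onnxruntime on the robot. The file names and the 224x224 input size are assumptions.

```python
# Export a PyTorch model to ONNX, then run it with onnxruntime.
import numpy as np
import torch
from torchvision import models
import onnxruntime as ort

model = models.resnet18(num_classes=2).eval()
dummy = torch.zeros(1, 3, 224, 224)
torch.onnx.export(model, dummy, "blocked_free.onnx",
                  input_names=["image"], output_names=["logits"])

session = ort.InferenceSession("blocked_free.onnx")
logits = session.run(None, {"image": np.zeros((1, 3, 224, 224), dtype=np.float32)})[0]
print(logits.shape)   # (1, 2): scores for blocked vs free
```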
Should have used the Google Coral board. It's made for things like this. I used it but don't like it because it's too new and doesn't have wide enough support for things like ROS and ROS packages.
Awesome, well explained and great as always 💜
And currently learning Python 😆
Jetson Nano or Raspberry Pi 4 Model B, which one is better for computer vision applications?