The thing about AI is that it isn't creating an algorithm to compare the current image to the ones you fed it before. Instead, it creates a model that can generalize what "blocked" and "free" mean, even for images or places it has never seen before. That's the awesome thing about AI.
Well explained :-)
If I expect anything of an AI bot, it would be to recognize what blocked and free mean based on certain parameters, rather than on a pattern in an image associated with blocked or free (in this case the leg of the chair might mean blocked). So I assume using it in another room with different walls, chairs, etc. would completely confuse the bot. Did I get the concept right?
@@BlackXeno Not necessarily: if you feed it tens or even hundreds of thousands of images from houses, it would pretty much work in any room. In this case, if the other room has similar walls and general traits it might still work, just not as well as in the original room.
@@BlackXeno something like that, let me explain further.
Older image recognition methods had the problem you mentioned, because they were based on examples of concepts, not the concepts themselves.
Imagine that you go to a new place and there's a chair there. You have never seen that chair before, nor have you ever seen that kind of chair, but you know what a chair looks like, so you don't have to be told what it is. AI is the same: you teach the AI the essence of your problem, object, etc.
Imagine that you want your AI to tell whether something is a chair or not, so you feed it 5 images of red chairs and 5 images of green "not chair" objects. Your data will not be varied enough to tell what a chair is, so the AI will guess from very limited patterns. For example, because it always sees photos of red chairs, it will assume that all chairs are red, so green chairs will not be recognized.
The key is training your AI with very varied and clear data, so it starts to recognize your object by its core concept, and not by what differentiates two similar kinds of objects.
In conclusion, well-trained AIs generalize: they understand your problem well enough to recognize patterns in data they have never seen before, just like we do.
I hope I was clear enough! :)
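The red-chair failure described above can be sketched with a toy nearest-neighbour "classifier". Every feature and value here is invented purely for illustration; real images obviously aren't two numbers:

```python
import math

# Hypothetical 2-number "images": (colour, shape). In raw pixels, colour
# differences often dominate shape differences, mimicked here by putting
# colour on a larger scale (0-3) than shape (0-1).
RED, GREEN, BLUE = 0.0, 2.0, 3.0
CHAIR, NOT_CHAIR = 1, 0

# Biased data: every chair is red, every non-chair is green.
biased = [((RED, 1.0), CHAIR)] * 5 + [((GREEN, 0.0), NOT_CHAIR)] * 5
# Varied data: chairs and non-chairs come in several colours.
varied = biased + [((BLUE, 1.0), CHAIR), ((GREEN, 1.0), CHAIR),
                   ((RED, 0.0), NOT_CHAIR)]

def nearest_label(dataset, x):
    # 1-nearest-neighbour: copy the label of the most similar example.
    return min(dataset, key=lambda sample: math.dist(sample[0], x))[1]

green_chair = (GREEN, 0.9)
print(nearest_label(biased, green_chair))   # 0: colour misleads it
print(nearest_label(varied, green_chair))   # 1: varied data fixes it
```

A real network isn't a nearest-neighbour lookup, but the failure mode is the same: with unvaried data, the easiest separating pattern (colour) wins over the actual concept (shape).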
Thanks to all. Is there a way to ask the network whether it learned the right concept? I once read about a tank-recognition AI that failed because it associated the background grass with the enemy, which proves the need for truly varied data. I wonder if it's possible to verify this ahead of time, analytically...
NEXT EPISODE: HEART PACEMAKER 'DIY' or 'BUY'?
🤪
Yes, and of course, with an arduino nano.
sounds more like 'DIE' or 'BUY'
@@joewulf7378 it's fine electronics but no nanoscience haha
Yes
Never clicked faster on a GreatScott video
Same
Sammee
Awesome :-) Thanks
Same my dude ;D
This could be a really neat unsupervised learning project!
Actually it is supervised learning, since a human manually annotates the images into different classes, in this case either obstacle or no obstacle. Unsupervised would be if both classes were shot randomly and not sorted by a human into separate folders, and the AI model then had to figure out a pattern by itself. A VAE, for example, would do this.
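As a minimal sketch of what "supervised" means here: the human's sorting into folders is itself the labelling step, with the folder name becoming the class, much like torchvision's ImageFolder convention. The file names below are invented stand-ins:

```python
import os
import tempfile

# Build a tiny folder-sorted dataset: the human sorting IS the supervision.
root = tempfile.mkdtemp()
for cls, names in {"blocked": ["img001.jpg", "img002.jpg"],
                   "free": ["img003.jpg"]}.items():
    os.makedirs(os.path.join(root, cls))
    for name in names:
        open(os.path.join(root, cls, name), "w").close()  # empty stand-ins

# Derive (path, label) pairs from the directory names alone.
dataset = [(os.path.join(root, cls, f), cls)
           for cls in sorted(os.listdir(root))
           for f in sorted(os.listdir(os.path.join(root, cls)))]
print([label for _, label in dataset])  # ['blocked', 'blocked', 'free']
```

Drop the folder names and you only have a pile of unlabelled images, which is exactly the unsupervised setting the comment describes.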
I think he just meant it could be. That's what I was thinking too: do it with reinforcement learning instead and let it bump into a few walls. You could probably get away without adding another sensor if you can detect that you're trying to move forward but aren't, but another sensor would definitely make it better at detecting when it hits a wall.
@@georgemazzeo7226 Could be something to try. But reinforcement learning requires a lot of trial and error. I am talking about a couple thousand tries to get something driving, let alone advanced navigation. So you would need to physically put the car back at the start a couple thousand times, which is not really doable.
@@sieuweelferink6852 I was thinking you could add a bump sensor on the front; if the car hits something, it could automatically back up and maybe choose a random direction to turn (perhaps weighted by its confidence that each direction is blocked).
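That confidence-weighted turn could be sketched like this. The confidence numbers are made up; a real classifier would supply them per direction:

```python
import random

def pick_turn(confidence_blocked, rng=random.Random(0)):
    """Pick a direction after a bump, weighting each side by how
    confident the model is that it is NOT blocked.
    confidence_blocked: e.g. {'left': 0.9, 'right': 0.1} (invented values).
    """
    # Higher "blocked" confidence -> lower weight for that direction.
    weights = {d: 1.0 - c for d, c in confidence_blocked.items()}
    r = rng.random() * sum(weights.values())
    for direction, w in weights.items():
        r -= w
        if r <= 0:
            return direction
    return direction  # fallback for floating-point rounding

# If the model is 90% sure the left is blocked, we mostly turn right.
picks = [pick_turn({"left": 0.9, "right": 0.1}) for _ in range(1000)]
print(picks.count("right") / len(picks))  # roughly 0.9
```

Keeping a little randomness (instead of always picking the most confident side) helps the robot avoid getting stuck repeating the same wrong turn.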
From his first video to all of his videos, always a joy to watch and learn something new. Keep up the awesome work you do, GreatScott.
Thanks, will do!
Cool! It's amazing that people from all domains get interested in Machine Learning. Every domain of activity can benefit greatly from using these algorithms, but sometimes we underestimate the domain-specific knowledge needed to understand and solve the problem.
Absolutely!
A cool way to continue the project is implementing a pathfinding algorithm: use the obstacle avoidance to have the robot navigate to a specific location within your house.
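As a sketch of that next step, a breadth-first search over a hypothetical occupancy grid (which the robot would first have to build from its obstacle detections) finds the shortest route to a target cell:

```python
from collections import deque

# Hypothetical occupancy grid of a room: 0 = free, 1 = blocked.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def shortest_path(grid, start, goal):
    """Breadth-first search: returns the list of cells from start to goal."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

path = shortest_path(GRID, (0, 0), (4, 4))
print(path)
```

BFS is the simplest choice on a uniform grid; A* with a distance heuristic would be the natural upgrade for bigger maps.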
Maybe the next step could be adding another camera for 3D vision.. Could the Jetson board handle that?
Should be able to pull it off :-)
what about a ToF(time of flight) camera
@@greatscottlab Or lidar for 3d vision
Can attest to this: yes it can, as long as the stream is 720p at 30 fps, but even then your frame rate will be close to 1 FPS or maybe a bit higher.
Edit: Just to add to this, the board is not powerful enough to handle ML-based inference and 3D vision at the same time.
You could look into the Intel RealSense depth and SLAM (localization) cameras. They do some of the computation on the camera itself, so it isn't as taxing on the Jetson Nano. JetsonHacks is the best channel for that: he's made lots of videos specifically about Jetson + RealSense cameras and even maintains a few Git repositories. Really good stuff.
I like the fact that you face the camera toward yourself when doing the intro.
Thanks :-)
Explaining computers and Great Scott videos uploaded on the same day, what a treat!
This channel is like my second school, because I've learned a lot about electronics and stuff from here. You're awesome!
Cool! Note that the Raspberry Pi, especially the Pi 4, can do deep learning tasks: Adafruit has a bunch of demos showing it off. But I think they all use TensorFlow Lite, not PyTorch.
My friend gave me his Jetson Nano several months ago and I never had an idea of what I wanted to do with it. I'm going to try this project out. Thanks for the vid.
Good luck!
Last week I thought of suggesting an AI-based robot to you, sir, but to my surprise it actually came true 😂😂 Keep making this kind of video 👍🏻🔥
Also try using ROS (Robot Operating System), sir.
Thank you so much 😀
If the robot had an ultrasonic sensor, it seems as though it could take its own pictures and learn on its own without having to be explicitly trained.
I just got into PyTorch and saw this. Perfect timing. Thanks.
A combined approach could be to teleoperate the robot, while it also collects training data. E.g., at intervals it records the visual data and the actions being directed by the operator. This is a common method by which robots (including self driven cars) are often being trained.
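A minimal sketch of that teleoperation logging loop follows. The camera call and action list are placeholders for real hardware reads and operator input:

```python
import json
import os
import tempfile

def capture_frame(step):
    # Stand-in for grabbing and saving a real camera image.
    return f"frame_{step:04d}.jpg"

log_dir = tempfile.mkdtemp()
# Pretend these are the commands a human operator sent at each interval.
operator_actions = ["forward", "forward", "left", "forward", "right"]

# Record (camera frame, operator action) pairs while the human drives.
records = [{"image": capture_frame(step), "action": action}
           for step, action in enumerate(operator_actions)]

with open(os.path.join(log_dir, "teleop_log.json"), "w") as f:
    json.dump(records, f)

# The resulting file is a supervised dataset: images in, actions out.
print(len(records), "labelled samples recorded")
```

Training a network to map each frame to the operator's action is the classic behavioural-cloning setup this comment is describing.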
i just got the jetson nano up and running yesterday! What gr8 timing!
Nice!!
You actually made a product video, but I must admit, a very good one :D
Please prepare a video on the 'multi-modal mobility morphobot' project robot.
Now this is quality youtube.
How to build an AI robot:
Buy a kit that does it for you.
Fūcking phenomenal m8.
This is interesting. I literally came across the Nvidia SBC while researching a similar video for this channel last week.
What a coincidence. You're a smart dude, so I guess it's a no-brainer why you're always on top of your game, Mr. Scott.
Glad I could help!
@@greatscottlab I love your work. Keep pushing.
For just making an obstacle-detecting bot, why can't we use ultrasonic or IR? It would do the same, but in an easier way, I guess...
WoW! Such a cool video! Thanks for another awesome video!
Glad you liked it!
Thanks! I'm training my AI to assemble wheeled robots with mechanized hands from 3D-printed plastic parts. Hopefully I'll have a bunch of them cleaning my driveway and painting my walls.
I think an ultrasonic sensor could be used alongside the camera to make the robot notice objects and obstacles more efficiently.
So many great ideas
Good explanation
Thank you
Glad you liked it!
you never disappoint me, you're the best😁😁😁
Ideas: add a proximity sensor and a motor on the camera hinge so it can look around; add a servo arm to move small objects out of the path; change the tyres to roller treads and add suspension; add GPS so it can navigate outside on its own.
this is one of your best videos ever, thanks a lot.
Great way to start! Keep going. An AI engineer from De Montfort University, Leicester, UK.
Nice, takes all the frustration out of trying to build one by trial and error.
love you greatscott. . and also your voice
Great project, congratulations😎😎😎
That's brilliant topic to cover!
Can we implement the same AI training model for collision avoidance in drones, too?
Try using reinforcement learning instead, by adding a proximity sensor or any kind of sensor for measuring distance or bumps.
Using that sensor value, code the policy requirement for the network.
And bam, you've got a reinforcement learning robot that can handle any terrain.
(Sounds easy; hard to implement, though.)
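Roughly, the "policy requirement" becomes a reward function driven by the sensor, and that reward feeds a value update. A toy tabular Q-learning sketch, with all states, thresholds and constants invented for illustration:

```python
# Toy reinforcement-learning pieces: sensor reading -> reward -> Q update.
# A real robot would need many more states and thousands of trials.

ACTIONS = ["forward", "left", "right"]

def reward(distance_cm, bumped):
    if bumped:
        return -10.0   # hitting something is heavily penalised
    if distance_cm > 50:
        return 1.0     # open space ahead: keep driving
    return -0.1        # getting close: mild penalty

def q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.9):
    # Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[s_next][an] for an in ACTIONS)
    Q[s][a] += lr * (r + gamma * best_next - Q[s][a])

Q = {s: {a: 0.0 for a in ACTIONS} for s in ["near", "far"]}
q_update(Q, "near", "forward", reward(5, bumped=True), "near")
q_update(Q, "far", "forward", reward(80, bumped=False), "far")
print(Q["near"]["forward"], Q["far"]["forward"])
```

After enough of these updates, driving forward into walls scores low and driving into open space scores high, which is exactly the "learn by bumping" behaviour discussed above.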
Great video, true to the channel's name!! Surely an informative and interesting project! Looking forward to a new video, and thanks for such great knowledge and content. MUCH LOVE FROM INDIA😊😊
Creating the model/neural network usually takes quite a bit more computing power than running the network once it's deployed.
I wonder if it's possible to train on a computer with a powerful GPU and then deploy the model on something rather weak like a Raspberry Pi.
Deep learning is totally possible on a Raspberry Pi 3 or 4 (even convolutional neural nets) when you train the model on a PC; just search for Donkey Car and you will find everything you need to know.
Expensive training needs a GPU; just running the network doesn't.
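The split works because training and inference are asymmetric. A toy end-to-end sketch with a tiny hand-rolled logistic model; in a real project you would `torch.save` the weights or export to ONNX or TensorFlow Lite instead of pickling:

```python
import math
import pickle

def predict(w, b, x):
    # Inference is just a dot product and a sigmoid: cheap enough for a Pi.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# "Expensive" part: gradient-descent training on the desktop/GPU box.
# Invented 2-feature samples: class 1 has a high second feature.
data = [([0.0, 1.0], 1), ([0.1, 0.9], 1), ([1.0, 0.0], 0), ([0.9, 0.2], 0)]
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    for x, y in data:
        g = predict(w, b, x) - y                      # gradient of the loss
        w = [wi - 0.5 * g * xi for wi, xi in zip(w, x)]
        b -= 0.5 * g

blob = pickle.dumps({"w": w, "b": b})   # the artefact you'd ship to the Pi

# "Cheap" part: what runs on the weak device -- load weights, infer only.
model = pickle.loads(blob)
print(predict(model["w"], model["b"], [0.05, 0.95]))  # confidently class 1
```

The same pattern scales up: hours of GPU training produce a weights file, and the deployed device only ever runs the forward pass.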
Almost unbelievable how many ways today's electronics hobbyists have to create fun projects.
Btw, where do you get the jumper wires for your perfboard projects?
It's called silvered copper wire.
@@electronicguy4550 Thank you for the answer, but I also meant the wire he showed in his first essential-tools video, where he used pliers to bend it across two holes in perfboard: thick, straight wire. (I don't know how to describe it better; I'm from Czechia.)
Why can't i like this video more than once?
Cellbots is another interesting approach.
Would love to see a lidar connected to this bot, ideally running ROS; then we would be able to map the surroundings.
Yo, make more videos like this; love this video.
Excellent
Finally, DIY AI by GreatScott. Cool stuff!
I recommend the Coursera courses on deep learning. They are really helpful!
At first I didn't understand how it worked, but with the courses I realized the tremendous potential it has for many applications. It merges nicely with mechatronics and electronics too.
Do you recommend a specific one? I just started looking into AI CV and have some python experience
@@iAmCbasBoy You can choose the Deep Learning courses with Andrew Ng; those use Python.
But if you want to know the details about machine learning and neural networks, you can choose the Machine Learning course with Andrew Ng from Stanford... that last one is free.
Another awesome video by great scott 😘😘😘
Glad you enjoyed it
Great video, sir, but I have a doubt... The same can be made with an HC-SR04 ultrasonic sensor and an Arduino, which also detects obstacles, right? What's the difference? Sorry, I'm not teasing you; I just want to know.
can you make a series about jetson nano?
Any plans to delve deeper into these A.I. subjects?
You make look easy man Thank you
AI actually isn't complicated, but complex: it's a number of formulas that need to run in a certain order.
If you are familiar with neural networks and have already built a single cell, then you are on the right path.
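That single "cell" really is just a handful of formulas run in order: a weighted sum, a bias, and an activation. A minimal sketch, with hand-picked (not learned) weights:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum plus bias, squashed through a sigmoid activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# With these hand-picked values the cell behaves like a soft AND gate:
print(round(neuron([1, 1], [4, 4], -6), 3))  # both inputs on  -> high
print(round(neuron([0, 1], [4, 4], -6), 3))  # one input on    -> low
```

A full network is just many of these cells wired together, with training adjusting the weights and biases instead of a human picking them.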
If I added 2 wheels on the back that aren't controlled by any components, would that work, or would it mess up the original build?
Hey, thanks for the video. Can this be used by total beginners?
Is it possible for the robot to remember the spatial layout of an area and be taught to move around based on this remembered layout? E.g. if it mapped your house, it could move autonomously to the toilet when simply instructed to "go to the toilet".
The writing hand has a face also 😀👍
You can actually use Raspberry PI with TensorFlow Lite
Good to know ;-)
I'm your very first viewer of this video; btw, you really are a great Scott.
Hey, thanks!
@@greatscottlab Sir you are my inspiration to learn about electronics (me = 13 y/o) and you are welcome
How about creating an algorithm that captures an image labelled "blocked", and then later an image labelled "free", to balance out the dataset, and after that retrains the model with the new data? This way our AI can learn from its mistakes. This could be useful in, say, another room, where the bot has learned enough to not collide often but still collides sometimes.
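That self-correcting dataset loop could be sketched like this; the frame names are placeholders for real saved camera images:

```python
from collections import Counter

# Start from an unbalanced dataset: many "free" samples, few "blocked".
dataset = [("frame_%03d" % i, "free") for i in range(8)]
dataset += [("frame_%03d" % i, "blocked") for i in range(8, 10)]

def on_collision(last_frame):
    # The model thought this view was "free" but the robot hit something:
    # keep the frame with the corrected label.
    dataset.append((last_frame, "blocked"))

def on_clear_run(last_frame):
    # Pair each correction with a confirmed "free" sample to stay balanced.
    dataset.append((last_frame, "free"))

on_collision("frame_777")   # a mistake made in the new room
on_clear_run("frame_778")

counts = Counter(label for _, label in dataset)
print(counts)
# ...then periodically retrain the model on the grown dataset.
```

Each retraining round then sees exactly the situations the previous model got wrong, which is what "learning from its mistakes" amounts to in practice.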
Dude! You really need to get deeper into neural networks; they're amazing! I've seen them do crazy stuff, like generating music from the ground up from nothing (generating waveforms, which is a really complex task) and generating images!
Really interesting indeed! 😃
Too bad it can't learn for itself... I mean, trying, colliding and taking pictures of where not to go... You know?
But other than that, it's a pretty impressive little kit! 😃
Stay safe and creative there! 🖖😊
This is interesting, but if all you are doing is avoiding obstacles, a set of ultrasonic sensors, infrared sensors, or just bumper switches could do the same thing, and they are ultra simple to implement.
Using sensors was not the point of the video.
Great video. I have been looking at getting a Vector robot to play with the AI, but I'm finding it may not be up to speed. I'm simply looking for something like a Vector robot that can explore the area, charge itself, and, where the AI comes in, collect small objects and take them home. That's why I picked the Vector, since the base design is perfect, yet it has too many limitations. A perfect robotic mouse, basically; I am shocked I can't find anything close to it.
I don't understand where you started using that pad, sir.
I saw that you tried to create one of these for a robot vacuum using LiDAR. Maybe this would work better.
You should do a video on the PlatformIO IDE discussing its benefits and drawbacks compared to the original Arduino IDE, and then explain why you would use one over the other.
That's an awesome piece of hardware, but I have a question: can it run Crysis?
Jetson Nano or Raspberry Pi 4 Model B: which one is better for computer vision applications?
Great Scott: building an AI robot
Me: learns a lot from his video
but breaks an RC car just to get a motor
But the videos are very knowledgable
Thanks mate :-)
@@greatscottlab
Do you have a subreddit like ElectroBOOM has?
Finally... what I've been waiting for !
How much power does it draw?
Did it also work in another room it didn't have photos of?
What about using a less expensive but AI-optimised Sipeed Maixduino?
Brother, is the one you made in this video the version with the Jetson Nano or without it?
I'm considering this topic as my uni project, so please help.
Hi there! I'm just wondering where to store these pictures. Are they stored directly on the robot's computer, or can I put them on a 3rd-party service?
I've waited for such a video for a long time... Thank you, sir.
You know!? You should make a car design smarter than Tesla, one that stops whenever it needs to, and so on!!
I am lazy right now as it's 9:43 at night, so there are grammatical mistakes; just ignore them ;)
You sure you're not also drunk?
@@xboxgamer9216 idk
I was just lazy :)
So i wrote what came to my mind
Great video. How do you power the board? As far as I know it draws around 5 A, which most "normal" power banks are not capable of delivering.
How about using a ToF(Time of Flight) Camera?
Yes, the 64 GB SD card makes that kit a worthwhile investment.
I am new to programming. Is this considered hard to do?
Can we do it without using a Raspberry Pi, just sending video footage to a computer wirelessly and then giving it commands from the computer to avoid or not? I want to make a project using this concept where I can just give it a destination and it will automatically travel towards it, avoiding obstacles.
"Python... the language used by Raspberry Pi"... Sorry for being pedantic, but the Raspberry Pi can run any compiled or interpreted language that has an ARM implementation just fine. C, C++, Java, Go, Python, Lua, Perl, Nim and many more all run fine.
In my understanding of AI, it should work like this: you set the robot free to roam around, and as it goes it takes its own pictures and classifies them as "free = go" (didn't hit anything) or "blocked = no go" (hit something), learning its way as a baby learns, with a lot of bumps and falls. The stored photos can also be used later in different "unknown" environments as a baseline, and learning new places will go faster and faster as its "experience" grows.
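A sketch of that "learn like a baby" loop: roam, and let a bump sensor label each frame automatically, so no human sorting is needed. The sensor and camera calls below are invented stand-ins for real hardware reads:

```python
import random

rng = random.Random(42)  # seeded stand-in for the real world

def read_camera(step):
    return f"frame_{step}.jpg"   # stand-in for a saved camera image

def bumped():
    # Pretend bump sensor: in this simulation ~30% of steps hit something.
    return rng.random() < 0.3

dataset = []
for step in range(20):
    frame = read_camera(step)
    label = "blocked" if bumped() else "free"   # the bump IS the label
    dataset.append((frame, label))
    # ...after enough samples, retrain the model on `dataset`.

print(len(dataset), "auto-labelled samples")
```

One practical wrinkle: a bump only labels the frame *after* the mistake, so real systems usually label the frames captured just before the collision as "blocked" too.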
Oooh, first a new Andreas Spiess video and now you too. Nice.
No problem ;-)
That's cool. You could set this up on your lawnmower and have the mower go exactly around your yard and miss all of the bad areas. I'm doing it lol
Make a Rubik's cube solving robot that takes pictures of each side of the cube and runs an algorithm to solve it.
Just attach a Google Coral TPU to your RPi and you're good to go.
It improves CV inference from about 2 FPS to 35 FPS.
Humans: *AI exists* we are going to die
AI: if picture equals to "blocked" turn, else drive
After reading your comment,
GPT-3 laughed in ML
diy or buy???
I just finished making notes for GA, and as a topping I got this video in my feed.
So lucky 😂😂
Amazing!
Did this for my roomba, but I did all the training on my desktop and exported for the ONNX runtime 'cause it was easier. Still PyTorch.
Nice video! I love this subject... It would be amazing if you did more about this robot or AI!
Could you make this into an unsupervised learning variant by adding a bumper to detect impact/pain?
Is it possible to do this with a Meccanoid G15? That's the closest thing I have, and I'm going as cheap as possible.
Can anyone suggest a good final-year project for a CS student?
Dear Sir, which camera is used in this project? Can a Pi Camera (V2) work?
The Pi Cam V2 works with the Jetson Nano. There is another YouTuber called JetsonHacks who has more info on how to set the camera up.
@@not_ever OK, I will try it.
Can you please do many more interesting projects based on AI 😍🤩 THEY'RE SUCH MESMERIZING TECHNOLOGY 🤩🤩🤩💕