💬 Machine Learning AI in Unity!
Lots of people requested this topic so here it is. It's a really awesome toolset, and learning the basics in this video will enable you to build some really interesting scenarios.
This was a ton of work to make in researching/testing/recording/editing so if you find it helpful please hit the like button!
How is it possible that I've been trying to install ML-Agents all day, and now, while I'm still stuck, you provide a full tutorial? 😂😂 Can't wait to see it.
Thanks Code Monkey, you listened to my comment on your post. I love your explanation, thanks. I will be sleeping when this premieres, so I will watch it tomorrow morning as I wake up. A huge thank you. I love the simple way you explain. Please keep uploading more videos on AI and various other stuff. Love you 3000
Thanks for this. I hope to have more time to watch it later. Did you already do one of these for procedural generation?
Hi, Code Monkey! I recently bought a GTX 1650 Super; it has 1280 CUDA cores. Is this good for deep reinforcement learning?
@@skinnyboystudios9722 Anything nVidia will help but the current Release 10 is not using CUDA, I'm guessing they will add it back in the future
Quick tip: In Windows Explorer, in the folder you want to open in cmd, just type cmd in the address bar. It will open cmd.exe in that folder.
Oooh that's an excellent tip, didn't know that! Thanks!
I always press shift and then right click and select open powershell here. But that won't open cmd if you need that for some reason.
if you are having issues with installing ML-Agents, try using the following command:
pip3 install mlagents
Also make sure TensorFlow supports your Python version
Yeah, I found that out a few months back too. The only issue is that if you want to use the folder path input field soon after (or you need to copy it), it won't work until you press back to reset it to the correct path.
yo that's so good
I saw this 2 months ago and finally had the time to not just watch but follow along, and can I just say again, this is perfection. ML-Agents always seemed too clunky/obtuse to use, but this made it straightforward. This tutorial took me straight through and I am amazed at what I was able to train and get done. This just opens up a world of possibilities I can't wait to dive into, thanks again for this amazing tutorial!
Thanks! I'm glad you found the video useful!
Hi from 2023! I had some trouble installing the Python environment but I made it work.
Unity: 2022.3.14f1
Python 3.9.13
Before venv:
py -3.9 -m venv venv
After venv:
python -m pip install --upgrade pip
pip install mlagents
pip3 install torch torchvision torchaudio
pip install protobuf==3.20.3
pip install packaging
Works fine! Hope it does for you too :D
Man, I don't know how to thank you... I was trying to install it for a week or two... Today, finally, after seeing your comment, I was able to install it successfully without errors 👍👍👍🙏🙏🙏🙏🙏🙏
God, I spent an hour trying to get it to work until I saw your comment. Much appreciated.
thank you you really saved my life!!
You're a literal angel, thank you so much!
thx man, really helped a lot
How do you make so much quality content in so little time? Do you have a very talented team, or is it just you?
It's just me, I definitely work way too much heh
@@CodeMonkeyUnity Thanks a lot man. Just wanted you to know that I appreciate every bit of work that you put in your videos.
@@shreyanshsingh999 Thanks!
@@frogmasto same
@@CodeMonkeyUnity how much money do you make from everything?
Our graduation project is with Unity ML Agents and this video helped me to get the basics. Thank you!
I'm glad the video helped! Best of luck with that project!
An immortal video, good for a decade.
Concise and straight to the point as always, but at the same time very detailed explanations. Long time patron and a fan.
Thanks! Glad you found it helpful!
Awesome work, thank you so much man!
For people joining the bandwagon around July/August 2022:
- Take the ML-Agents package manager version 2.2.1-exp.1 (at least)
- When launching TensorBoard, don't hesitate to give the full path to the folder where the results are in the command you type (e.g. --logdir=C:/user/MLProject/results), instead of just typing "results"
- If you know your way around the Python environment, feel free to use Anaconda, it works just as well
(I'll update this comment if I find any other stuff that changed since)
your manner of speaking is clear, to the point and easy to follow, thank you
For everyone who gets this error:
Traceback (most recent call last):
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
Simple Solution that worked for me is: pip3 install --upgrade protobuf==3.20.0
TY!
thanks a bunch!!!Really gratefull
I FUCKING love you. This has plagued me for SO long.
I can't imagine the work you had to put into this. Really appreciate all your work, thank you CM!! ✨
I wish I could appreciate him... I can't download PyTorch, plz help me
This video was so much fun to follow along with!!! Simply awesome
After a few days of following your tutorial little by little (sorry, I'm still a newbie), I can finally understand what you teach, even though there have been many changes to ML-Agents. Thank you so much, Mr. Hugo, for the knowledge; this is very useful for someone like me who is curious about machine learning.
What an amazing tut, bro! Just everything you need to get started with this demon. As you say right at the end, when you finally manage to make it work it's just like magic! Thanks a lot for the effort! Marvelous, man!
So hyped about this video!! Awesome topic and I can't wait to see your project man!
If you are having problems changing directory in CMD (Command Prompt), it might be because the folder you are trying to reach is not on the C: drive. If this is the case, simply type cd /D followed by the full path (e.g. cd /D D:\MyProject).
Thx man 👍
Followed along this tutorial and was able to train my first agent. Great tutorial and attention to details. Thanks!
Learn from my mistake. Don't install Python 3.9 because PyTorch won't install. Use the version shown in the video: Python 3.7.9
3.8.1 works too
Super helpful, thanks!
How do you do that though? I installed the correct version (3.7.9), but I already had a newer version installed, so PyTorch won't install and I don't know how to run the correct version by default
Thanks so much for the tutorial. I've wanted to dip my toes into Unity's ML-Agents for a while now, and you helped me do that!
I can't help but watch the little guy move around the screen as he learns :)
Great work Code Monkey! You are pumping out quality content like no one else. This topic is very interesting and there aren't many tutorials on it. Very, very good work man.
Thanks!
45min video by Code Monkey, this is better than cinema movies.
I will always be waiting for your videos, so thank you, sir, for the lesson. This means a lot to me.
Amazing work. I just watched it all the way through and next time I'm going to step through with you!
Really looking forward to it!!
wth This is the video I was looking for!!!! Awesome
Very fun video! Thanks!
As an AI noob I have a question: couldn't you also speed up the process if you gave the AI a reward whenever its position gets closer to the goal? That way it would start to move in the right direction right from the beginning, since it not only gets points when it touches the goal but also when its new position is in closer proximity to it.
I think you could! Perhaps put a timer in the Agent to check the distance to the goal and add a positive/negative reward based on whether it got closer to the goal or further away since the last check. I suggest the timer because doing this continuously would be a bit heavy/overkill on the reward system.
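A minimal sketch of that timer-based reward shaping, assuming an Agent subclass with a targetTransform field like the one in the video; the interval and reward scale are arbitrary illustration values, not numbers from the tutorial:

using UnityEngine;
using Unity.MLAgents;

public class ShapedRewardAgent : Agent
{
    [SerializeField] private Transform targetTransform; // assumed field, mirroring the video's setup

    private float previousDistance;
    private float shapingTimer;
    private const float ShapingInterval = 0.5f;      // only compare distances twice per second
    private const float ShapingRewardScale = 0.01f;  // keep shaping small relative to the +1 goal reward

    public override void OnEpisodeBegin()
    {
        previousDistance = Vector3.Distance(transform.localPosition, targetTransform.localPosition);
        shapingTimer = 0f;
    }

    private void FixedUpdate()
    {
        shapingTimer += Time.fixedDeltaTime;
        if (shapingTimer < ShapingInterval) return;
        shapingTimer = 0f;

        float currentDistance = Vector3.Distance(transform.localPosition, targetTransform.localPosition);
        // Positive reward when the agent got closer since the last check, negative when it drifted away
        AddReward((previousDistance - currentDistance) * ShapingRewardScale);
        previousDistance = currentDistance;
    }
}

The movement and goal-collision code from the video is omitted here; the point is only how a small, periodic distance-based reward could be layered on top of the existing +1/-1 rewards.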
LIFE SAVER. this helped me get this running. thank you!
Yes! I love your videos so much. I recently got your tutorials on Steam and Udemy, and I can use them as a reference when I teach my son how to program in C# with Unity.
Awesome! That's great to hear!
That course is free?
@@edgysphere He has a free Code Monkey Steam app that teaches some stuff in game dev (do check it out), and he also has two Udemy courses: one is a builder-defender project course with Unity and C#, and the other one is a visual scripting course! I have the builder-defender course and it is awesome! Naturally the other course must also be good!
@@sundarakrishnann8242 links for it pleasee!
@@edgysphere The free Code Monkey Steam App is here store.steampowered.com/app/1294220/Learn_Game_Development_Unity_Code_Monkey/
The courses are here unitycodemonkey.com/courses
It was really good, it seems very interesting! Waiting to see the applications of ML!
Hello, I have Python 3.10 installed, but I am having trouble installing PyTorch. Upon writing everything you wrote at 5:35 and pressing enter, it gives the error "Could not find a version that satisfies the requirement". Please help.
If someone is having issues with the latest ML-Agents (Release 18), these steps helped me fix namespace errors.
Not sure if everything is working as intended though... the examples seem to work.
1. Edit->Preferences->External Tools->External Script Editor->Visual Studio -> I put checkmarks on everything except unknown sources -> Regenerate project files
2. Install the Input System (1.1.0-preview.3) (not sure how the active input handling method changes anything, but that can be switched in Project Settings->Player->Other Settings->Active Input Handling)
I was a little scared when I saw the length of the video (still the same problem with understanding English), but I didn't see the time go by!
Superb video, very clear. Thanks for the time spent explaining how to install the various necessary programs, this kind of configuration can sometimes waste a lot of time and motivation.
I look forward to the rest.
I haven't taken the time to recreate your example yet, and I'm wondering what happens if we put several players on the same floor.
Yup, the setup is what took me the longest during research, and I ran into tons of problems, so I really wanted to do it step by step.
If more than one player was in the same place going for the same object, they would constantly be bumping into each other while trying to get to the goal.
@@CodeMonkeyUnity That would be an amazing thing to watch
Best explanation/tutorial so far. Really got me going and hooked.
Quick tip: To easily open cmd in a folder, press Shift and right-click, then 'Open command line prompt here'
you can also type cmd into the folder's address bar
Honestly, this is way above my understanding. But I found it interesting nonetheless. Thank you
Do you ever want a new object to not be at 0,0,0? I see everyone always resetting it manually, but there's an option to just always spawn it at 0,0,0 :P
Preferences > Scene View > Create Objects at Origin :)
you are an amazing person, just amazing... you just saved my final thesis project
When I started following this tutorial I would get a lot of errors, but I managed to fix them, and then I decided to make my own ML-Agents video. It is very similar to this video, but it shows how I fixed all the errors I was getting. For those who want to follow my tutorial, here is a link: ruclips.net/video/RANRz9oyzko/видео.html. But I just wanted to say thank you, Code Monkey, for this awesome video. Without it I would not have been able to make mine.
That's awesome! I definitely need to find the time to look into MLAgents again and see what's changed
@@CodeMonkeyUnity @theashbot4097 I had some problems when I ran the test for ML-Agents, and your installation steps worked =) thanks. Will keep looking at the video =)
Thank you for all the great work and tuts..
the least we can do to support you.. keep it up.
Amazing video! Perfectly explained; it's obvious how much work went into this video! I'm impatient to see the next videos about ML!
One question: with the final brain, would you be able to manually move the target at runtime, and would the player follow that target?
Thank you! And as always, good job!
Yup, as long as it's trained with some randomness it should learn how to go towards the target and not towards a specific position
Small note: for that to happen, you need to train on almost all possible positions on the board, as the added observations do not mean that the agent is aware of its position and the ball's position and knows what actions to take in order to get there. It only means that you trained for scenario X many times and it knows how to get a positive reward; the agent will follow the Y steps it was trained on. And that, with a larger board, becomes a pain.
To better achieve the desired effect, it is better to look at the AI vision tutorial, or even at the car ML one, because those better explain how an agent can be "aware" of its surroundings and targets.
@@TheScorpionAly I'm not sure that's true... What the AI is learning here is that the relationship between all the vector values contains a pattern, and that its actions can affect the likelihood of a reward based on taking an action related to that observed pattern. The pattern is that when the distance between the objects reduces, the reward is more likely, and it learns how to take actions to reduce that distance on each cycle. I bet the second brain in this example would be able to chase the target reasonably well if you just started moving the target around, even though it wasn't trained in precisely that scenario, because the only information it has on each cycle is the two positions.
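For reference, this is the whole picture the brain gets each decision step in the scenario being discussed: the two positions fed in through CollectObservations. A minimal sketch (the targetTransform field name is an assumption, not the video's exact code):

using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class PositionObservingAgent : Agent
{
    [SerializeField] private Transform targetTransform; // assumed field name

    public override void CollectObservations(VectorSensor sensor)
    {
        // Two Vector3 observations = 6 floats, matching a vector observation Space Size of 6
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(targetTransform.localPosition);
    }
}

Whether that generalises to a moving target, as debated above, comes down to how varied the training positions were; the network itself only ever sees these six numbers.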
No doubt, cool! This is "CM"! He almost always has the most useful videos. (Almost, only because the "inventory system" in my opinion is incomplete: non-critical bugs, and a lot of things need to be completed and added yourself, not even taking into account the graphics part.) But all the other "systems" are universal and work extremely reliably )))
What issues did you have with the Inventory system?
@@CodeMonkeyUnity Minor bugs: items disappear from the inventory if they are not dropped exactly over a slot, and items disappear from the inventory crafting when other slots not related to the recipe dictionary are filled at the time of crafting. All of this was tested in the original downloaded project. I think I can fix them, but it will leave newcomers in a stupor. It would be nice if all these inventory systems were connected to each other from the start, together with the ability bar and the drag-drop-use system. I think I can do that too. But it would be nice for beginners to have the entire complete system driven by a scriptable object at the root. Try making it and putting it up for sale somewhere; I think many would be happy to buy it. You could run a vote on how many people are interested, to see if it's worth the time spent on the full system. I think it should be easier than creating a course. Or make it part of a new course on "RPG" games, but that would be a ton of work. In any case, I like your approach to the code and to systems in general, thank you for the interesting videos )))
This channel is no doubt the next Brackeys!
Are you straight-up Vanga or Nostradamus? )))
@@seb.5053 I'm sorry, but I don't know what you mean. The language barrier in action )))
@@arcday4281 I was saying I hope you have a fantastic day today! :D
@@seb.5053 I hope you have the same day)
Awesome video. I have seen many videos on ML-Agents, but in every video, instead of getting solutions, I got more doubts and errors. Thanks for such a nice step-by-step video.
Thanks! I'm glad you found the video useful!
This is really cool. I have been trying to train RL agents with AI Gym, but that is not a compelling environment, and I love game development in Unity, so this is awesome stuff. I am just a little worried that this ML-Agents project has not been updated in a while, and maybe it will not be supported for long.
But I will have a quick play sometime.
I can't see where these dislikes come from, after you commit your time and knowledge to the public for free. I respect your work, please continue sharing your knowledge.
As long as the likes overwhelm the dislikes I'm happy!
can u pls make a new vid.
You are great !
Waiting to more content from you in ML Agents :))
OK! 2023, for Release 20 of ML-Agents, the Torch install is now as follows:
pip install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0
Awesome :) My favorite theme: ML + AI in games
I am stuck at around 17:50. I keep getting this in my command prompt: "[W ..\torch\csrc\utils\tensor_numpy.cpp:77] Warning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe (function operator ())". I am also getting spammed with "Heuristic method called but not implemented. Returning placeholder actions." in my Unity console, and receiving 0 in my Debug.Log. Would really appreciate any help from anyone who has a solution! My guess is maybe something is wrong with NumPy, or maybe the AI doesn't have any input and it's a new requirement in ML-Agents. Cheers.
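On the "Heuristic method called but not implemented. Returning placeholder actions." spam: recent ML-Agents versions print that when the behavior falls back to heuristic control (for example, no trainer connected or Behavior Type set to Heuristic Only) but the Agent has no Heuristic override. A minimal sketch for a two-axis continuous agent like the one in the video; the key bindings and action indices are assumptions:

using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

public class HeuristicControlledAgent : Agent
{
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        // Fill in the same continuous actions the trained brain would normally produce
        ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
        continuousActions[0] = Input.GetAxisRaw("Horizontal");
        continuousActions[1] = Input.GetAxisRaw("Vertical");
    }
}

This only silences the warning and lets you drive the agent manually for testing; the NumPy API-version warning in the same log is a separate Python-side version mismatch between torch and numpy.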
Awesome man, Thanks for creating !
Best tutorial for beginners on ML :)
Thanks! It was a ton of work so I'm glad people find it useful!
It would be useful to know how you imported the Github release into your project as I have had nonstop namespace/reference issues trying to import it
Check if the path to your project contains spaces and/or special characters.
Some software is pretty bad at dealing with those, especially when running from a command line like this one.
If that's the case, try moving your project to another folder.
Thanks a lot, Code Monkey. It was fun getting my first ML-Agent to work. I am planning to make the ML-Agent track a moving ball and then use ray casting, instead of passing in the target location, to see how the neural network will handle that. After that I will implement the Wumpus world example found in most AI textbooks. Lots of fun with this API.
WHEN DOWNLOADING PYTHON, YOU HAVE TO DOWNLOAD THE 64-BIT VERSION!!! I ran into some issues during the installation and it was because of the 32-bit version I had downloaded
yeah if you have a 64-bit OS installed you need the 64 bit version of software :)
If you have a 32 bit system it's the other way around, although 32 bit is getting less and less common with time.
This is huge man big thanks !
You are a great teacher
Love your course on Udemy - terrific pace and getting lots of insight, looking forward to this too...
Thank you! It took me a few hours, but rlly worth it, thank you so much
Amazing. I have a couple questions though:
1. Why was pytorch needed in this case?
2. Is there a way to do this entirely on python?
yes
correct
Okay if the training environment can be set up entirely on python, how? Without pytorch? How do I get rid of pytorch, or whatever it seems to have managed to install before it failed? It threw Error no 2 for seemingly no reason whatsoever and it did take up 2 gb of space on my drive, but it clearly failed to complete whatever task it was trying to perform.
If you guys are having a problem opening up the .yaml file via the command prompt: download Notepad++ and save the file using that instead of regular Notepad, and the file will actually get saved as .yaml.
On my side, OnActionReceived doesn't take ActionBuffers but a float array. What's the difference?
Make sure you install the latest version of the package, not v1.0
@@CodeMonkeyUnity Yup, I installed ML-Agents 2.0.0 but it's a pre-release (pre-3), so I'm a bit concerned
Same problem here
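For context on that thread, a sketch of the two signatures involved; the movement code is only an illustration and moveSpeed is an assumed field:

using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

public class ActionReceivingAgent : Agent
{
    [SerializeField] private float moveSpeed = 3f; // assumed value

    // Newer ML-Agents packages (as used in the video): actions arrive wrapped in ActionBuffers
    public override void OnActionReceived(ActionBuffers actions)
    {
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        transform.localPosition += new Vector3(moveX, 0f, moveZ) * Time.deltaTime * moveSpeed;
    }

    // The old 1.0-era override took a plain float array; shown commented out for comparison only,
    // since a project uses one signature or the other depending on the package version:
    // public override void OnActionReceived(float[] vectorAction)
    // {
    //     float moveX = vectorAction[0];
    //     float moveZ = vectorAction[1];
    //     transform.localPosition += new Vector3(moveX, 0f, moveZ) * Time.deltaTime * moveSpeed;
    // }
}

The data is the same either way; ActionBuffers just separates continuous and discrete actions explicitly.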
My friend, thank you very much for this video. I always wanted to dive into ML/RL but I always thought the topic was too big. The Unity ML-Agents library really abstracts away a lot of the setup and lets you dive directly into training an AI model. Very well explained and introduced!
I am having a lot of errors when I try to download and install everything.
Thanks Mr. Monkey, I love it man. Keep up the great work.
Is it realistic to use this in a complex procedural 3D scenario? This would really help me validate and spice up my levels, given I could have an agent automatically try to pass each one.
I am going to try this soon. Did you end up trying it?
@@Stonium nope, but ill probably have to in the future. having some kind of adversial scenario perhaps with a escaper and a chaser would be cool.
@@pixboi Yes! I'll post here again if I end up doing it
@@Stonium hit me up if you figure out something
If you can't install PyTorch because of a "no doctype html" error:
Use Python 3.8.10
Don't use 3.9.10!
Best tutorial video ever! Thanks for helping me reach enlightenment:)
Caution !!! Don't create "SKYNET" by accident ! Terminators will destroy humanity )))
HA HA
Amazing! Thank you so much! happy new year aswell
Just did it! My own one.
Thank you very much. It is very fun to learn this AI stuff.
Great video, thank you. I did want to mention that I had to downgrade to the Python version you were using in order for it to work; I am sure there is an updated way, but I could not figure it out. Also, for anyone using the sample packs from Unity's GitHub: the PushBlock-with-input example was broken and I had to remove it from the folder to get rid of the compilation errors.
Thanks so much for this tutorial! It was the most fun I've had so far in programming. I made a working Pong game, with the agent getting better and better even when I upped the difficulty with smaller paddles and balls and more speed. Really fun to have a brain of your own making to play with =)
As promised, this is the first video I saw this morning as i woke up
Awesome content, now training my bots! thank you!
Cool! Thank you for the video! It's 2 years old but still relevant!
I'm glad it's still helpful! Thanks!
Excellent tutorial,I learned a lot👍
Great lesson - really opens up the mind (no pun intended). I went through this for about 2 days straight. I learned a lot, but so far I have failed to install everything. In order to get the PyTorch stack working properly you need a video card supported by CUDA. There are workarounds, but they are not currently working for me. Thank you for this demo... now I need to upgrade my really, really, really old computer.
It's been a while since I touched MLAgents but I believe CUDA is no longer used in the latest versions so you can skip that step.
@@CodeMonkeyUnity Hi, a follow-up: I got the entire stack working (minus needing CUDA) as per your demo. I just had to spend a bit of extra time on PyTorch, getting the right version to fit my hardware. My machine is very old, so I went with a CPU-only build and this solved all my problems (even if it meant I could only run half the number of environments you did). Thanks again - you really opened up a new door for me in a big way.
Thank you mate! Your tutorial helped me a lot!
A Usecase I would like to see: An AI which can do the stuff mentioned in the "all mario 64 copies are personalized" creepypasta. (Basically an AI running in the Background and analysing the player, and changing the game according to that analysis. Spawning different enemies, changing the stage geometry, rearranging the stage order, increase or decrease difficulty, and sometimes adding new stages to the game.)
im learning a way to add realtime data as objects.. so if you created a way that took data and then created some realtime data it might work
That's like suggesting the space shuttle after seeing someone building a plastic bottle rocket
Thanks for a great tutorial! I've been playing with DL and RL for the last year or so (though not very intensively), and it's one of my main objects of interest, especially RL. And it's a great pleasure to finally discover such things as ML-Agents in general, and the Unity integration in particular (why didn't I find it before?). I can't even express what a huge window of possibilities I now see for myself: a huge field of experiments I've wanted to do for a long time. And what an awesome way to create my own environments! I just saw that I can even simply create the environments in Unity but do the rest of the learning separately in Python!
I'm glad you found the video helpful!
Thank you so much for this excellent tutorial!
Wow! It's better than the tutorial on Unity Learn!
That's high praise! Thanks!
Totally agree!
Wow what an amazing lesson, ty!
Yo my man is back
I'm excited to use it and start training my own agents :D
Epic video. Left my like. Subscribed for more. Thank you! Wishing you the best!
Everyone must hit the like button right now!!!
Hoho, thanks for this man, keep it up 💪
Thanks for the amazing (and long-awaited lol) tutorial. Everything was crystal clear to me except that yaml file and why we needed it. You added this file when the model failed to reach the goal after you moved it, but seemingly it didn't solve anything.
That file contains all the parameters for the training algorithm, it tells it how fast it learns, how much randomness it adds, how much each reward affects the algorithm and so on
It's pretty complex; there are explanations of each parameter in the docs, and I'm still trying to understand what they all do.
Also if you don't define a specific config file then it simply uses the defaults
cool video just what i was looking for
Thanks
Thanks for the thanks!
Okay, so I've tried to post the same troubleshooting comment like 3 times for anyone having setup issues like I have been around @5:38.
I think it gets flagged and removed because I included a link outside of youtube?
1. Try an older version of python. E.g. I created my venv with 3.7.8, @Jaya Temara had luck with 3.8.10
2. If you get an error about the torch version being unavailable, try the full install command from the PyTorch "Get Started - Previous Versions" page. He uses 1.7.0 and cu110
3. If that fails on version again but it says "using cached" and then a url ending in whl, try your original command substituting that cached wheel url after -f
Would love another tutorial to Machine Learning in unity.
I made one about a month ago. ruclips.net/video/RANRz9oyzko/видео.html
I know this video is from 2 years ago, but please help me. When I want to write "OnCollisionEnter" (24:40), I don't get the suggestion and it doesn't work :/ My game is in 2D; can this be the problem??
If you're using 2D colliders then you should use OnCollisionEnter2D
If you don't get suggestions then perhaps your visual studio isn't linked to Unity, go to Edit - Preferences and select VS in the External Tools tab
@@CodeMonkeyUnity ty very much ! I found the problem and am able to follow the rest of your tutorial!! Very good tutorial 👍👍
Really nice video!
Not sure if this was the case 2 years ago, but there needs to be a Rigidbody on the cube for the collision to count. This took me about an hour to fix. Still a great video, thx bro
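Putting the two collision tips above together, a sketch of what that handler can look like; the tag names and reward values are assumptions, and note that Unity only fires these callbacks when at least one of the objects involved has a Rigidbody (or Rigidbody2D for the 2D variant):

using UnityEngine;
using Unity.MLAgents;

public class CollisionRewardAgent : Agent
{
    // 3D colliders, as in the video
    private void OnCollisionEnter(Collision collision)
    {
        HandleHit(collision.gameObject);
    }

    // 2D equivalent if the project uses 2D colliders
    private void OnCollisionEnter2D(Collision2D collision)
    {
        HandleHit(collision.gameObject);
    }

    private void HandleHit(GameObject other)
    {
        if (other.CompareTag("Goal"))        // assumed tag
        {
            AddReward(1f);
            EndEpisode();
        }
        else if (other.CompareTag("Wall"))   // assumed tag
        {
            AddReward(-1f);
            EndEpisode();
        }
    }
}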
very instructive :) thanks a lot
Legend says he's still not back..
If you are getting an error when running mlagents-learn --help, try "pip install protobuf==3.20.3" in the same directory and then try again.
thanks