Hello everybody! I have created a Discord Channel for everybody wanting to learn ML-Agents. It's a place where we can help each other out, ask questions, share ideas, and so on. You can join here: discord.gg/wDPWsQT
Does the server still exist?
@@soareverix yup
Can you pay some attention to the Discord server you're streaming on? There is no admin and no order, which creates complicated situations.
@@soareverix maybe
Great intro video! I'd love to see:
1) How to setup for agents/players with multiple states (switching from stealing to attacking to shooting, etc.)
2) How to have agents/players on the same team and performing the same goals together
3) How to apply these different states to AI agents in gameplay
You are the only one really explaining ML-Agents, thanks a lot, my game needed you ;)
The video was amazingly well done! To anyone following this on a Windows 10 machine: make sure you have 64-bit Python installed, not 32-bit. The command that works for me is this:
"python ml-agents/mlagents/trainers/learn.py --run-id=MyFirstAI". You first need to cd into the ml-agents repo just like the video says and then type in this command to start training. If you still get issues, you can use Anaconda to create a virtual env but Unity themselves removed that dependency.
Still not working for me; I get no message, I'm simply prompted to enter another command. I'm using ml-agents-release_6, do you think that's the issue?
Thank you for the video, it is awesome! :)
I loved the effort you placed in attaching memorable graphics to explaining the vocabulary and fields used
Assing rewards @3:06 nice.
Note for everybody watching: Two days ago ML-Agents Release 2 was released. Don't worry, the latest release just contained bug fixes, meaning you can still follow the tutorial without doing anything differently. The naming may be a bit confusing because Release 2 sounds like a big thing but it isn't, they just changed their naming scheme. I would always recommend using the latest release version! Enjoy! :)
Oh man, Release 2 was a year ago, but it's still the latest verified version that can be downloaded from the package manager.
Were there any further releases? Some may consider it cringe, but I'm asking ChatGPT to clarify certain methods that I can't understand from the documentation, and it told me that the newest version of ML-Agents takes ActionBuffers parameters instead of float[] parameters. This could obviously just be ChatGPT being wrong, but this video and the tutorial I am following are 2 years old after all.
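For what it's worth, that matches the direction the API went: newer packages pass actions wrapped in ActionBuffers rather than raw float arrays. A minimal sketch of the difference (exact version boundaries may vary, and the class name here is just illustrative):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class MoveAgent : Agent
{
    // Older API (around the release shown in the video):
    // public override void OnActionReceived(float[] vectorAction)
    // {
    //     float move = vectorAction[0];
    // }

    // Newer API: actions arrive as ActionBuffers, split into
    // continuous and discrete segments.
    public override void OnActionReceived(ActionBuffers actions)
    {
        float move = actions.ContinuousActions[0];
        transform.position += Vector3.right * move * Time.deltaTime;
    }
}
```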
Hey,
Great tutorial! I'm still new to both Unity and ML Agents, but just finished training my first agents on the 3D ball game.
Just want to point out, at the time of watching the video, the ML Agents repository was on release 12 and I had an extra problem in addition to the "trainer_config" problem (thanks DeJMan).
I'll do my best to describe it below:
I was getting 254 errors - most of them were "CS0234" - essentially the C# script couldn't recognize any of the ML Agents related stuff (I think - still new so not 100% sure).
The solution that worked for me was to go to Package Manager, click on the + in the top left corner, and select "Add package from disk", then go to the ML-Agents repository downloaded from the GitHub link in the description, then to the folder "com.unity.ml-agents", and add the file "package.json".
Then, do the above again but for the folder "com.unity.ml-agents.extensions" - again the "package.json" file.
This solved it for me. Hope it helps someone else.
You are an absolute legend! How did you figure that out? It cured the problem I was having. Hopefully this comment I'm writing boosts that comment up in the list so more people can see it. Thank you!
@@pixelpowergo1607 Thanks! Glad it helped someone :)
For everyone having problems importing ml agents folder into the unity project:
1. If you get errors like "The type or namespace 'mlagents' could not be found": reimport the ML-Agents package, or import it if you hadn't already done so.
2. If you get errors like "The type or namespace 'mlagents.actuators' could not be found" after already having imported the ML-Agents package through the package manager: try importing it manually from the GitHub cloned folder (ML-Agents release 8), /com.unity.ml-agents/package.json
Jorge Barroso, I solved the second problem by just importing the ML-Agents 1.5.0-preview version.
@@lpthurler Yeah, that's what I meant; for some reason I couldn't install the 1.5.0 version in the package manager and installed it manually. I didn't know it was only me.
which unity version did you have?
@@gagamagaj2136 2019.4.3f1
Alternative solution: if you have the ml-agents package from the package manager (which is currently a bit older than the latest release: 1.0.5), then you could just download an older release from GitHub with the matching version (in my case it was 1.0.2) and use its examples folder (projects/assets/mlagents) - this helped me with issue "2." above.
This video was a good start for me. I got the demos working... Now the hard/fun part. Thank you.
I can only reiterate the same feeling as everyone else around here! Thank you so much for those videos! They're great and so are you!
Hi Sebastian! Just wanted to share a very slight correction regarding stacked observations:
You stated that the stacked observation setting determines how many vector observations are gathered before sending them off to the agent; this is close but slightly inaccurate. It will still gather a new vector observation every time a decision is requested, and it will simply add the new vector to the running "stacked" history, while bumping the oldest observation out of the stacked vector array. For example, if you have stacked observations set to two and you are sending only a velocity reading to the agent, on the first decision the stacked vector might be (0.567f, 0f), on the second decision the stacked observation might be (0.897f, 0.567f), the third could be (0.924f, 0.897f), and so on...
Hey! Thanks for the correction, I wish I could change the Video 😄
@@SebastianSchuchmannAI It's a really well made video lol
@@SebastianSchuchmannAI I think there may be some ways to do live editing or re-uploading. (people often need to change videos to make corrections or remove copyrighted content). Def look into it when you get a chance! (there's also the popup box option). Not a really big deal though; that one error is the only imperfection in an otherwise perfect video!
Does this mean if I only pass in positions then velocity won't be calculated?
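To make the sliding-window behaviour described above concrete, here is a minimal sketch of an agent that observes only one velocity value (the Rigidbody field and class name are made up for illustration); the stacking itself is configured on the Behavior Parameters component, not in code. And regarding the question just above: stacking positions does not compute a velocity for you, it only gives the network consecutive values from which it could learn velocity-like information on its own.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical agent that observes a single velocity reading.
public class VelocityAgent : Agent
{
    Rigidbody rb;

    public override void Initialize()
    {
        rb = GetComponent<Rigidbody>();
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // A new value is collected on EVERY decision request.
        // With "Stacked Vectors = 2" in Behavior Parameters the policy sees
        // [current, previous], e.g. (0.897f, 0.567f) - a sliding window,
        // not a buffer that waits until two readings have accumulated.
        sensor.AddObservation(rb.velocity.x);
    }
}
```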
Same here, really can't wait for more!! You are amazing! Your videos are amazing!👏
Your video is amazing, that really just saved me 170 years of learning bro. Great work!!
You are really amazing....
That was a friendly introduction to Unity ML-Agents.
You explain things in a very simple way, which makes it easy to clearly understand the concepts behind ML-Agents.
Great work, guy
keep it up Sebastian! this is the next chapter my friend! cheers from New Mexico!
Hi Sebastian
Really cool video, just started to play with ML-Agents myself and have trained all of the example ones.
But I never dug into how it works; your video explained it nicely.
You have gotten 1 new subscriber :)
I have never seen such in depth explanation anywhere. Great.
When I install the ML-Agents package, I start to get a lot of errors saying that 'Actuators' does not exist, and the same with 'ActionBuffers'.
Does anyone know how to solve this problem?
When can we expect an updated tutorial? :)
9:19 why did the Unity team write gameObject.transform 🤨
They should just write transform.
Assing rewards is my favorite type of reward.
Anyone else get a stack of compiler errors at Step 5: "Assets/ML-Agents/Examples/PushBlockWithInput/Scripts/PushBlockWithInputPlayerController.cs(109,31): error CS0246: The type or namespace name 'IInputActionCollection2' could not be found (are you missing a using directive or an assembly reference?)" ?
Nicely done. I look forward to more videos on ML-Agents
ML-Agents release 7 has a code problem: Assets\ml-agents-release_7\com.unity.ml-agents\Runtime\Grpc\CommunicatorObjects\UnityInput.cs(134,28): error CS0115: 'UnityInputProto.ToString()': no suitable method found to override
Nice explanation! Thanks 🙌
You definitely deserve more subscribers, my dude. This was a fantastic tutorial!
Thanks ❤️
Nice tutorial, but it didn't work for me on a Windows machine.
Please make a tutorial on how to install it properly on a Windows machine.
This video was amazing! I'm watching it for the 3rd time... and now I'm about to start the training.
Why doesn't pip3 install mlagents work on my PC? Is it because it's Windows?
10:23 That *simple* pip3 install mlagents took me over 2 hours
because I didn't know that mlagents relies on something named "PEP 517" and "H5PY",
which themselves rely on Python 3.7, so since I had 3.8.2 it couldn't be installed,
and I had to use the preinstalled 3.7 in a separate environment setup.
So, yeah, it took over 2 hrs and I'm not sure whether this is going to work at all or not :/
1. What is the use of Academy.Instance.EnvironmentParameters? Why do we use ResetParameters in the Academy instead of manually putting those values inside OnEpisodeBegin?
2. How exactly does setting values in the "actionsOut" array in the Heuristic function call the OnActionReceived function with those values? The Heuristic method does not return any values.
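A sketch of both points (written against the newer ActionBuffers signatures; in older releases the same idea applies with float[] arrays, and the parameter name "target_scale" is just an example): environment parameters exist so the training side (e.g. curriculum settings in the YAML) can change values between runs without touching C#, and Heuristic doesn't return anything because you write into the buffer the runtime hands you, which is then forwarded to OnActionReceived when the Behavior Type is set to Heuristic Only.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class ExampleAgent : Agent
{
    public override void OnEpisodeBegin()
    {
        // 1) Values like "target_scale" can be set or overridden from the
        //    training configuration (e.g. a curriculum), so they are read
        //    here instead of being hard-coded in OnEpisodeBegin.
        float scale = Academy.Instance.EnvironmentParameters
                             .GetWithDefault("target_scale", 1f);
        transform.localScale = Vector3.one * scale;
    }

    public override void Heuristic(in ActionBuffers actionsOut)
    {
        // 2) No return value: fill the buffer that was passed in.
        //    The runtime hands this same buffer to OnActionReceived.
        var continuous = actionsOut.ContinuousActions;
        continuous[0] = Input.GetAxis("Horizontal");
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        float move = actions.ContinuousActions[0];
        transform.position += Vector3.right * move * Time.deltaTime;
    }
}
```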
THANK YOU SO MUCH FOR THIS VIDEO! it helped me so much
Thank you! It is a very informative video regarding on Unity ML-agent
Literally 6 months later you can't even run the project anymore... there's nothing but compiler errors because Unity updated everything...
Do you know any fixes? I've been trying to fix it for 2 days.
This video is just so awesammmmmmeee thankyou
If you want to run with a specific config file, use this command: mlagents-learn config/ppo/CustomConfigNameHere.yaml --run-id=MyFirstAI
This will let you change the number of max steps so that your model can continue to learn indefinitely and you can tweak the other values to figure out which ones work best.
If you want to continue training a specific model, you can do mlagents-learn --run-id=MyFirstAI --resume. This will let you pause your model and later get back to training.
(Please note that on my computer, it writes out two dashes as -- instead of - -. Make sure to use two dashes for these commands!)
This video is incredibly helpful!! You are amazing!
For everyone having problems installing ml agent via Python:
1. Check that you are using 64-bit Python.
2. Remember to run it in a shell like Windows PowerShell, Windows cmd or Anaconda, not directly in the Python exe. If you see this ">>>", you are inside the Python interpreter; type quit() or press Ctrl-Z + Return to exit to the shell. If the program closed when you did that, you were not in a shell.
3. If you are using Python 3.9 or higher (it probably gives you ERROR: Exit status 1...), try installing Python 3.8 and ensure you reassign the paths for Python, or delete Python 3.9 so it automatically uses Python 3.8.
4. Try updating pip with "python -m pip install -U pip" on Windows or "pip install -U pip" on Linux or macOS.
Amazing ! I would like more videos about the Machine Learning Framework, please !
great info
very good video, here is a random comment to help you build your channel. good luck
When installing ML Agents in Unity's package manager, if ML Agents cannot be found, try selecting Unity Registry.
You just earned a new subscriber.
Hi,
Thank you for a great video.
Can I ask where I can find the text you show at 8:20?
All the best
Thank you... I really like the ML-Agents project. I hope you post more tutorials on it.
When I try to run mlagents-learn config/trainer_config.yaml --run-id=MyFirstAI it says 'mlagents-learn' is not recognized as an internal or external command,
operable program or batch file.
How would I fix this?
I've been struggling with this error for over 2 hours now. No solution from Mr Google could help me. Have you figured out a solution for this?
@@UitzUitz nope I gave up
@@maarten9222 This helped me get it running forum.unity.com/threads/mlagents-learn-is-not-recognized-as-an-internal-or-external-command-operable-program-or-batch-fil.909716/
Maybe it also helps you.
This is great. I'd love to see you start a new simple scene using an ML agent, code and all.
If you have an error while installing mlagents in Python on Windows, it may be because you are using 32-bit Python and not 64-bit.
Very useful and informative, thank you
Thank you, this type of video is so useful for me; I'm going to start an ML traineeship next month. Can you make a video about the most important categories of machine learning and how to use them within ML-Agents?
I'm new to Unity. I was planning to develop ML models with TensorFlow and Keras, but for commercial purposes I have to use Unity, and I'm a little confused about which one is better to work with. Which one is more commercial: Unity, or working directly with Python AI frameworks like TensorFlow and Keras?
Man, this is huge. Thanks so much
What a clean video, can’t wait for more
Great video, thanks
Amazing video! Thanks a lot!
@3:19 *Assigning Rewards
Noooooooooo
Sebastian, great video. I followed the steps and everything works except the last mlagents-learn command. Still figuring it out. You were a little fast, and your Step 5 / Step 6 on-screen display was overlapping your mouse clicks, but I could follow by pausing and replaying... so not a big deal. Keep up the good work.
Very interesting, definitely something I will mess with in the future!
Hello, are the commands different for Windows, and do I have to type them in cmd? I'm really confused because the install didn't work, and the console says that no such command exists.
Thanks for the version 1.0 video. It would be nice to see a video where you build one example from scratch, take time to explain the settings and parameters in Unity, and use graphs to check how the learning behaves when you change settings.
This is my Plan !
Hey Sebastian, sorry for the stupid questions, but did you ever make a video/post about what extensions you use with vs and unity? I'm especially curious about the inline parameter hints, and I couldn't find any help on the web and on your discord. But great video, I enjoyed it a lot :)
Best regards from Berlin ;)
what's the behaviour script/tree GUI at 6:17?
I've never seen it inside of Unity
It's a third-party asset: Behavior Designer
I am getting the error message: module 'torch' has no attribute 'set_num_threads'
You are a great guy, keep up the good work
This opens so many doors for everybody
great! thank you so much
Nice video Seppi!
Heyo! Thanks ❤️❤️❤️
Keep going.. I left for a bit too, and when I came back I found out everything has changed a lot. Your vids were the refresh I needed... Maybe we can collaborate sometime :)
Thank you very much. You have a great channel, I can probably learn a lot from you. Always open to collaboration!!
I wonder, can you train an agent with one behaviour and then later train another behaviour? For example, if I have a shooter game and train an agent to walk and shoot, but later I want my agent to be able to roam the environment, can you do that?
Hey Sebastian. Thank you for this video. I would like to ask if there is any way to extract the data that our agent is currently gathering, for data processing. I know that the Python API is doing that for us behind the scenes, but in the case that we wanted to code our own algorithms to work with Unity, that would be helpful.
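For driving the environment with your own algorithm there is a lower-level Python API (mlagents_envs), but if the goal is just to dump what the agent observes and does on each decision, one option on the C# side is something like the sketch below (the Rigidbody observations, file name and class name are only placeholders, and it assumes at least one continuous action):

```csharp
using System.IO;
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical agent that mirrors what it observes and does into a CSV
// so the data can be analysed outside of the trainer.
public class LoggingAgent : Agent
{
    StreamWriter log;
    Rigidbody rb;

    public override void Initialize()
    {
        rb = GetComponent<Rigidbody>();
        log = new StreamWriter(Path.Combine(Application.persistentDataPath, "agent_log.csv"));
        log.WriteLine("velX,velZ,action0");
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(rb.velocity.x);
        sensor.AddObservation(rb.velocity.z);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // One row per decision: the observations plus the chosen action.
        log.WriteLine($"{rb.velocity.x},{rb.velocity.z},{actions.ContinuousActions[0]}");
    }

    void OnDestroy()
    {
        log?.Close();
    }
}
```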
Great video, man. With the help of this, I trained an AI to balance a pole in 3D. I even made a video. Can I ask you a question? How do you record videos from the Unity Editor? (I used an asset from the Asset Store called Video Capture. Do you use the same?)
There's a built-in record feature if you're using Windows: with the Game window focused, press Win+G.
How could you train the AI while in game? My game idea requires the agent to learn from the player. Is this possible?
@CraftyClawBoom If you want the agent to learn from the player, don't do reinforcement learning; instead look into imitation learning. You can learn it from a channel called Code Monkey, but yes, you need to know the basics.
I got it working! Quick question though: I've been running the 'walker' scene and I still feel like the agents are unbalanced and could use more training after it automatically stopped. Is there a way I can add to their training time and improve performance?
(Just had to edit the config file. Got it!)
fantastic tutorial!!
Hey! I am about to start working on my Game Design bachelor thesis. ML is pretty much the only part of programming I haven't really explored a lot, though I have an understanding of the concepts behind it (Hidden layers, bias and weights, fitness function...), which is why I want to explore ML with my bachelor thesis, in the context of game design.
I haven't really settled on a topic, though I'd love to turn the training process itself into some sort of game. I got a few ideas regarding this:
1) Let the user perform the selection which would otherwise happen by evolutionary algorithms (basically choosing which AIs to keep for the next iteration, i.e. "manual evolution");
2) Let the user change the rewards => Turn it into an "AI sandbox".
I am just diving into the topic and stumbled upon Unity's ML-Agents. Do you know if my two options would be applicable for the ML-Agents framework? My main concern is that I could be locking myself into the framework too much and I lose control (or might need a different training algorithm which is not included), not allowing me to achieve my goal.
Any thoughts on this? It seems like you have a better understanding than I have on ML-Agents.
Great video by the way, left a sub right away :D
I think you have a valid concern. The problem with having the users change the reward is that the training process is separate from the engine and therefore separate from the built application. If you want a nicely packaged application like Unity can deliver, only inference is possible right now. You would need to create your own package that somehow includes the Python and the Unity part, which is probably possible in some way. Additionally, evolutionary algorithms are not included in ML-Agents, though implementing one isn't too challenging if you are not striving for maximum efficiency. In summary, I would recommend prototyping with ML-Agents because it's easy to work with, but be careful not to invest too much.
Hello Sebastian !
Greatly appreciated the video.
For a future video or tutorial series can you please show how we can write the python script that allows us to train the ml-agents ?
I would prefer if you covered both the TD3 and PPO algorithms, just to see which would work better.
Congratulations on a job well done with this video !
Hope to see more !
Great Video!! Awesome!
Awesome, Thanks.
FYI I am getting the error right at the end after running the last command, Please help:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/bin/mlagents-learn", line 8, in
sys.exit(main())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 250, in main
run_cli(parse_command_line())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 49, in parse_command_line
return RunOptions.from_argparse(args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mlagents/trainers/settings.py", line 871, in from_argparse
key
mlagents.trainers.exception.TrainerConfigError: The option default was specified in your YAML file, but is invalid.
Great Video
Awesome.
You mentioned complex topics that need to be mastered to fully grasp machine learning. Can you list out some terminologies?
I was more referring to the ML-Agents framework, but of course it is even more true for machine learning in general. I think hyperparameter tuning is one of those complex topics where an understanding of the algorithms (PPO/SAC) helps a lot, as well as curiosity, GAIL, and of course all the basics of machine learning and reinforcement learning in general. This course by OpenAI is a great resource for that: spinningup.openai.com/en/latest/spinningup/rl_intro.html But I have in no way mastered it, so take my advice with a large grain of salt.
Really cool!!!
Shame it doesn't work for me; for some reason I can't install mlagents properly, which won't let me run the YAML file. I'm getting some EnvironmentError issue when installing TensorFlow.
Great video! Liked and subscribed,
but in the macOS terminal, after installing Python 3.6.8,
running mlagents-learn
gives the error message: command not found.
Please help
This doesn't work any more; i.e. loading the example assets causes an error.
I tried a bunch of package versions (including preview versions), both through importing directly from the package manager and manually importing the corresponding .json file. However, I still cannot run the 3D ball demo due to the following compiler error: Assets\ML-Agents\Examples\Match3\Scripts\Match3Agent.cs(5,22): error CS0234: The type or namespace name 'Extensions' does not exist in the namespace 'Unity.MLAgents' (are you missing an assembly reference?).
Any ideas what could be wrong/what else to try?
(using cloned repository) You now need to install the mlagents package and the extensions package separately. From Unity package manager, click + (top left of window), then select add package from disk, navigate to \ml-agents-release_12\com.unity.ml-agents.extensions and select the package.json
Do we also need PyTorch?
This is great!
This is very good! Could you maybe do a video on how to make the observation and rewards? I do have experience with NN and ML. It would be super interesting if you could make your own simple AI by setting up your model, for creating one from scratch.
The last command 'mlagents-learn config/trainer_config.yaml --run-id=MyFirstAI' says that mlagents-learn is mistyped or could not be found; any ideas?
Try this: "mlagents-learn --run-id=MyFirstAI"
If that doesn't work, do this first: "pip3 install --upgrade mlagents"
I was really into this and super eager to subscribe, but this 12-minute video just didn't cut it for me, because it all felt like a high-level overview with basic explanations, which is not what I was expecting to learn when I clicked on a video called "Training your first A.I.". The video didn't walk me through actually creating a new agent and behavior and then training the AI to do that task, as the title would suggest. When you upload a more complete tutorial I will like, subscribe, hit the bell icon and tell all my friends to do the same.
this tutorial has more info on how to do that. ruclips.net/video/RANRz9oyzko/видео.html&t. I made it a month ago so everything is up to date.
it says invalid syntax when I type "pip3 install mlagents"
Have you installed Python? Did you check the bottom box in the installer that says something like "Add Python to PATH"?
@@Kakanics Have the same issue. Uninstalled and reinstalled it with "Add Python to PATH" checked but still get the same syntax error. Using Python 3.7.
@@lasko8628 What CPU are you using? Does it support AVX?
@@lasko8628 try pip install mlagents instead of pip3
@@Kakanics Intel Q6600, I don't know. I'll try to install it on my MacBook and see if it works. With pip install mlagents I get the same error.
This was so cool! I want something a little different. I want to make an AI character who can look at a scene, then "close its eyes" and try to recreate the scene it saw from memory. Is ML-Agents capable of something like that? Or is it only capable of moving characters around?
Sounds fun! It might be possible to do with ML-Agents, it's hard to tell without having tried it. My gut feeling says this sounds more like an Autoencoder problem, it's a type of neural architecture that might be suited for this kind of task. I would say in general that it's less of a Reinforcement Learning problem, so I would look into PyTorch or Tensorflow to implement an Autoencoder. Good luck!
@@SebastianSchuchmannAI Awesome thank you! I'm new to the AI world and your tutorial was really easy to follow. Keep up the great work!
Do you have an installation tutorial for Windows 10? I followed the doc instructions but still in vain. The error is "DLL load failed while importing _pywrap_tensorflow_internal", "Failed to load the native TensorFlow runtime". Any idea?
I had the same issue, but it was because I had a folder named ml-agents-release_2 inside the ml-agents folder, so I had to run the command "cd desktop/ml-agents-release_2/ml-agents-release_2" to get it to work.
@@alexhammer2802 I finally know the reason. My CPU is too old; current TensorFlow doesn't support it. :(
Amazing
Hi, I'm using Windows 10. I opened the Python I'd just installed and entered the pip3 install mlagents line, and it complains about invalid syntax. What did I get wrong? I'm familiar with Unity and C# but have never touched Python. I can't install anything by following the video; any ideas?
The documentation of ML agents is what you need to look at for troubleshooting. For me, things started working once I used a virtual environment, which they explain how to set up.
Here is the documentation.
github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started.md
I am getting following error - "ERROR: Could not find a version that satisfies the requirement tensorflow=1.7 (from mlagents) (from versions: none)"
Same here
Can anyone help?
@@mr.puffin3947 you're either using python 3.8 or 32bit python
It looks like this is Reinforcement Learning. I am new to this, please correct me if I am wrong. If it is RL, then where is the State?
Yup it's RL. So where's the state? I would say there is the environment state which encapsulates everything about the environment and its internal, private state (The game objects in the scene, their logic and so on). Then there's also the agent state, which is a partial representation of the environment state, so everything the agent observes about the environment via its sensors like raycasts or cameras. Does that make sense?
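To put that distinction into code: the scene holds the full environment state, while the agent's state is only whatever CollectObservations (or camera/raycast sensors) exposes. A small illustrative sketch, with made-up field names:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class PartialObservationAgent : Agent
{
    public Transform goal;            // part of the environment state
    float hiddenWindStrength = 0.3f;  // also environment state, but never exposed

    public override void CollectObservations(VectorSensor sensor)
    {
        // The agent state: a partial view, here just the relative goal position.
        sensor.AddObservation(goal.position - transform.position);
        // hiddenWindStrength is deliberately not observed, so from the
        // agent's point of view the environment is only partially observable.
    }
}
```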
when I run pip3 install mlagents it flags install as invalid syntax
Make sure you have python installed and checked the box "Add to PATH" when installing.
@@SebastianSchuchmannAI Thank you for your reply. Not many YouTube tutorial makers do that. I uninstalled and reinstalled Python but still get the same problem. No worries, thank you anyway.
You actually should run "pip3 install mlagents" from the Command Prompt in Windows, not from the Python interpreter.
@@Notion615 I tried that, same error.
Can someone give me examples of when the local installation is needed? I mean: pip3 install -e ./ml-agents-envs and pip3 install -e ./ml-agents
Thanks for your video. I have some questions about how to install mlagents. You just installed mlagents via pip install mlagents, but from what I've seen on another website, it says to 1) install Anaconda, 2) set up an Anaconda environment with Python 3.6, 3) go to the ml-agents folder in the Unity SDK and install with 'pip install -e .'. Is your approach okay?
Oh, I forgot a stage between 2) and 3), which is creating a new env using 'conda activate "~~"'.
I just imitated your approach but my computer shows a a lot of errors.
C:\Users\ymc\Desktop\ml-agents-release-0.15.1>mlagents-learn config/trainer_config.yaml -run-id=whatever
Traceback (most recent call last):
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\ymc\anaconda3\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "c:\users\ymc\anaconda3\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\ymc\anaconda3\lib
unpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\ymc\anaconda3\lib
unpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\ymc\anaconda3\Scripts\mlagents-learn.exe\__main__.py", line 4, in
File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 12, in
from mlagents import tf_utils
File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\tf_utils\__init__.py", line 1, in
from mlagents.tf_utils.tf import tf as tf # noqa
File "c:\users\ymc\anaconda3\lib\site-packages\mlagents\tf_utils\tf.py", line 3, in
import tensorflow as tf # noqa I201
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in
from tensorflow.python.tools import module_util as _module_util
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 50, in
from tensorflow.python import pywrap_tensorflow
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 69, in
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\ymc\anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\ymc\anaconda3\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "c:\users\ymc\anaconda3\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed:
Failed to load the native TensorFlow runtime.
See www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
why does it give errors like that? Do you happen to have an idea?
@@Uebermensch03 I had the same error.
Don't know about your other problems, but I resolved this by installing another version of TensorFlow via pip.
In the end I used TensorFlow 2.0.0.