Awesome video! Great job! Was very happy to share this with my fellow ML-Agents enthusiasts. Looking forward to doing the Hummingbirds Tutorial. Keep up the good work!
Thank you!!
Perfect! Thank you for this. :)
I have followed along for the whole video, and I can confirm all of the steps should work for people using Linux-based systems.
I cannot wait for your next walkthrough :)
Nice! I am returning to ML-Agents after a 9-month hiatus, similar to you. This video has very good timing for me :)
It’s good to take a break from the exhaustion of 20 training runs in a row with no progress. 🤣 Looking forward to seeing what you make!
Can't wait for more awesome content on unity ml agents!
We will try to post at least one every week for the rest of the year! (and probably into next year too)
@@ImmersiveLimit that is great! Looking forward to it!! 🙏
@@ImmersiveLimit great!!!
Your tutorials are amazing
Thanks so much!
I'm excited to see the shift to PyTorch. I started with ML-Agents last spring, but the inability to use custom or different ML algorithms brought me to a halt. My ML work uses Evolution Strategies (ES) approaches to reinforcement learning, all in PyTorch, so I stopped looking at Unity. I'll have to dig into the docs and see if it works for me now. I also found that the camera-based examples did not work well (or at all), but I'm hopeful that the new attention-based image processing, especially with ES, might work better.
I look forward to your upcoming tutorials.
Your videos are simple, interesting, and easy to learn from. Thanks for the videos.
Is there any chat support where I can ask questions about ML-Agents or get help when facing an issue?
Here’s the official ML-Agents Forum: forum.unity.com/forums/ml-agents.453/
Brilliant intro, and you didn't even prepare it: impressive. Please do more example analysis, and ideally also show how to plan and set up training with a curriculum.
Thank you! More examples coming in the next video. Curriculum is definitely more complex now, so that's a good idea for a future video. 🙂
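For anyone searching later: in recent ML-Agents releases, curriculum is configured in the trainer YAML under `environment_parameters`. A rough sketch only; the parameter name, behavior name, and threshold below are made up for illustration:

```yaml
environment_parameters:
  wall_height:                  # hypothetical parameter your environment reads
    curriculum:
      - name: Lesson0
        completion_criteria:
          measure: reward       # advance when mean reward passes the threshold
          behavior: MyBehavior  # hypothetical behavior name
          signal_smoothing: true
          threshold: 0.8
        value: 1.5
      - name: Lesson1           # final lesson needs no completion criteria
        value: 4.0
```

Check the official training-configuration docs for the exact schema in your release, since this section has changed between versions.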
Great tutorial. Welcome back to ML-Agents 1.0.10. I was wondering if you could make a tutorial on how to make our own environments. Although there are many complete tutorials online, I'd love to see some smaller implementations in one video, exploring all the features given by the library, e.g. the hyperparameters. I'm trying to write my master's thesis on the subject, but I have no info on how to make a more humanoid agent like in the Walker example. How to use the body parts, etc.
I think it would be extremely difficult to fit everything into one video. There's just too much to cover to explore all features. Have you checked out the Hummingbirds course on Unity Learn? That's a full end-to-end project. Not a humanoid walker though.
@@ImmersiveLimit I just checked it. And it's an official Unity tutorial, awesome.
I understand there are many features that need to be discussed, so many videos would be better.
I was struggling with how to make an implementation with 2 enemy agents with different goals. Anyway, great vid, keep up the good work!
Yeah, Unity paid us to make it for them. 🙂 We will consider competing enemy agents with different brains for future RUclips videos though. Sounds interesting!
Would love to see this, maybe a total rundown from 0 to 100 of how to make an easy env. Or maybe make a Discord and give your viewers a monthly assignment.
Hi Adam! Great video, I can't wait to see the next one! I'm curious to see what's different now with PyTorch; the workflow seems to be the same within Unity, so I would like to know why PyTorch is their go-to choice now. Also, are you going to do a video on the recently added grid sensors? That'd be really great to see explained, since it's brand new. On another topic, have you checked Unity's Perception tools? If I'm not wrong, they do what you were doing in Blender for synthetic data generation, right?
Hi Elizabeth, thank you! There's really no difference with PyTorch unless you're writing your own neural nets and modifying the Python side of things. If you're just using ML-Agents normally, you wouldn't even know that anything had changed. I chatted with the dev team and they said they were switching to PyTorch because it allowed for easier neural network experimentation, which lines up with what I've heard about the differences between PyTorch and TensorFlow from many practitioners. Clearly they thought it was worth it, because it seems to have been quite an undertaking.

As for the grid sensor, I definitely want to try it out, but haven't had a chance to yet. I'll try to incorporate it in a project soon.

Finally, yes, I have checked out Unity's Perception tools, and they seem to do a lot of what I have done with Blender. When I tried them out a few months ago, they were pretty unstable and changing even faster than ML-Agents, so I was afraid that if I made any tutorials, they'd be out of date almost immediately. I'll almost certainly come back to it soon. Thanks for the good questions!
Already had my attempt at it, it's fun to mess around with agents.
Made it so spiders try to climb my hands using a Leap Motion controller.
🕷spooky!
Please explain how to add 2D (and 3D) convolutions to deal with obstacles. And please explain how to use curriculum learning, and how to use the type of imitation learning that leads to superhuman agents (rather than clones of the recorded examples).
Thanks for the suggestions! The only one that might be tricky is 3D convolutions, but it would definitely be an interesting challenge. 🙂
I don't see any lines representing the Ray Perception Sensor. Does anyone know if there is an option to turn them on or off visually? Everything else works and looks the same as in this video.
I'm using Unity 2021.1.11.
Cool. I also prefer using Anaconda for managing Python and ML studies, but I could not get my Anaconda environment to work with Unity ML-Agents.
I could install PyTorch and ml-agents into a new Anaconda environment, but TensorBoard was incompatible with the environment, despite downgrading to TensorBoard 1.7.0. I could play scenes with Unity-trained agents, but I could not train from the Anaconda environment. I had to resort to a native Python 3.7 install to enable ML-Agents training sessions, following the Windows-centric installation instructions for legacy Python 3.7, then PyTorch, then ml-agents. The Unity ML-Agents Anaconda instructions are now deprecated.
Thanks for the info!
@@ImmersiveLimit Hi, I have actually now got an Anaconda environment to work with ML-Agents Release 18.
I created an ml-agents environment in conda with a Python 3.7.1 base, and I used conda to install PyTorch on Windows. I then used pip to install ml-agents 0.27.0, and ML-Agents training now seems to work under the conda prompt.
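Condensed, the working setup described above amounts to something like this (versions as stated in the comment; assuming a Windows Anaconda prompt, so adjust for your platform):

```shell
# Create and activate a conda env with the Python version noted above
conda create -n mlagents python=3.7.1
conda activate mlagents

# Install PyTorch from conda, then ML-Agents 0.27.0 (Release 18) via pip
conda install pytorch -c pytorch
pip install mlagents==0.27.0

# Verify the trainer is on the path
mlagents-learn --help
```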
Bummer. The docs still seem to say that only the PPO and SAC algorithms are available, with no way to add new PyTorch-based training. Not sure why they mention PyTorch; it seems to be for internal use only.
Well, the Python training portion is open source, so there's no reason you couldn't add your own, right?
What's the best book for self-learning reinforcement learning?
Not sure! I haven't read any books on the subject.
@@ImmersiveLimit I found this book to be the best: www.amazon.com/Deep-Reinforcement-Learning-Python-distributional-ebook/dp/B08HSHV72N/ref=pd_rhf_dp_p_img_4?_encoding=UTF8&psc=1&refRID=RZBFZTMW1H0Q9MVGM6NY as it goes step by step. I found that trying to learn ML-Agents without understanding the fundamentals of RL was hurting my ability to learn, since when I was stuck I did not know what to do. I'm currently having fun reading this book and now understand why things work the way they do, or why they are not working. I'm on chapter 4, and I will return to your tutorials after I finish at least chapter 9. Thanks, but I also think you should mention prerequisites, as this subject is an advanced master's-level computer science course and people may be misled if they do not know what to learn first.
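As a small taste of those RL fundamentals, here's a minimal tabular Q-learning sketch in pure Python (a toy corridor environment invented here for illustration, not anything from ML-Agents). It shows the kind of value-update loop that books like this build up from before reaching policy-gradient methods such as PPO:

```python
import random

def q_learning_demo(episodes=2000, seed=0):
    """Tabular Q-learning on a tiny 5-state corridor.

    The agent starts at state 0, can move left (action 0) or right
    (action 1), and gets reward +1 only when it reaches state 4.
    """
    rng = random.Random(seed)
    n_states, n_actions = 5, 2
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # step size, discount, exploration
    q = [[0.0] * n_actions for _ in range(n_states)]

    for _ in range(episodes):
        s = 0
        while s != 4:                        # state 4 is terminal
            if rng.random() < epsilon:       # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
            r = 1.0 if s2 == 4 else 0.0
            # Q-learning update: bootstrap from the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, "move right" should have the higher Q-value in every non-terminal state, with values decaying by the discount factor as you move away from the goal.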
:( Now I'm stuck with TensorFlow asking for CUDA (NVIDIA), but my GPU is AMD.
So many things go wrong with the newest version.
Edit: reinstalled Python and it works after installing DirectML and adding the AMD GPU.
Hi, I am currently doing a college project where I want to use camera observations and vector observations together in one agent. I am able to get both observations using the Python mlagents env, but then I have to implement my own algorithm. I have implemented A2C, but it is not as good as PPO, and I don't have enough time to learn it. So how can I use both observations to train with the built-in PPO algorithm?
I'd recommend taking a look at the Visual examples in the ML-Agents GitHub repo. I believe Pyramids, Hallway, and 3D Ball all have visual versions. You can attach a Camera Sensor (built into ML-Agents) just like you can attach a ray perception component, and then implement vector observations as normal.
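For reference, the trainer config doesn't change when you mix sensor types; the built-in PPO consumes whatever observations the agent collects. A minimal sketch of a PPO config, with a made-up behavior name and typical default-ish values (tune for your project):

```yaml
behaviors:
  MyBehavior:               # hypothetical behavior name; must match the
    trainer_type: ppo       # Behavior Name on the agent in the Unity editor
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
```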
@@ImmersiveLimit I have tried visual observations and vector observations, but only visual or only vector, not both together for one agent. I want to use both together in one agent. Thanks for your advice, but I think I wasn't clear in explaining myself.
There's no trick to it: you add the Camera Sensor component to the agent and also override the CollectObservations function in your agent class.
@@ImmersiveLimit Wait, this will work? I thought it could take either visual or vector observations. Should have tried it. Thanks, you saved me 😊.
@@medhavimonish41 The code will work, no guarantee the agents will behave. 😉
No GPU support. What a waste.
It trains with GPU by default now.
@@ImmersiveLimit I do not understand; Code Monkey said it has no GPU support in his new ML series, and this has discouraged me from using it. I also read this on the Unity forum.
When I train with MLA R-10 using mlagents-learn, my Task Manager > Performance tab > GPU shows Cuda activity if I change one of the monitoring graphs to show Cuda. So it’s definitely doing something with the GPU.
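A quick way to confirm this from Python is PyTorch's standard CUDA query (nothing ML-Agents specific, and it degrades gracefully if torch isn't installed):

```python
def cuda_status():
    """Report whether PyTorch can see a CUDA-capable GPU.

    Assumes the 'torch' package from the ML-Agents install; if it is
    missing, we report that instead of crashing.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "CUDA not available (training falls back to CPU)"

print(cuda_status())
```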
@@ImmersiveLimit Thanks, I will reinstall. I hate using Gym because of the ugly graphics.