I didn't even know you could run Unity on Linux. Amazing video! Clear, concise, and easy.
Unity + Blender + Linux + MonoDevelop works like magic.
This is the most awesome lecture I've ever seen.
Thanks to you, I was able to handle ML-Agents 2.0.
I especially had a hard time with the "Action Buffers and Heuristic" parts.
I'm looking forward to the next lecture on "Discrete Action".
Thank you again for your painstaking effort and a great lecture!
Oh my, oh my, so precise and to the point. Great work and keep it coming! 🤩
This was great. Glad I stumbled upon your channel. Looking forward to following you in the future 👍😊😎
Well done - plain and simple. Thank you.
Hi Rachel. Thank you so much for the tutorial. When I clicked the Play button for the first time, the ball did not respond to my keyboard input. The console shows me the following message: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." Is there a way for me to fix this?
I have managed to fix the problem. For some reason, I did not add the "Decision Requester" component to the sphere object. The warning message is to be expected, since Rachel also ran into the same message in the video.
Hi Donglin, thanks for your comment and your fix! Be careful with this warning because it can turn into an error. It's usually negligible, but in some instances, you might run into versioning issues. If you are having trouble training and have exhausted every other possible option, I'd recommend updating your comms package. More info on that can be found on the ML-Agents GitHub.
In the Behavior Parameters script it says there is no model for this Brain. Is this a problem? What does it mean? Thanks for the video.
Hi, it shouldn't be a problem if you are training. When you are training, you have no pre-saved model to load; you are creating the brain. After you train a model successfully, you can assign it as the model in the Behavior Parameters and run inference.
I tried this and had to change a couple of lines in the agent .cs script:
I believe it is OnActionReceived(ActionBuffers actionBuffers) and Heuristic(in ActionBuffers actionsOut) instead of the float[] arrays.
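For anyone making the same change, here is a rough sketch of just those two methods with the ActionBuffers API from ML-Agents 2.0 (the class name, axis names, and force value are placeholders in the style of the official RollerBall example, not taken from the video):

using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class RollerAgent : Agent
{
    public float forceMultiplier = 10f; // placeholder value

    // Actions now arrive as ActionBuffers instead of a float[] array
    public override void OnActionReceived(ActionBuffers actionBuffers)
    {
        Vector3 controlSignal = Vector3.zero;
        controlSignal.x = actionBuffers.ContinuousActions[0];
        controlSignal.z = actionBuffers.ContinuousActions[1];
        GetComponent<Rigidbody>().AddForce(controlSignal * forceMultiplier);
    }

    // The heuristic writes keyboard input into ActionBuffers instead of filling a float[]
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var continuousActionsOut = actionsOut.ContinuousActions;
        continuousActionsOut[0] = Input.GetAxis("Horizontal");
        continuousActionsOut[1] = Input.GetAxis("Vertical");
    }
}

The rest of the agent (observations, episode reset, rewards) stays the same as before; only the action plumbing changes.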
Hello Juan! This is an older tutorial for the previous ml-agents version. The updated tutorial for version 2.0 can be found here: ruclips.net/video/JQt2ne1wiOA/видео.html
I'm glad you figured out the changes on your own!
@@rachelstclair9897 Still good! Cool that you have a new video.
I followed every step exactly, and when I tested it using the 'Heuristic Only' behavior type, it worked. But when I change it to 'Default', enter the command "mlagents-learn config/Roller_Ball_tutorial_config.yaml --run-id=RollerBallTest --force", and press the Play button, the RollerAgent does not move. Is there any solution?
Hi, did you set up the yaml file correctly? And do you have all the requirements installed correctly?
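For reference, a minimal trainer config in the style of the official RollerBall example would look roughly like this; the behavior name below is just an example and has to match the Behavior Name set in your Behavior Parameters component:

behaviors:
  RollerBall:
    trainer_type: ppo
    hyperparameters:
      batch_size: 10
      buffer_size: 100
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000

Any settings you leave out fall back to the trainer defaults, so a sparse file like this is enough to start training.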
@@rachelstclair9897 Yes, and I solved this problem. I'll just leave this comment for those who get the same error as me:
I downgraded the ML Agents version from 1.7.2 to 1.5.0. I think the reason is a mismatch between the Python API and the Unity API.
Anyway, thank you for your kind comment!
Thanks to your video I was able to get more than halfway through :) but I got stuck at minute 21:00 :( thanks anyway :,)
Thank you for your comments! A bit has changed since the new ML-Agents package update. I'll be posting a new video on the most recent version soon!
@@rachelstclair9897 Thank you very much, I will be looking forward to it :,)