Thank You!🎉
We need more videos (a course) like that: creating custom environments using OpenAI Gym.
Very helpful and nice explanation.
Thanks!
Thanks, dear 🎉. Please upload more videos on multi-agent RL for robotics and multi-robot path planning with custom environments in Gymnasium.
Your explanation is really good. I have a humble request: can you make an RL training tutorial using MuJoCo Ant, but registering it as a custom environment? The gait parameter generation is quite tricky. If possible, please make a tutorial on it.
I also want this tutorial, so please, Johnny!
Thank you
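For anyone else who wants this: here is a minimal sketch of registering the built-in MuJoCo Ant under a custom id so the reward can be reshaped (e.g. for gait experiments). This is not from the video; the class name, the "GaitAnt-v0" id, and the import path (which varies by Gymnasium version, ant_v4 vs ant_v5) are illustrative assumptions:
```python
# Minimal sketch: wrap the built-in MuJoCo Ant and register it under a
# custom id so the reward can be reshaped (e.g. to encourage a gait).
import gymnasium as gym
from gymnasium.envs.mujoco.ant_v4 import AntEnv  # path varies by version
from gymnasium.envs.registration import register

class GaitAntEnv(AntEnv):
    def step(self, action):
        obs, reward, terminated, truncated, info = super().step(action)
        # Placeholder: reshape the reward here to favor your target gait.
        return obs, reward, terminated, truncated, info

register(id="GaitAnt-v0", entry_point=GaitAntEnv, max_episode_steps=1000)

env = gym.make("GaitAnt-v0")
obs, info = env.reset()
```
Gymnasium's register also accepts a callable entry point, which is why the class can be passed directly instead of a "module:Class" string.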
Hey Johnny, I was wondering if you know how to make the algorithm learn from some already-known states? My challenge is to make a DQN start learning from known states stored in a CSV file, and I am struggling because I have no idea how to do that. Is it possible?
I'm guessing if you know those states, then you would know what action to take or not take in relation to those states. For example, a pawn on a chess board can't go backwards, since you know that state is impossible. If my interpretation of your question is correct, then you might want to look into "action masking", which prevents the agent from taking illegal actions. You can start with this SB3 reference, but the concept is not limited to PPO: sb3-contrib.readthedocs.io/en/master/modules/ppo_mask.html
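Following up on that reference, here is a minimal sketch of action masking with MaskablePPO from sb3-contrib. The "CartPole-v1" environment and the mask_fn body are placeholders; the mask function must return one boolean per action, True where the action is legal in the current state:
```python
# Minimal sketch of action masking with sb3-contrib's MaskablePPO.
import numpy as np
import gymnasium as gym
from sb3_contrib import MaskablePPO
from sb3_contrib.common.wrappers import ActionMasker

def mask_fn(env):
    # Return a boolean array, one entry per action: True means legal.
    # Placeholder: allows everything; replace with your legality check.
    return np.ones(env.action_space.n, dtype=bool)

env = gym.make("CartPole-v1")     # placeholder environment
env = ActionMasker(env, mask_fn)  # wrapper that exposes the mask
model = MaskablePPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```
If your known states come from a CSV, mask_fn is where you would look up the current state and rule out the actions you already know are invalid.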
Hi, your videos are great and have helped me a lot, since you were using the latest version of Stable Baselines3... But I am facing an issue: the verbose values are not getting printed in the output. I set verbose = 1 and even tried verbose = 2, but I am not getting the desired outputs (like rewards, loss, iterations, ep_len_mean, etc.) that were printed in your videos. Can you please help me? Is this due to the custom environment I am using, or something else?
Also, the TensorBoard logs are not working...
You should try creating a new conda environment and then installing SB3 again. In my SB3 introduction video, I just ran pip install stable-baselines3[extra] and didn't do anything else special: ruclips.net/video/OqvXHi_QtT0/видео.html
@@johnnycode Hi, I will try this again... Thanks a lot for the reply and your time! Might need your help again...
Hi @johnnycode, I tried reinstalling stable-baselines3[extra], but I am not getting the monitor data, and the TensorBoard logs are also not being displayed... Is there some issue with the new version of stable-baselines3[extra]? Can you please give me the version you installed when making the video?
stable-baselines3 2.0.0
tensorboard 2.13.0
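For reference, a minimal sketch of a setup that should print the training table and write TensorBoard logs. The environment and log directory are placeholders; with a custom environment, the Monitor wrapper is what produces ep_len_mean / ep_rew_mean in the output:
```python
# Minimal sketch: console logging (verbose=1) plus TensorBoard logs.
# "CartPole-v1" and "./tb_logs/" are placeholders.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor

env = Monitor(gym.make("CartPole-v1"))  # Monitor records episode stats
model = PPO("MlpPolicy", env, verbose=1, tensorboard_log="./tb_logs/")
model.learn(total_timesteps=10_000)
# View the logs afterwards with: tensorboard --logdir ./tb_logs/
```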
Thanks, good video. Does Gymnasium support Neo Geo (SNK) ROMs? How can I make it support them?
It doesn’t support Neo Geo ROMs. I think it would be extremely hard to bridge that support.
Thanks for your video. Which resources would you advise for learning practical applications of reinforcement learning? I've been trying to implement a bot for a specific game and have to create my own environment and DQN. I'm familiar with neural nets, but good information on all the rest is so hard to find.
Sorry, I'm not an expert. I suggest inquiring at the r/reinforcementlearning subreddit. There are some very knowledgeable people there.
@@johnnycode Thank you for the answer, will do!
Thanks for the amazing video!
Could I use it for my own game made with the Godot Engine? Thanks!!
Yes, of course!
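For anyone wondering what that looks like in practice, here is a minimal custom Gymnasium environment skeleton you could point at a Godot game. The class name, the spaces, and the zero observations are placeholders; the actual bridge to Godot (sockets, shared memory, etc.) is up to you:
```python
# Minimal custom Gymnasium environment skeleton; the spaces and the
# zeroed observations are placeholders for your game's real state.
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class GodotGameEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(4)  # e.g. up/down/left/right
        self.observation_space = spaces.Box(
            low=-1.0, high=1.0, shape=(8,), dtype=np.float32
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros(8, dtype=np.float32)  # replace: query initial game state
        return obs, {}

    def step(self, action):
        obs = np.zeros(8, dtype=np.float32)  # replace: send action, read new state
        reward, terminated, truncated = 0.0, False, False
        return obs, reward, terminated, truncated, {}
```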