Hell yeah, exactly the kind of video I needed! Making a university project with ML agents, and your tutorials are by far the best ones I've found ♥
I am so happy these helped.
Your videos are awesome!!!
I am not a native English speaker, but your videos are good and I need them.
I am glad this helped you!
Great job. Keep going
Thank you! What do you think I could improve in my next videos, and what did I do well?
@theashbot4097 You'll get better over time, and when you get more followers you can put more effort into creating great thumbnails. I would also try YouTube Shorts to reach a wider audience, and the editing could improve. I'm also trying to grow my YouTube channel, but it's tough. These are the things I'm trying.
@@mikhailhumphries I did not know my thumbnails were not the best, so I will see what I can do for my next videos. I tried to make a Short for my last video, but I could not make it look good, so I just gave up. You said you are trying to grow your YouTube channel? You only have one video, posted two years ago?
@@theashbot4097 I really appreciate this tutorial. The only thing I'd ask you to change is to smooth out the sound edits so everything is at one volume. I have a sound sensitivity, and the slight changes in noise level at your edits are distracting. Other than that, very grateful for this! Subbed, and thanks!
@@Bands_Flip I recently learned that the video editor I use has an audio leveler. The only problem is that it makes the loudest part of every edit the same volume, which might cause some problems, but I will see if I can use that and then manually adjust it when I start making more videos (hopefully soon).
I have always had a question about ML-Agents: agents randomly select actions at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
You technically can, but it is not a good idea. The agent will get confused, because the policy said to go up but, because you intervened, it went down. Does that make sense?
@@theashbot4097 If the agent is about to hit the wall and I intervene to make it go down, is that unreasonable? I think it should be feasible.
@@keyhaven8151 If you make the AI go down, it will not learn from that. The AI will not know why it survived instead of dying.
What you could do is incorporate imitation learning like GAIL or behavior cloning. There are tutorials on YouTube that show you how; a rough config sketch is below.
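Something like this in the trainer .yaml for behavior cloning (an untested sketch; the behavior name, demo path, and numbers are placeholders you would swap for your own):

    behaviors:
      MoveToButton:                          # placeholder behavior name
        trainer_type: ppo
        behavioral_cloning:
          demo_path: Demos/ExpertDemo.demo   # placeholder path to a recorded demo
          strength: 0.5                      # how strongly to imitate the demo
          steps: 150000                      # fade the imitation out over training

behavioral_cloning sits at the same level as the other trainer settings; GAIL is configured under reward_signals instead.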
Friend, do you know if it is possible to open Unity's .demo files in Python? I have searched for documentation and cannot find anything like that; I tried some libraries, and apparently they were discontinued. Do you have any idea about this?
Yes, it should be possible. I do not know the exact steps, but I do know that it should be possible.
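Maybe look at the demo loader inside the mlagents Python trainer package; this is an untested sketch, and the module path and function signature may differ between versions:

    # Untested sketch: read a Unity ML-Agents .demo file from Python.
    # Assumes the trainer package is installed (pip install mlagents).
    from mlagents.trainers.demo_loader import demo_to_buffer

    demo_path = "Demos/ExpertDemo.demo"  # placeholder path
    behavior_spec, buffer = demo_to_buffer(demo_path, sequence_length=1)

    print(behavior_spec)           # observation/action spec recorded in the demo
    print(buffer.num_experiences)  # number of recorded steps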
Hello, I have been facing a problem where DiscreteActions at index 0 always returns zero. I have not found any material that helps me.
Did you test it with a heuristic?
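A minimal sketch of what I mean (it assumes a single discrete branch and an A/D key mapping, which are just examples; set Behavior Type to Heuristic Only in Behavior Parameters and watch the console):

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using UnityEngine;

    public class DebugAgent : Agent   // hypothetical agent for testing
    {
        // Drive the agent by hand so you control what goes into branch 0.
        public override void Heuristic(in ActionBuffers actionsOut)
        {
            var discrete = actionsOut.DiscreteActions;
            if (Input.GetKey(KeyCode.A)) discrete[0] = 1;
            else if (Input.GetKey(KeyCode.D)) discrete[0] = 2;
            else discrete[0] = 0;
        }

        // Log what actually arrives. If non-zero values show up here in
        // heuristic mode but not during training, the problem is on the
        // training side, not in your agent code.
        public override void OnActionReceived(ActionBuffers actions)
        {
            Debug.Log("Branch 0 = " + actions.DiscreteActions[0]);
        }
    }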
Shouldn't the observations be values between 0 and 1, since I suppose they are the inputs to the network? I saw that you're simply subtracting the positions, which may generate values over 1 or under 0. Other than that, nice project.
Sorry, I do not understand what you are trying to say. Could you try to reword it?
@@theashbot4097 Yeah, sure. If you have worked with neural networks before, you know that the inputs to a network are usually scaled to values between 0 and 1 so that the data does not skew the weights. In your case, the distance to the button in the observation could be larger than one most of the time, but strangely it is working very well. Any idea why?
@@adelAKAdude Most likely Unity is normalizing the observations before it sends them to the neural network.
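If I remember right, ML-Agents can also do this for you: the trainer config has a normalize option that keeps running statistics of the vector observations (a sketch; the behavior name is a placeholder, so double-check your version's docs):

    behaviors:
      MoveToButton:          # placeholder behavior name
        network_settings:
          normalize: true    # normalize vector observations with running mean/variance

Also, a network does not strictly need inputs in [0, 1]; unscaled inputs mostly just make training slower or less stable, which is probably why it still works.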
OK, YouTube got worse after a certain update; I already replied, but whatever. So it's impossible for Unity to do that on its own, because it would need to know the highest value so that all inputs can be divided by it. But it is working, so I'll just try it in my other projects. Thanks.
@@adelAKAdude No problem!
When I try to teach the AI, the game is sped up insanely, which makes it hard to play. How can I disable this?
Do you start the training for the AI by running mlagents-learn? If so, that is the problem. To give the AI a demo you just need to press the play button inside the Unity editor; you do not need to run any Python code.
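If you ever do want actual training to run at normal speed, newer ML-Agents versions have a time-scale option on the trainer command (double-check that your version supports it):

    mlagents-learn config.yaml --run-id=MyRun --time-scale 1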
Ohhh thank you
Hii, help me, I want to ask: how do I implement the DQN algorithm in ML-Agents? Pleasee 😭😭
DQN? I do not know what DQN is. If you tell me what it is, I can help you.
@@theashbot4097 In simple terms, a Deep Q-Network is a deep learning method used to maximize the Q-value. But I'm not asking about the DQN algorithm itself; I'm more curious about how to customize the algorithm used by ML-Agents. The default algorithms in ML-Agents are PPO and SAC. If, for example, I want to use some algorithm XXX in ML-Agents, do I have to create a Python script for the XXX algorithm? And then how do I connect that Python script with ML-Agents? I don't know how, and I haven't found a tutorial that shows it. Help me.
Sorry, I did not see this. I do not know how to do that, sorry.
What a mess ML-Agents is; surely it could be made simpler.
The sad thing is, Unity no longer works on the ML-Agents package! It is all updated by members of the community, but Unity has VERY strict regulations, and they don't tell you what the regulations are. It is really sad, because it had so much potential. I plan on spending the next two-ish years making a plugin for Unity that directly implements TensorFlow and PyTorch in Unity, with no need to mess around with Python. The reason I say two years is that I will be doing other stuff too, and TensorFlow and PyTorch have libraries for C++, but not C#. Sorry about this being such a long response.
Your videos about ML are just a straight-up copy of codingmonkey's videos.
Yep! I made this because I was getting frustrated with his tutorial, so I made this one hoping it would help others. And it has! Do you have a problem with me improving his tutorial?
@@theashbot4097 Broooo, same. His tutorial is good and all, but I got frustrated as well. Thanks for the simplified and more straightforward version.
I prefer this fast tutorial.
@@shi-t Sorry, I did not see this until now. Thank you though! I tried to make it better even though it is similar.
I did everything the same, except that I'm using discrete actions for my AI, and I got the following error:
    torch.nn.functional.one_hot(_act.T, action_size[i]).float()
    RuntimeError: Class values must be non-negative.
No matter what, my training stops after 12 seconds and I get this error. How can I solve it?
Is this an error in the CMD or in Unity?
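If it is coming from the trainer, my guess (not tested against your project) is that a negative value is ending up in a discrete action branch somewhere; one_hot only accepts class values from 0 up to the branch size minus 1. A sketch of what I mean (the agent, branch size, and key mapping are made up):

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using UnityEngine;

    public class JumpAgent : Agent   // hypothetical agent, one branch of size 2
    {
        public override void Heuristic(in ActionBuffers actionsOut)
        {
            var discrete = actionsOut.DiscreteActions;

            // BAD: a negative value is not a valid class index and breaks
            // torch.nn.functional.one_hot during training:
            // discrete[0] = -1;

            // GOOD: keep every branch value in 0..(branch size - 1):
            discrete[0] = Input.GetKey(KeyCode.Space) ? 1 : 0;
        }
    }

Also make sure the branch sizes in Behavior Parameters match what your code writes, and re-record any .demo files after changing them.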
I'm getting the following warnings when trying to use GAIL:
[WARNING] Failed to load for module Optimizer:value_optimizer. Initializing
[WARNING] Did not find these keys ['value_heads.value_heads.gail.weight', 'value_heads.value_heads.gail.bias'] in checkpoint. Initializing.
[WARNING] Failed to load for module Module:GAIL. Initializing
Do you know what the problem is?
Sounds like you have the .yaml file formatted wrong. Those warnings can also appear if you resume a run whose checkpoint was trained without GAIL; starting a fresh run ID (or passing --force to overwrite the old run) should clear them.
Great video! I'm getting an error when trying to train from the cmd console: mlagents\trainers\settings.py", line 41, in check_and_structure
raise TrainerConfigError(
mlagents.trainers.exception.TrainerConfigError: The option gail was specified in your YAML file for TrainerSettings, but is invalid.
That means the formatting or spelling of the file is not right. In particular, gail is a reward signal, so it has to be nested under reward_signals rather than placed directly under the trainer settings.
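A sketch of the right shape (the behavior name, numbers, and demo path are placeholders):

    behaviors:
      MoveToButton:                          # placeholder behavior name
        trainer_type: ppo
        reward_signals:
          extrinsic:
            gamma: 0.99
            strength: 1.0
          gail:                              # gail lives under reward_signals
            strength: 0.01
            demo_path: Demos/ExpertDemo.demo # placeholder path to a recorded demo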
Thank you very much @@theashbot4097!! Now it is working!!